I was in Berlin last week for some corporate meetings, and one evening we visited one of the city’s famous Christmas markets, where I encountered a stand at which a woman was frying donuts. The sign identified this pastry as “Apfelkrapfen”—apple…Krapfs.
Of course I was thrilled. First I took a picture, which seemed to irritate the woman in the stall. Then I approached her and proudly informed her, in my bad German, “MY NAME IS ERIC DONUT!” Since I wasn’t even there to buy one of her pastries, this declaration only seemed to aggravate her more.
If you’re of a certain age, this little vignette may remind you of the debate over whether John F. Kennedy, when he came to the divided city in 1963 and proclaimed, “Ich bin ein Berliner,” was understood by the natives to be boasting, “I am a jelly doughnut.” What I thought of, later, was my daughter, whose after-school job is clerking in a New-Agey store that sells things like aromatherapy and crystals. If someone walked in, pointed at the crystals and proudly proclaimed, “My name is Eric Crystal,” she would be as nonplussed as the woman I encountered in Berlin.
All of which made me realize that we don’t need Artificial Intelligence to screw up our communications. We’re perfectly capable of doing it with our very own squishy human brains and over-loud voices. The problem at this Berlin Christmas market wasn’t that the woman in the stall and I misunderstood each other; we simply had different ideas about the purpose of our communication. I wanted her to be impressed that I had traveled 4,000 miles to discover that I shared a name with her product; she wanted me either to buy a donut from her or to get out of the way so someone else could.
It’s not much of a leap to see how this fits into the discussions we’re having in our industry these days. Companies like Google are doing amazing things with real-time language translation, and almost every major company in the enterprise communications space is incorporating AI into its products in the hope of creating breakthroughs in how our systems route connections between people and deliver information back and forth. The natural place for this to start is the contact center, where smart automatic call distributors (ACDs), aided by information gathered from Interactive Voice Response (IVR) systems, have been doing a less granular version of this job for years.
At Enterprise Connect Orlando 2018, Brent Kelly of KelCor will be leading a session aimed at helping our attendees understand how AI really works, and thus how it may be able to make our communications systems even more powerful and effective at doing their jobs of making connections. Brent has a post on No Jitter this week where he describes how one company, Afiniti, is creating such advanced systems based on AI. We’ve also got sessions on these topics within our contact center track.
I think we’re reaching the point where you can’t ignore emerging technologies like AI anymore. Your enterprise may sit at one end of the continuum or the other when it comes to actually implementing AI-based capabilities. But speech technologies (on which we’ve launched a new track at EC18) have been a constant theme throughout this year in high tech: from the start of the year, when they dominated the Consumer Electronics Show, to year’s end, as these products show up in holiday advertising. And even within our industry, Amazon used its November re:Invent show to launch Alexa for Business.
So I encourage you to register for Enterprise Connect Orlando 2018 and come down to sunny Florida the week of March 12. Face-to-face interaction is always valuable—you might even learn something you didn’t know.