On No Jitter, my colleague Beth Schultz recently took note of a couple of announcements in the contact center AI space, both addressing the challenge of making voice interactions smoother for customers calling into the contact center.
We tend to think about the challenges of communicating with AI as revolving around the way that the AI “thinks.” On the one hand, the Holy Grail has been AI that can pass a Turing test, essentially being indistinguishable from a human interaction. On the other hand, if it gets too close to human, like Google Duplex, that tends to creep us out, and we want the system to inform us point-blank that we’re talking to an AI system.
It’s important to remember that the scenario of customers talking to contact center AI isn’t necessarily the leading application. Instead, AI is more likely to be embedded in the contact center infrastructure, with applications like crunching large amounts of customer data in near-real-time to get callers to the right agent faster than any ACD ever could.
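To make that routing idea concrete, here is a minimal, purely illustrative sketch of data-driven agent matching. It is not any vendor's actual system; the profile fields, skill labels, and weights are all hypothetical, standing in for the kind of near-real-time matching described above.

```python
# Toy "smart routing": score each agent against what we know about the
# caller, instead of round-robin ACD queuing. All names/weights are
# hypothetical and for illustration only.

def route_caller(caller_profile, agents):
    """Pick the agent whose skills best match the caller's likely needs."""
    def score(agent):
        # Overlap between the caller's inferred intents and the agent's skills
        skill_match = len(set(caller_profile["likely_intents"]) & set(agent["skills"]))
        # Small bonus for agents this caller has reached before
        familiarity = 1 if agent["id"] in caller_profile["past_agents"] else 0
        return skill_match * 2 + familiarity

    return max(agents, key=score)

caller = {"likely_intents": ["billing", "cancellation"], "past_agents": ["a2"]}
agents = [
    {"id": "a1", "skills": ["billing"]},
    {"id": "a2", "skills": ["billing", "cancellation"]},
]
best = route_caller(caller, agents)  # a2: matches both intents, plus history
```

The point of the sketch is simply that the scoring happens in software, per call, using data no human dispatcher could weigh in the moment.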
But to the extent that customers might want or need to talk to a chatbot—if they’re in a hands-free situation, say—no matter how “human” the AI seems, the interaction will still fail if it doesn’t take the humanity of the caller into account.
For example, the startup that Beth writes about, Replicant, built its voice AI platform, Thinking Machine, to consider not just natural language processing and speech recognition, but also the rhythms of conversation that humans require. Replicant's developers focused on latency and accuracy, CEO Gadi Shamia told Beth. Their goal was to create an engine that could determine how to respond to a customer comment within 20 milliseconds, "because we need to send the audio file down that phone line and we want to make sure we never have more than one-second latency," he explained.
So it’s not enough to understand the customer’s meaning correctly, process it, and produce a meaningful, accurate response. Replicant has figured out that if the system takes too long to answer, the caller will either give up, repeat the question, or otherwise respond with frustration—defeating the purpose of automating the response.
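The latency discipline Shamia describes can be sketched as a simple budget check: decide on a response quickly, and if the pipeline would blow the one-second budget, fall back rather than leave dead air. This is a hypothetical illustration, not Replicant's API; the function names, thresholds, and the "filler" fallback are all assumptions.

```python
# Illustrative latency-budget sketch (not any vendor's real code).
import time

RESPONSE_DECISION_BUDGET_S = 0.020   # ~20 ms to choose a response
TOTAL_TURN_BUDGET_S = 1.0            # never more than 1 s of silence

def respond(decide, synthesize, utterance):
    """Run decide() then synthesize(), falling back if either blows the budget."""
    start = time.monotonic()
    reply = decide(utterance)                      # fast intent/response decision
    if time.monotonic() - start > RESPONSE_DECISION_BUDGET_S:
        # Decision took too long: play a short conversational filler
        # ("mm-hmm", "one moment") instead of dead air.
        return "filler"
    audio = synthesize(reply)                      # text-to-speech
    if time.monotonic() - start > TOTAL_TURN_BUDGET_S:
        return "filler"
    return audio
```

The design choice worth noting is that the fallback is conversational, not silent: humans tolerate a filler sound far better than an unexplained pause.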
The other announcement Beth writes about is from Inference Solutions, whose Studio 6.2 release features WhatsApp integration along with several other enhancements. One of those is an improved ability to handle noisy environments, another real-world factor that will defeat AI-assisted customer contact if it isn't taken into account in product design.
So AI is expanding in many different directions at once as it begins to be deployed in contact center systems. To my earlier point about the “behind-the-scenes” role of AI in the contact center, you can get a perspective on this side of the issue in an upcoming Enterprise Connect webinar featuring Sheila McGee-Smith, the leading analyst in this space. Sheila’s topic is “Designing Customer Experience for Scalability and Agility,” and one of the points she’ll discuss is the role AI can play in delivering the kind of flexibility that contact centers need when it comes to resource allocation. You can register for the webinar here.