There’s a difference between the customer who calls in and says, “Hello, my schedule changed and I need to reschedule my flight next month,” and the passenger who calls in and says, “Oh my gosh, my flight has been canceled. I need to get on a new flight ASAP!”
So why do we treat them the same in automated voice and chat conversations?
“Sentiment analysis for text and voice has been around for years,” VentureBeat points out. “Any time you call a customer service line or contact center and hear ‘this call is being recorded for quality assurance,’ for example, you’re experiencing what has become highly sophisticated, AI-driven conversational analysis.”
The emotion may be detected, but most conversational AI still responds with the same canned answer: “Ok, let me help you with that.” These standard responses during moments of customer stress or frustration can further erode satisfaction and miss a key opportunity to build customer trust and loyalty.
Generative AI, however, is changing the game. It enables not only better emotion detection but also responses tailored to the sentiment detected. Inflection AI’s new “kind” chatbot is already taking advantage of this capability, suggesting that users seek professional help when they mention key terms connected with mental health.
Our platform, Conversations by NLX, enables brands to offer empathetic responses to customers' frustrating situations (e.g., "My flight was canceled, and I need to be rebooked urgently") by using pre-approved prompts to safely reframe AI-generated responses. For example, the prompt "I can help you change your flight" can be transformed into a more compassionate response, such as "We apologize for the inconvenience caused by the canceled flight. Rest assured that I can assist you in changing your flight." That frustrated passenger? They’re already sighing a breath of relief with that kind of empathetic response!
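In code, that reframing pattern can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not NLX’s actual implementation: a simple keyword check stands in for sentiment detection, and a pre-approved empathetic preamble is added to the base response only when distress is detected. The keyword list, templates, and function names are all assumptions made for the example.

```python
# Minimal sketch of sentiment-aware response reframing.
# NOTE: the keyword list, templates, and names below are illustrative
# assumptions, not NLX's actual implementation.

DISTRESS_KEYWORDS = {"canceled", "cancelled", "urgent", "asap", "stranded"}

# Pre-approved base responses, keyed by intent.
BASE_RESPONSES = {
    "change_flight": "I can help you change your flight.",
}

def detect_distress(utterance: str) -> bool:
    """Return True if the utterance contains any distress keyword."""
    words = utterance.lower().split()
    return any(word.strip(",.!?") in DISTRESS_KEYWORDS for word in words)

def reframe(intent: str, utterance: str) -> str:
    """Prepend a pre-approved empathetic acknowledgment when distress is detected."""
    base = BASE_RESPONSES[intent]
    if detect_distress(utterance):
        return f"We apologize for the inconvenience. {base}"
    return base
```

With this sketch, the routine scheduling request gets the plain pre-approved answer, while “My flight was canceled, I need to be rebooked ASAP!” triggers the apologetic framing first. A production system would replace the keyword check with a real sentiment model and use an LLM, constrained by the pre-approved prompt, to generate the reframed wording.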
While we do encourage the use of generative AI for more contextual responses, we do not encourage the unconstrained use of generative AI responses (e.g., what ChatGPT or GPT-3 might produce out of the box). Harvard Business Review warns broadly about “The Risks of Using AI to Interpret Human Emotions,” including conscious or unconscious emotional bias within AI that can perpetuate stereotypes and assumptions at an unprecedented scale. That is on top of several examples of the risks of using generative AI without guardrails, such as Kevin Roose’s “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” and my example of a real business problem faced by some enterprises.
Our guardrails instead enable brands to confine responses through the strict lens of their own knowledge-base articles. The addition of LLMs – whether ChatGPT, Cohere, AlexaTM 20B, or something else – improves the response based on the initial guideline your brand provides. The cumulative effect is a better, more empathetic customer experience.
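One common way to implement this kind of guardrail is retrieval-grounded prompting: the system first finds the relevant knowledge-base article, then instructs the LLM to answer only from that text. The sketch below is a hypothetical illustration of that pattern, not NLX’s implementation; the knowledge-base entries, the word-overlap scoring, and the prompt wording are all assumptions, and in production the resulting prompt would be sent to whichever model the brand uses.

```python
# Hypothetical sketch of a knowledge-base guardrail for LLM responses.
# KB entries, scoring, and prompt wording are illustrative assumptions.

KNOWLEDGE_BASE = {
    "flight_changes": "Flights may be changed up to 2 hours before departure "
                      "through the app or by speaking with an agent.",
    "baggage_policy": "Each passenger may check two bags up to 50 lbs each.",
}

def retrieve_article(query: str) -> str:
    """Pick the KB article sharing the most words with the query (naive retrieval)."""
    query_words = set(query.lower().split())

    def overlap(key: str) -> int:
        return len(query_words & set(KNOWLEDGE_BASE[key].lower().split()))

    return KNOWLEDGE_BASE[max(KNOWLEDGE_BASE, key=overlap)]

def build_guarded_prompt(query: str) -> str:
    """Constrain the LLM to the retrieved, brand-approved article only."""
    article = retrieve_article(query)
    return (
        "Answer the customer using ONLY the approved article below. "
        "If the article does not cover the question, offer to connect "
        "them with an agent.\n\n"
        f"Approved article: {article}\n\n"
        f"Customer: {query}"
    )
```

For a query like “Can I change my flight before departure?”, the built prompt embeds the flight-change article and the instruction to stay within it, so the model’s empathetic rewording still starts from brand-approved content. Real systems would use embedding-based retrieval rather than word overlap, but the guardrail principle is the same.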
Granted, it’s important to remember that non-sentient technologies like generative and conversational AI are not experiencing empathy themselves when giving a more empathetic response. “The performance of empathy is not empathy,” Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, tells the New York Times. However, the performance of empathy for a better customer experience is better than an experience devoid of empathy that creates instant frustration.
For more information about how you can get started creating the perfect automated customer experience today, please reach out to us via the Contact Us tab.