Conversational AI: What’s Real, and What’s Hype?

An Intellyx BrainBlog for Openstream.ai

… “I didn’t understand that command. Thank you for calling …”

“AGENT! REPRESENTATIVE! AGENT!”

A sad sort of human-to-computer stalemate has played out over countless fruitless interactions. Companies have adopted IVR (interactive voice response) systems over the last few decades, possibly in an attempt to reduce hiring and training costs for human customer service reps, a role with notoriously high turnover.

Or, perhaps some forward-thinking executives thought a robot-voiced CSR would make a company appear more ‘advanced’ than its competitors.

Whatever the reason, our earliest conversations with IVR menus and chatbots left most of us humans feeling let down, as if we weren’t having a conversation at all. Even though voice recognition and computer speech have improved dramatically in speed and sophistication, it’s hard for some of us to shake the feeling that nobody is on the other end of the line to help.

Conversational AI includes a broad range of technologies that seek to make humans and computer systems work better together, by training software to understand and communicate with people using a natural language conversation as the interface.

Recognizing the conversational chasm

Early chatbot and voice systems operated like audio versions of text menus and decision trees. The system ‘agent’ asks questions of the customer, who must respond from a limited set of options the agent can recognize, either to fill in the data fields of a form or to advance to the next menu of options.
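To make that concrete, here is a minimal sketch in Python of how such a decision-tree agent works; the menu names, prompts, and form fields are invented for illustration, not taken from any real IVR product. Only a fixed set of recognized inputs can advance the caller or fill a form field; everything else hits a dead end.

```python
# A purely illustrative sketch of an early IVR-style agent: a decision tree
# of menus where only hard-coded, recognized inputs advance the caller or
# capture a form field. All prompts and field names here are hypothetical.

MENU = {
    "main": {
        "prompt": "Press 1 for billing, 2 for support.",
        "options": {"1": "billing", "2": "support"},
    },
    "billing": {
        "prompt": "Say your account number.",
        "field": "account_number",  # the answer is captured into the form
        "next": "done",
    },
    "support": {
        "prompt": "Press 1 for outages, 2 to return to the main menu.",
        "options": {"1": "done", "2": "main"},
    },
}

def run_ivr():
    node, form = "main", {}
    while node != "done":
        answer = input(MENU[node]["prompt"] + " ").strip()
        if "field" in MENU[node]:               # free-form field capture
            form[MENU[node]["field"]] = answer
            node = MENU[node]["next"]
        elif answer in MENU[node]["options"]:   # recognized menu choice
            node = MENU[node]["options"][answer]
        else:                                   # anything else: stalemate
            print("I didn't understand that command.")
    return form

if __name__ == "__main__":
    print(run_ivr())
```

Note that nothing in this loop ‘understands’ anything: the agent simply matches input against a hard-coded table, which is exactly why an off-script answer triggers the familiar stalemate.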

While sophistication has improved, until very recently most of our interactions with chatbots and translators weren’t powered by AI at all. Process mining and workflow tools, or dedicated interaction design and development teams, could construct complex sequences around user preferences and behaviors, layering a better customer experience on top of whatever software sat behind them.

Just a decade ago, around 2012, the recurrent neural network (RNN) approach to AI rose to prominence. An RNN feeds its own state back into itself at each step, creating short feedback loops for the model to learn from, and chat routines could use it to algorithmically improve responses to any query, including text and audio cues.
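As a rough illustration of that feedback loop, here is a toy NumPy sketch (not any production system; the vocabulary size, state size, and random weights are all arbitrary assumptions). The key point is that the hidden state h is carried forward from step to step, so each prediction is conditioned on everything that came before:

```python
import numpy as np

# Toy forward pass of an RNN. The hidden state h is the "feedback loop":
# it is updated from the previous h at every step, giving short-term memory.
rng = np.random.default_rng(0)
vocab, hidden = 16, 8                                # hypothetical sizes
W_xh = rng.normal(scale=0.1, size=(hidden, vocab))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden (the loop)
W_hy = rng.normal(scale=0.1, size=(vocab, hidden))   # hidden -> output scores

def step(h, token_id):
    x = np.zeros(vocab)
    x[token_id] = 1.0                    # one-hot encode the input token
    h = np.tanh(W_xh @ x + W_hh @ h)     # fold the new input into memory
    logits = W_hy @ h                    # scores for the next token
    return h, logits

h = np.zeros(hidden)
for token in [3, 7, 1]:                  # a made-up input sequence
    h, logits = step(h, token)
print("predicted next token:", int(np.argmax(logits)))
```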

Text recognition blossomed into natural language understanding (or NLU), which provides a deeper semantic understanding of written and spoken words – their definition, declension and tense within the language’s grammar – the same footing on which humans develop language.
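For a concrete taste of what NLU surfaces, here is a short sketch using the open-source spaCy library (my choice of tool, not one named in this series; it assumes spaCy and its small English model are installed). It recovers each word’s lemma, part of speech, and morphology such as tense and number:

```python
# Sketch of NLU-style analysis with spaCy. Requires:
#   pip install spacy
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("She spoke to the agents yesterday.")

for token in doc:
    # lemma = dictionary form; pos = part of speech; morph = tense, number, etc.
    print(f"{token.text:10} lemma={token.lemma_:8} pos={token.pos_:6} {token.morph}")
# e.g. "spoke" -> lemma "speak", VERB, Tense=Past; "agents" -> Number=Plur
```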

AI was finally starting to cross the conversational chasm between people and systems, but something was still missing…

Read the entire BrainBlog (Part 1 in the series) at Openstream.ai here >


Principal Analyst & CMO, Intellyx. Twitter: @bluefug