Artificial Intelligence Getting You Down? Try Augmented Intelligence

In the decades since Alan Turing proposed his Imitation Game – the ‘Turing Test’ for artificial intelligence (AI) – people have been confused about the very purpose of AI itself.

At issue: whether the point of AI is to simulate human cognition and communication so seamlessly that it can fool people into thinking they are interacting with a human being, rather than a piece of software.

Such deception was never the point of Turing’s exercise. Rather, he realized that there was no way to define general intelligence, and thus no way to test for it. So, he came up with the game as a substitute – a test people could apply to conversational systems.

We may not be hiding today’s AI behind the Turing Test’s curtain as a rule, but many innovators still treat ‘good enough to fool people’ as the goal of their software.

Just one problem: nobody likes to be fooled.

Do We Want Virtual Assistants to Fool Us?

In many cases, conversational AI technologies like virtual assistants actively seek to fool people. If you call a big company’s toll-free number, the reasoning goes, then you, the consumer, will have a better experience if the voice on the other end of the line can carry on a real conversation.

Here’s the rub: in reality, there’s a line between human-like and human-fooling behavior, and if a virtual assistant crosses the line, it becomes annoying.

If that smooth voice apologizes for not understanding me, with a touch of contrition in its tone, it only makes me feel better if I believe the assistant actually understands my dilemma. Otherwise, it can’t be a true apology. Such behavior can be worse than silence.

What Do We Want from AI-Driven Conversations?

What we want from such voice interfaces is helpfulness, just as anyone would want when they ask their phone for directions.

Additional verbiage or emotional nuance may ease my mind, but not if it’s solely meant to fool me into thinking I’m interacting with a human. For example, when I’m filing an insurance claim, it is important for a virtual assistant to acknowledge the loss associated with that claim – not as a social nicety, but because understanding the scope of that loss is key to handling the claim properly.

Indeed, when people interact with a virtual assistant, they are rarely just chatting; they are usually trying to accomplish a particular goal. In many situations, the goal may simply be a factual answer to a question: What will the weather be like tomorrow? How much money do I have in my bank account?

The focus of such AI, therefore, should be collecting, correlating, and analyzing information about the world in order to answer questions within whatever domain the virtual assistant covers. Sometimes, as in the insurance example above, events in the caller’s life deserve explicit recognition.

Often, however, the human’s goal is not merely to ask a question but to complete a task: for example, pay my electric bill or approve a request from a colleague. An important requirement for the AI in such situations is context. Which account is my electric bill on? Do I normally pay with a credit card or a bank account? What considerations might impact the approval of a request?

In such situations, dialogue between the human and the virtual assistant is necessary, and must follow a plan that will ultimately lead to the completion of the task.
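To make that concrete, here is a minimal, hypothetical sketch in Python of what such plan-driven, context-gathering dialogue might look like for the ‘pay my electric bill’ example. Every name in it – TaskPlan, the slots, the prompts – is an illustrative assumption of ours, not Openstream’s (or anyone else’s) actual API.

```python
# A minimal, hypothetical sketch of plan-based, slot-filling dialogue.
# All names here are illustrative; this is not any vendor's actual API.

from dataclasses import dataclass, field

@dataclass
class TaskPlan:
    """A goal ('pay electric bill') plus the context slots needed to complete it."""
    goal: str
    slots: dict = field(default_factory=dict)          # context gathered so far
    required: tuple = ("account", "payment_method", "amount")

    def next_question(self):
        """Return (slot, clarifying question), or None when the plan is complete."""
        prompts = {
            "account": "Which account is your electric bill on?",
            "payment_method": "Should I use your credit card or bank account?",
            "amount": "How much would you like to pay?",
        }
        for slot in self.required:
            if slot not in self.slots:
                return slot, prompts[slot]
        return None

    def execute(self):
        return f"Done: {self.goal} ({self.slots})"

# Usage: the assistant drives the dialogue until the goal is achievable.
plan = TaskPlan(goal="pay electric bill")
# Known context (e.g., past user behavior) can pre-fill slots without asking:
plan.slots["payment_method"] = "bank account"          # learned preference

while (step := plan.next_question()) is not None:
    slot, question = step
    answer = input(question + " ")                     # in practice: NLU, not raw input
    plan.slots[slot] = answer

print(plan.execute())
```

The point of the sketch is the structure: the plan knows which pieces of context the goal requires, pre-fills what it already knows from past behavior, and asks only for what is missing before acting.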

Openstream.ai is a good example of a software company that understands the importance of such knowledge-driven, contextually situated, goal-oriented AI. The conversational technology in Openstream’s Eva focuses on helping users accomplish their goals and on establishing the conditions necessary to achieve them.

Eva offers a goal-based dialogue engine with pre-trained industry and domain vocabularies and securely extracts knowledge from documents, conversations, and other multimedia sources.

The product is also context-aware, with the ability to respond to situations and complete tasks by recognizing the user’s goals and executing plans, accounting for past user behavior and the context of the situation.

The Intellyx Take

When an AI technology focuses solely on mimicking human behavior, its creators are essentially saying that they are trying to develop technology proficient at the same tasks humans already are, thus limiting the technology’s potential power and efficacy.

In contrast, AI will be more powerful if it excels at capabilities we humans might wish we had but do not. Mimicry can be a barrier to achieving that vision unless cognitive agents are able to explain their reasoning and discuss their motivations for the actions they take.

Ultimately, no matter how exceptional the AI, such technology will never replace humans, but rather complement them. It must be able to understand the desires of its users and produce a comprehensive experience that is attuned to how humans perceive the world.

In fact, we would suggest that the goal of AI is not to provide ‘Artificial Intelligence’ but to ‘Augment Human Intelligence.’ When we say artificial, we mean human-made, as opposed to the natural intelligence humans have. This distinction identifies two kinds of intelligence that the industry has now pitted against each other.

Augmenting intelligence is a different strategy from mimicry-centric AI. Augmented intelligence reinforces the idea that the only intelligence humans have is the real, live, natural intelligence we are born with. AI – no matter how smart it gets – can only serve to augment it.

Furthermore, augmented intelligence connotes that AI is a human tool – a tool we might use for good or evil like all the other tools at our disposal, but a tool in human hands nevertheless.

And that’s a good thing. Would you rather AI be a tool in human hands, or the other way around?

Copyright © Intellyx LLC. Openstream is an Intellyx customer. Intellyx retains final editorial control of this article.
