Should AI Fool You? Think Again

In the 67 years since Alan Turing proposed his Imitation Game – the famous ‘Turing test’ for artificial intelligence (AI) – people have been confused about the very purpose of AI itself.

At issue: whether the point of AI is to simulate human behavior so seamlessly that it can fool people into thinking they are actually interacting with a human being, rather than a piece of software.

Such deception was never the point of Turing’s exercise, however. Rather, he realized that there was no way to define true intelligence, and thus no way to test for it. So he came up with the game as a substitute – something people could theoretically test for.

Regardless of Turing’s intentions, setting the bar for AI based on its ability to snooker an audience has become fully ingrained in our culture, thanks in large part to Hollywood.

The AI We Love – and Love to Hate

Metropolis debuted the year Alan Turing turned fifteen. Did it influence his choice of career? We can only wonder.

Ever since the 1927 film Metropolis, filmmakers have realized that humans have to play the role of any intelligent machine – simply because having a machine do its own acting makes for bad theater.

We simply want and expect humans to play all the characters in our entertainment, regardless of whether they appear to be machines or animals or celestial beings, or any other character, anthropomorphic or not.

It’s no wonder, then, that we crave AI as intelligent as Star Trek’s Lt. Cmdr. Data or Star Wars’ C-3PO, and we fear AI the likes of The Terminator’s Skynet or HAL 9000 from 2001: A Space Odyssey. For good or evil, our context for AI is a machine with convincing human traits, because we require a human to play the part.

A popular trope in such productions is the argument over whether such a computer is truly a self-aware, sentient being – or simply programmed to act that way.

If the former, then we must assign it the rights we assign humans. If the latter, then it is merely a machine, unworthy of even the most basic courtesies due a human. After all, there’s no point in thanking or cursing a machine, is there?

Turing realized that he couldn’t answer this question, even if he had the luxury of a tête-à-tête with Data and C-3PO in person. Instead, he proposed the Imitation Game as a thought exercise that suggested a question he could answer – not as a goal for AI.

Today’s AI technology, in any case, is nowhere near Data or C-3PO or Skynet or any of the other human-like, AI-driven machines of fiction. Nevertheless, in spite of Turing’s true intentions, human-like behavior accurate enough to fool people remains a primary goal of many AI initiatives, for better or worse – and mostly for the worse.

‘Human-Fooling’ vs. ‘Human-Like’ Behavior

We may not be hiding today’s AI behind the Turing test’s curtain as a rule, but many innovators still use the ‘good enough to fool people’ metric as a goal of their software.

However, a closer look at the current state of the AI market makes it clear that there’s a difference between merely ‘human-like’ behavior and behavior that could actually fool people into thinking the AI was a person.

Controlling our smartphones or Amazon Echo devices with voice commands is one example. Yes, such devices answer with a human-like voice, but their creators aren’t trying to fool anyone into thinking the devices are actually sentient – nor should they.

Image recognition, including facial recognition, is another example. Yes, we appreciate the human-like ability of a computer to identify a person in a video – again, without any expectation of fooling anyone into believing such software has anything resembling human intelligence.

At the other extreme, technologies like virtual assistants actively seek to fool people. If you call a big company’s toll-free number, the reasoning goes, then you, the consumer, will have a better experience if the voice on the other end of the line can carry on a real conversation.

Here’s the rub: in reality, there’s a line between human-like and human-fooling behavior, and if a virtual assistant crosses the line, it simply becomes annoying.

If that smooth voice apologizes for not understanding me, with a touch of contrition in the tone of its voice, I don’t actually feel better – because it’s not a true apology. It is by definition spurious. Nobody is actually sorry.

What we actually want from such voice interfaces is language understanding, accuracy, and efficiency – just what anyone asking their phone for directions would want. I don’t want additional verbiage or emotional nuance meant solely to fool me into thinking I’m interacting with a human.

Do We Want ‘Artificial Stupidity’?

Predictably, various organizations have staged Turing tests over the years, offering prizes to the program best able to fool people into thinking it was human. In 1991, a simplistic program won the first Loebner Prize for AI in large part because it shrewdly inserted typos into its output.

The hapless humans judging the contest were fooled, of course – not because of the program’s intelligence, but because of its programmed stupidity.

Typos or no, programmers entering such contests have long realized that their AI programs couldn’t appear to be too smart, or people would immediately conclude they were interacting with a machine. So the coders would intentionally dumb down the output in hopes of a more convincing human simulacrum.
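To make the trick concrete, here is a minimal Python sketch of how that kind of deliberate ‘typo injection’ might work. It is purely illustrative – the key map, function name, and error rate are my own assumptions, not the actual code of any Loebner Prize entry.

```python
import random

# A toy illustration of the 'insert typos to look human' trick.
# Hypothetical code: the key map and error model are assumptions for
# illustration, not the 1991 Loebner Prize program.

ADJACENT_KEYS = {
    "a": "s", "s": "a", "e": "r", "r": "e", "i": "o",
    "o": "i", "t": "y", "n": "m", "m": "n", "c": "v",
}

def humanize(text, error_rate=0.05, seed=None):
    """Randomly replace a few characters with neighboring keys, mimicking fat-finger typos."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in ADJACENT_KEYS and rng.random() < error_rate:
            out.append(ADJACENT_KEYS[ch])  # substitute an adjacent-key 'typo'
        else:
            out.append(ch)
    return "".join(out)

if __name__ == "__main__":
    reply = "I am not sure I understand what you mean by that."
    print(humanize(reply, error_rate=0.15, seed=42))
```

The point of the sketch is how little intelligence is involved: the program’s answers stay exactly as smart (or dumb) as before; only the surface presentation becomes more plausibly human.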

The question we’re facing today is whether there are any true business contexts for AI where we really want to dumb our programs down in order to make them sufficiently human-like to fool people.

Taking as a given that killer robots are still well in the future, the obvious answer is no: today’s AI is barely smart enough as it is, without our deciding to make it stupider on purpose.

On the contrary, researchers and vendors alike are actively building ever-smarter AI – but not smarter in the sense of being better able to fool people.

After all, there are other characteristics of human intelligence that we’re actively pursuing in our AI advancement, including better understanding of human language, judgment sufficient to make business decisions, and the most elusive goal of all: simple common sense.

In some cases, today’s AI can actually exceed human ability. Google Translate, for example, cannot yet match the skill of human translators for any given pair of languages, but its remarkable capacity to translate among 103 languages exceeds that of any individual human translator.

AI-based judgment also generally falls short of human instincts, except when it comes to making judgment calls based upon vast quantities of data. To be sure, the combination of AI and Big Data far exceeds our ability to leverage large data sets with our all-too-small human brains.

Common sense, of course, is one area where humans still run circles around the best AI available today. Stories like the driverless car that broadsided a white semi because it couldn’t distinguish the trailer from the sky remind us of this limitation all too frequently.

The good news: several vendors are pushing the limits of how much common sense AI can exhibit. Mark my words: when the technology advances to the point that an AI’s common sense is better than a human’s, we will all breathe more easily.

The Intellyx Take

Beyond the human qualities we would like AI to exhibit and eventually excel at, perhaps more powerful are the qualities we poor humans might wish we had ourselves, but do not.

Rapid, tireless processing of data, of course, is one area where computers have vastly exceeded human capabilities for many years now – and with the addition of AI, that lead will only grow over time.

Other examples of AI excelling where humans are weak are less obvious. In today’s fake news-infested world, for example, bias-free reasoning is a capability our AI may gain, even though humans are inevitably biased in our thinking.

Then again, we may not appreciate unbiased programs, as they are likely to disagree with our own biased perception of the truth. Such is the nature of human bias.

A third area – and perhaps the most controversial – is AI’s potential ability to make itself smarter. True, as humans we can educate ourselves, making us more knowledgeable and, with experience, perhaps even wiser. But in terms of sheer smarts, we’re pretty much stuck with what we’re born with.

AI, however, faces no such biological limitations. Research progresses on programs that are smart enough to write other programs – and it’s only a matter of time until we have code that can write code-writing software.

Somewhere down this road lie the killer robots of Elon Musk’s worst nightmares, to be sure. In my opinion, however, there will be a significant interval of years or even decades between the current state of the art and Skynet – perhaps even a coming ‘golden age’ of AI, where our world will experience transformative benefit from AI that behaves little or nothing like humans.

On a final note, AI that excels at tasks humans are poor at, rather than AI that mimics human capabilities too closely, isn’t as likely to take many jobs away from people – and in those cases where such AI does replace a human worker, the argument for keeping that person in the job would have been thin anyway. And that’s no fooling.

Copyright © Intellyx LLC. Intellyx publishes the Agile Digital Transformation Roadmap poster, advises companies on their digital transformation initiatives, and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers.
