Why Overcoming Algorithm Aversion is the Key to Unleashing the Disruptive Force of AI

One of the things I love about being a keynote speaker is that it allows me to travel around the world and talk to both enterprise leaders and IT practitioners about their day-to-day realities.

As I do, it’s always fascinating to observe the gap between the aspirational ideas that we write about and how things are playing out in the real world. And one of the most significant gaps I hear about today is the one between the hype and reality of artificial intelligence (AI).

While there is plenty of bluster and bravado kicked around about how AI will transform everything in the enterprise, the reality on the ground is much more sober. This gap between talk and truth has left me wondering: is this just the all-too-familiar story of the hype outrunning reality?

Sweating through a stair-stepper workout the other day, I stumbled across another possible explanation.

As I marched up the make-believe mountain I was supposedly climbing, I listened as Stephen J. Dubner explained on an episode of the podcast Freakonomics Radio why projects are always late. As he stepped through his hypothesis, one of his guests introduced me to what I believe may be a significant cause of this gap: algorithm aversion.

The Planning Fallacy, Optimism Bias, and Operational Chaos

Before I explain exactly what algorithm aversion is and how it may be negatively impacting the actual adoption of AI, it’s worth diving into what led Dubner and his guest to introduce the concept and to understand its relevance to the challenges facing enterprise leaders.

The core question the podcast explored was why major projects are so consistently completed behind schedule and over budget. As Dubner uncovered, the two primary causes are what his guests described as the planning fallacy and optimism bias.

The planning fallacy is the “tendency to underestimate the time it will take to complete a project while knowing that similar projects have typically taken longer in the past. So it’s a combination of optimistic prediction about a particular case in the face of more general knowledge that would suggest otherwise.”

Likewise, the optimism bias is just what it sounds like: a bias toward “seeing the future in rosy terms.”

According to Dubner and his guests, these two prevalent human conditions lead us to naturally overestimate our abilities, discount potential problems, and underestimate the difficulties in overcoming those challenges. The net result is projects that consistently take longer and cost more to complete than we expect.

As I listened to the podcast, the little lightbulb in my head started to flicker. The impact of these two tendencies extends far beyond missed project timelines and budgets. They actually affect nearly everything we do when it comes to deploying and managing technology.

Every IT leader and practitioner I have spoken with over the last several years agrees that the technology stack is becoming exponentially more complex and challenging to manage. So I've often been left scratching my head at the relatively slow adoption of modern technologies that would help overcome this management gap.

I believe that it is these same two conditions kicking in. Even though we can all see the growing complexity, the planning fallacy and optimism bias fool us into thinking that it is somehow still a manageable situation. The result, as we are now seeing play out with increasing regularity, is a growing level of operational chaos as enterprise leaders consistently underestimate the challenges and overestimate their ability to handle them.

But is there a way out of this bind?

AI to the Rescue — or Not

According to the industry press and tech community, the answer is pretty straightforward: apply AI.

Managing the complexity challenge is essentially a math problem. Or, more accurately, it’s a data problem. As the technology stack becomes more complex, the answer — the only answer, in my opinion — is to let machines do what they’re best at and sort through all of the operational data enterprises now create to identify the patterns that humans cannot find or cannot find fast enough.

Virtually everyone agrees that this is the pathway forward. So then why the gap between all the talk about AI and the reality on the ground?

There are a few issues. First, of course, is the need for clean, contextual data. This need is a real challenge, but also an eminently solvable one.

Second, the technology itself is still evolving. There are unquestionably high levels of hype as tech companies overpromise and under-deliver when it comes to AI-based solutions (not to mention the fact that AI is a broad category of technologies rather than one specific thing). Nevertheless, brush aside all the hype and hyperbole, and there remain very real AI-driven technologies that can deliver immediate and meaningful results for organizations.

So what’s the real culprit causing this gap between AI potential and reality? I believe it’s algorithm aversion.

As Katherine Milkman, professor at the Wharton School of the University of Pennsylvania, explained on the podcast, algorithms are the answer to overcoming the complexity challenges, the planning fallacy, and optimism bias — but people are averse to using them for “all sorts of reasons that…are a little crazy.”

I believe that this is at the heart of the relatively slow adoption of AI-based technologies that we are seeing in the enterprise: there is an underlying fear that AI and algorithms can’t be trusted and that it is, therefore, better to trust human judgment.

The proof, so to speak, is in the number of tech companies that stop short of any actual algorithmic or AI-driven automation. The objective of these systems, instead, is to provide options and insights to human operators, who must then make the decisions and take the actions.

The problem is that in most cases, turning it back over to a human simply reintroduces the planning fallacy and optimism bias that resulted in operational chaos and bad decision-making in the first place.

The Intellyx Take: Overcome the Emotional and Cultural Issues First

I will be the first to admit that we are still in the very early days of adopting AI in earnest within the enterprise. This incredibly powerful class of technology still has a long way to go before I would trust it with life-or-death or truly mission-critical situations.

And it is also true that there are still some technological hills to climb when it comes to full-fledged, at-scale adoption of these technologies, most notably in organizations’ ability to harvest, catalog, control, and maintain a fast-moving data stream.

This situation, however, is evolving quickly as tech companies work to improve the technologies themselves and our ability to leverage them.

Those temporary challenges will not be what stops most enterprises from leveraging this technology to transform their organizations — as they must — for the digital era.

Instead, it will be the much more mundane, yet much more powerful emotional and cultural issues contained within this idea of algorithm aversion.

While enterprise leaders must continually push the boundaries of these technologies to create disruptive opportunities now and going forward, if they want to lead their organizations into the future, they should be putting more energy into overcoming these emotional and cultural barriers. These barriers represent the much more significant threat to their survival — and, if overcome, are the key to unleashing the disruptive force of AI.

Copyright © Intellyx LLC. Intellyx publishes the Agile Digital Transformation Roadmap poster, advises companies on their digital transformation initiatives, and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article are Intellyx customers. Image source: Gerd Altmann via Pixabay.
