Don’t Let GenAI Tank Your Digital Transformation

For more than a decade, digital transformation was the hottest buzzword in enterprise boardrooms around the world. Every organization wanted to leverage technology to better align its business with customer needs. Putting the customer first – always a solid business strategy – was now mandatory.

Then along came generative AI (genAI). Suddenly, digital transformation was no longer the hot new craze.

GenAI promises technology smart enough to provide human-like interactions across the entire enterprise landscape, from customer service to software development.

Just one problem: for many of its use cases, genAI substitutes for humans in customer interactions, thus pushing those customers away rather than putting them first.

As a result, genAI can tank your hard-won digital transformation progress – unless you move proactively to avoid its numerous pitfalls.

Digital Transformation Done Right

Despite its tech-laden name, digital transformation is a type of business transformation that can help all corners of the enterprise better align with customer priorities.

Such transformation is difficult, and many organizations fail in their efforts. The successful ones, however, all have one thing in common: a delightful user experience (UX).

Whether it’s a mobile app, an in-store experience, or the broader digital journey customers take with a company, providing them with a delightful UX is fundamental – and a less-than-stellar UX is a sure sign the transformation effort went off the rails somewhere.

Especially in the business-to-consumer (B2C) space, but also in business-to-business (B2B) and business-to-employee (B2E) contexts, one fundamental principle drives delightful UX: let the user choose whether interactions with a company are human or automated.

In many cases, customers want automated interactions. From checking bank balances to choosing what streaming video service to watch, we consumers want the experience to be streamlined, efficient, and automated.

But when we want human interaction from a company, only a human will do – and furthermore, that human interaction must be efficient and focused on our needs as customers.

Getting human interactions right is perhaps the greatest challenge to delivering delightful UX, especially for B2C companies.

Even supposedly digitally transformed organizations still put customers on hold, send them ‘don’t reply to this email’ messages, subject them to voice response hell, or staff customer service reps who can’t solve the customer’s problem the first time.

GenAI to the Rescue?

GenAI is the perfect solution to such problems, right? That’s the conclusion many organizations are coming to – but it is largely incorrect.

When customers want to speak with a real person, they do not want to speak to a bot, no matter how well that bot can pretend to be human.

If you listen to the surging throng of genAI vendors, you’ll hear a different story. Their ‘conventional’ wisdom is that the solution to all your customer interaction ills is simply a better bot.

Assuredly, genAI-based solutions are maturing rapidly. The bots are getting better at pretending to be human. But no matter how good the tech becomes, bots aren’t human. They are simply getting better at fooling people into thinking they are.

Don’t get me wrong – genAI-based bots are revolutionizing customer service along with all other interaction modalities between companies and their customers.

The question isn’t whether such bots can provide value to customers or save companies money. The question is whether such bots can improve the UX.

We may only be a year into the genAI revolution, but it’s already clear that interacting with bots against user preferences leads to a poor UX – and if genAI is making your UX worse, then you can kiss your digital transformation goodbye.

Poor GenAI-Based UX Beyond Customer Service

Interacting with customer service bots, via voice, chat, text, or other interaction channels, is only part of the problem.

AI-generated content has already garnered a poor reputation, and for good reason. ‘Authors’ rapidly swamped Amazon.com with AI-generated books for readers of all ages – but neither adults nor children want such ‘fake’ content when real (human-generated) content delivers a better UX.

Poor genAI-based UX crops up in marketing channels as well. Sending customers genAI-generated emails when they haven’t opted in to receive them is as bad as spam – and indeed, much of today’s spam is itself genAI-generated.

Even marketing copy, advertisements, graphics, and videos can deliver a poor UX when the target audience doesn’t want to receive such AI-generated content. Once the novelty of genAI-based copy wears off (heads up: it already has), then it’s likely to do more harm than good.

Just because genAI is the next disruptive shiny thing doesn’t mean you should deploy it wherever you can. Think through the impact on UX first.

The Two Principles of GenAI-Powered Digital Transformation

The thread connecting these various genAI anti-patterns is that this technology is good at fooling us, and we don’t like to be fooled.

This truism leads to the first principle of genAI-powered digital transformation: transparency.

Let customers know when they are interacting with AI bots or generated content – and also when they are interacting with real, live humans. In many situations, people will choose to interact with AI of their own volition – as long as you’re not trying to pull one over on them.

The second essential principle is empowerment. Instead of giving people the output of genAI (whether they want it or not), give them AI-based tools that they can choose to use as they see fit.

The empowerment principle is particularly important in B2E scenarios. Give employees tools like copilots and AI-based search engines, instruct them on their use (if necessary), and ensure that all interactions with such AI are transparent.

Contrast a search tool that returns AI-generated content without users’ knowledge or consent (bad UX) with an AI-based search engine that users can configure as they see fit to get the results they want (good UX).

Empowerment is mandatory in B2C scenarios as well. For example, every customer service chat window should offer explicit options for human or bot interactions. Don’t force customers to interact with a bot before you transfer them to a human.

Be transparent and empower your customers to make the choice. They’ll thank you for it.

The Intellyx Take

As genAI (along with other forms of AI) matures and grows more powerful, the principles of transparency and empowerment become even more important.

It won’t be long until the technology is so good that people simply cannot tell the difference – whether they’re reading a book, watching a movie, or getting customer service from a company.

Don’t fall into the trap of assuming that as long as people can’t tell they are interacting with AI, it’s somehow OK to fool them – no matter how good the technology becomes.

If we implement these core principles today, we’ll have gone a long way toward addressing many of the concerns about AI becoming too powerful – assuming, of course, we have the will to stick to those principles.

Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. As of the time of writing, none of the organizations mentioned in this article is an Intellyx customer. No AI was used to write this article. Image credit: Craiyon.
