AI-driven discernment, or ML-trained discrimination?

Today we are seeing every industry being transformed by new digital business models and by vendors offering some form of AI-driven capability.

We want AI to fill in the gaps in our discernment, and augment human capabilities. Our silicon-brained helpers should spot patterns and key indicators in absurdly large data sets, with better attention to detail, for faster and better decisions than humans can muster with our meaty brains.

Unfortunately, in many instances this transformation includes an unwelcome side effect: AI discrimination, or AI bias. 

Discerning a blurry line

We talk to hundreds of vendors every year, and mentions of AI-driven functionality in our briefings became a common occurrence – though one that has fallen off quite a bit in the last few years.

About one-third of the vendors talked about AI as a core part of their offering in 2022, compared to about half in 2019. Likely, much of this drop is a result of “AI washing” of product claims that didn’t actually deliver AI functionality or expected value to business buyers, leading some vendors to home in on better feature definitions such as intelligent automation, and leaving a smaller set of vendors with AI models to back up their claims.

However, in the consumer and public-facing software markets, concerns about AI bias have also tempered the optimism of a few market positioning statements.

Vendors that offer some form of AI fall into three broad categories:

  • System-side AI: Solutions like AIOps, FinOps, software observability or XDR threat prevention operate primarily on the enterprise or cloud backends of software and are used by DevSecOps teams. These use cases can fail businesses if they don’t work correctly, but they don’t generally deal with or affect end users, so I’ll leave them out of this discussion.
  • Applied AI: This category includes most intelligent applications and algorithms that deal with real-world situations – employment apps, self-driving cars, mortgage and real estate apps, criminal justice databases – where AI bias can directly affect customers and the public.
  • Machine learning: This is the training side of AI inference, where large datasets are gathered and tagged to ‘train’ the AI models, or to provide feature sets the AI can pull from to build behaviors. Since an analysis or decision is only as good as the data that feeds it, bias in ML training data feeds directly into AI bias (see the short sketch after this list).
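
To make that last point concrete, here is a minimal, hypothetical sketch in Python – the data, zip codes and the frequency-based “model” are all invented for illustration – showing how a skewed training set carries its bias straight into whatever gets trained on it:

```python
from collections import defaultdict

# Historical decisions, drawn mostly from one affluent zip code (hypothetical data).
training_data = [
    # (zip_code, approved)
    ("94027", True), ("94027", True), ("94027", True),
    ("94027", True), ("94027", True), ("94027", False),
    ("60621", False), ("60621", True),  # far fewer examples from this area
]

# "Training": learn an approval rate per zip code -- a stand-in for any model
# keyed on features that correlate with protected attributes.
counts = defaultdict(lambda: [0, 0])  # zip -> [approved, total]
for zip_code, approved in training_data:
    counts[zip_code][0] += int(approved)
    counts[zip_code][1] += 1

def approval_score(zip_code: str) -> float:
    approved, total = counts.get(zip_code, (0, 0))
    return approved / total if total else 0.0

# "Inference": otherwise identical applicants score very differently,
# purely because of who was (and wasn't) in the training data.
print(approval_score("94027"))  # ~0.83
print(approval_score("60621"))  # 0.50, backed by almost no data
print(approval_score("10453"))  # 0.00 -- no data at all, scored as a rejection
```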

Vendors often retreat to a blanket statement about how their AI solutions have the best of intentions. Even so, there’s still plenty of liability space for bias to pop up in unexpected ways. Take, for instance, an AI engine in a mortgage eligibility app whose model leverages another firm’s ML user data features, gathered mostly from straight white men with professional jobs in a particularly affluent suburb.

Talent and hiring bias

The most notable cases of AI bias occur within fields where human-biased discrimination has been reported all along: real estate, criminal justice, and especially in recruiting and hiring employees. 

The fact that almost all submitted resumes must now pass through some form of AI-based job screening service that filters for certain keywords, before they are ever seen by a recruiter or hiring manager, is rather chilling. 

Amazon had to scrap one of its own resume review algorithms in 2018 because it based its criteria for success on resumes received over the previous 10 years, most of which happened to come from men, among other group similarities.

Even algorithms attempting to level the playing field can still make mistakes by improperly using personal information, or by overcorrecting and creating a whole new set of problems.

Who sees the advertising?

Remember when Facebook allowed advertisements to be filtered based on “race preference”? They took down that feature after a barrage of complaints.

From the point of view of a marketer, of course, I would always strive to make any advertisement I pay for reach its ideal audience with maximum efficiency.

There is not an inherent problem with positioning a product that is actually tailored to meet the needs of a specific ethnicity, religion, gender or sexual preference. Nor is it a problem to advertise on a site that caters to a niche audience.

The stakes are higher when an AI is targeting ads across a broader public network (e.g. Facebook, Google, LinkedIn, Twitter) and somehow filters individuals based on the above demographics for offerings like mortgages, housing or jobs, which have very strong equal access protections in these United States and elsewhere.

Fortunately, private companies and public institutions are forming around these problems, such as the non-profit Partnership on AI (PAI), which has published research on how sensitive attributes like race, gender, sexuality, and nationality can still harm marginalized people and groups, even when used to instruct anti-bias AIs.

Major consumer brands like Nike and Netflix are investing in new trust-recognizing AIs to help reduce bias in advertising, focusing their ads more around repeat customers with high loyalty scores, and using what works there to attract more loyal new customers.

The Sharing Economy, or the selfish one?

When it comes to the economy of ride-sharing and home-sharing, personal bias keeps playing out at a platform-wide level.

Big city taxi drivers have long been famous for not picking up riders based on race (the Lenny Kravitz tune “Mister Cab Driver” comes to mind here). With ride-sharing systems like Uber or Lyft, a driver (or a rider, for that matter) can refuse or cancel a ride – perhaps based on the rider’s (or driver’s) profile picture, or name. Even if the platform does not promote discrimination, it can make it a lot easier for a seller or buyer to do so.

Back in 2016, a study of 1,500 rides in Boston and Seattle on these services showed that African American males were three times as likely to have their rides canceled, and on average waited 30% longer for rides than white males.

For their part, Uber responded by hiding some of those identity-dependent features to reduce ride discrimination. In the long run, there could be less discrimination than the analog version of having a taxi pass someone by on the street, because the platform can monitor usage patterns and discipline the “bad actors” who unfairly drop rides.
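
That kind of platform-level monitoring does not require exotic tooling. As a minimal, hypothetical sketch – the log format, driver IDs, group labels and review threshold are all invented for illustration – a platform could periodically compare each driver’s cancellation rates across rider groups and flag large gaps for human review:

```python
from collections import defaultdict

# Assumed log shape (hypothetical): (driver_id, rider_group, cancelled_by_driver)
rides = [
    ("drv1", "group_a", False), ("drv1", "group_a", False),
    ("drv1", "group_b", True),  ("drv1", "group_b", True), ("drv1", "group_b", False),
    ("drv2", "group_a", False), ("drv2", "group_a", False),
    ("drv2", "group_b", False), ("drv2", "group_b", False),
]

# Tally cancellations per driver and rider group.
stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))  # driver -> group -> [cancels, rides]
for driver, group, cancelled in rides:
    stats[driver][group][0] += int(cancelled)
    stats[driver][group][1] += 1

# Flag drivers whose cancellation rate differs sharply between rider groups.
REVIEW_GAP = 0.3  # arbitrary threshold for this sketch
for driver, groups in stats.items():
    rates = {g: c / n for g, (c, n) in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    if gap > REVIEW_GAP:
        print(f"{driver}: cancellation rate by group {rates} -> flag for review")
```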

More recently, the ridesharing AI bias discussion has moved to pricing algorithms that increase fares in certain neighborhoods, and drivers have complained that facial recognition AIs designed to verify their identities could be limiting their access to work opportunities.

On the house-sharing side, Airbnb has made strides to get ahead of selection bias issues. Its response to publicized fair housing complaints in 2016, even if considered a little late by some, was well thought out and sent to all users. The company is putting more training and agreements in place for hosts, encouraging more instant booking units, and proactively following up on guest discrimination complaints with assistance finding alternative accommodations.

Since a vacation destination is usually reserved well in advance, an owner who wants more control over who books might instead use VRBO/HomeAway, where they can dictate the approval terms. While these sites also have anti-discrimination policies, a booking request is still just that: a request, waiting for a human owner’s approval.

Algorithmic automation of booking policies can still be very useful here, as simply pushing the liability for discrimination back out to the good old-fashioned human-biased property owner isn’t flawless either.

The Intellyx Take

I believe the prevention of digital discrimination is just now starting to take shape, and it will likely grow in importance for any business that sells or brokers goods and services to individuals. Here are three ways to get ahead of it:

  1. Conduct a discrimination audit. Examine the end-to-end customer journey in your company. Simply changing some wording or selection buttons in a user interface will not eliminate discrimination in practice. Where are you left open to discrimination issues? Are you in compliance for the communities and countries you do business in? Make this a regular part of a risk management or security group’s purview, if such a group exists. Consult a civil rights-oriented attorney for advice if you do not have such a specialist on retainer.
  2. Look for biased usage patterns in your solution, and address potential discrimination issues at both the AI inference and the ML training levels. Newer frameworks for explainable AI can help data and development teams spot connections between ML training inputs and AI models in production. Examine the outcomes of customer interactions over time to ensure they are not trending in a direction that suggests discrimination (see the short sketch after this list).
  3. Align your digital transformation toward inclusion, not exclusion. Everyone in your organization, as well as your business partners and vendors, has the potential to be a model citizen or a bad actor as a representative of your company. Get broad agreement on this alignment, and perhaps conduct anti-bias training for platform owners and developers. Everyone can make an impact on bringing diversity and fairness to the overall digital customer experience, even if they make adjustments behind the scenes.
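
For item 2, an outcome audit can start very simply: compute the rate of favorable outcomes per group from your decision logs and flag any group whose rate falls well below that of the best-off group – the “four-fifths rule” used in US employment guidance is a common rule of thumb. Here is a minimal Python sketch, with invented data and an assumed log format:

```python
from collections import defaultdict

# Assumed log shape (hypothetical): (group_label, favorable_outcome)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, favorable in decisions:
    tallies[group][0] += int(favorable)
    tallies[group][1] += 1

selection_rate = {g: fav / total for g, (fav, total) in tallies.items()}
best = max(selection_rate.values())

# Four-fifths rule of thumb: ratios below 0.8 against the best-off group warrant review.
for group, rate in selection_rate.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "REVIEW"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```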

Communities and governments develop and enact laws to limit discrimination for good reason. Just because a Silicon Valley-style industry disruption is under way in your neck of the woods doesn’t mean you can ignore the scrutiny a conventional business would face in the communities it operates in.

Indeed, the posted sign saying “We reserve the right to refuse service to anyone” you might see in a restaurant or bar won’t absolve your company from liability in the digital realm, especially if you leverage a platform that automates or facilitates discrimination at scale. Outrage travels fast – and bad publicity, legal problems, lost business and forced resignations can quickly follow. 

Best to get ahead of digital discrimination before it gets ahead of you.

©2022 Intellyx LLC. Intellyx retains editorial control over the content of this column. Image source credits: craiyon.ai “Robot Taxi film noir style.”


Principal Analyst & CMO, Intellyx. Twitter: @bluefug