By John Murray

Machine learning algorithms process vast quantities of data and spot correlations, trends and anomalies at a scale far beyond even the brightest human mind. But just as human intelligence relies on accurate information, so too do machines. Algorithms need training data to learn from, and this training data is created, selected, collated and annotated by humans. Therein lies the problem.
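To see how this plays out, consider a minimal sketch with hypothetical data: human annotators label otherwise identical examples differently depending on a group attribute, and a model trained on those labels simply reproduces the skew. The dataset, groups and per-group estimate below are all illustrative inventions, not drawn from any real system.

```python
# Hypothetical training labels produced by human annotators.
# Each record is (group, qualification, approval label).
# Applicants are equally qualified, but group "B" was approved less often.
training_data = [
    ("A", "qualified", 1), ("A", "qualified", 1), ("A", "qualified", 1),
    ("B", "qualified", 1), ("B", "qualified", 0), ("B", "qualified", 0),
]

def approval_rate(group):
    """Stand-in for a learned model: the per-group approval rate
    the training data teaches. Any classifier fit to these labels
    would pick up the same correlation."""
    labels = [label for g, _, label in training_data if g == group]
    return sum(labels) / len(labels)

# The "model" inherits the annotators' skew, not the ground truth:
print(approval_rate("A"))  # 1.0
print(approval_rate("B"))  # roughly 0.33
```

Nothing in the code is malicious; the disparity enters entirely through the labels the humans supplied, which is exactly how bias slips in "under the radar".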

Bias is a part of life, and something that not a single person on the planet is free from. There are, of course, varying degrees of bias – from the tendency to be drawn towards the familiar, through to the most potent forms of racism.

This bias can, and often does, find its way into AI platforms. It happens completely under the radar, through no concerted effort by engineers. BDJ spoke to Jason Bloomberg, President of Intellyx, a leading industry analyst and author of ‘The Agile Architecture Revolution’, about the dangers of bias creeping into AI.

Read the entire article at https://journal.binarydistrict.com/racist-data-human-bias-is-infecting-ai-development/
