TripleBlind: Protecting Privacy and Reducing Bias in AI Training Data

An Intellyx Brain Candy Brief

TripleBlind offers privacy-enhancing computation for data sets intended to train AI models. Its technology is similar to homomorphic encryption in that model training succeeds without ever accessing the private information that TripleBlind obscures in the training data.
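TripleBlind's actual protocols are proprietary, but the general idea behind this style of privacy-enhancing computation can be illustrated with one common building block, additive secret sharing: a private value is split into random shares held by separate parties, and arithmetic proceeds on the shares so that no single party ever sees the underlying data. The code below is a toy sketch of that idea only, not TripleBlind's implementation; all names are illustrative.

```python
import secrets

PRIME = 2**61 - 1  # field modulus; all share arithmetic happens mod this prime

def share(value, n_parties=3):
    """Split `value` into additive shares; any subset of fewer than
    n_parties shares is statistically independent of the value."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the original value."""
    return sum(shares) % PRIME

# Two private patient measurements, each shared across three parties.
a_shares = share(120)
b_shares = share(80)

# Each party adds the shares it holds locally; no party ever sees 120 or 80.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]

print(reconstruct(sum_shares))  # prints 200: the aggregate, not the inputs
```

Because addition distributes over the shares, aggregate statistics (and, with more machinery, model updates) can be computed while each individual input stays hidden.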

TripleBlind can also identify when there is insufficient diversity in training sets, which can lead to biased results. It is thus able to address bias challenges while still respecting the confidentiality of the training data.

TripleBlind’s primary use cases are HIPAA and GDPR compliance. For example, while most HIPAA compliance technologies for unstructured data like X-rays or EKG outputs focus on obfuscating the metadata, TripleBlind obscures the images themselves, while still allowing the rich media data sets to accurately train diagnostic AI models.

TripleBlind has also found a niche in anti-money laundering and anti-fraud use cases, obscuring personally identifiable information while supporting the respective AI models.

Copyright © Intellyx LLC. Intellyx publishes the Cloud-Native Computing poster, advises companies on their digital transformation initiatives, and helps vendors communicate their agility stories. As of the time of writing, none of the organizations mentioned in this article is an Intellyx customer. To be considered for a Brain Candy article, email us at pr@intellyx.com.
