Guardrails AI: Safety and Reliability for Gen AI

An Intellyx Brain Candy Brief

Guardrails AI develops an open source orchestration engine and sponsors an open source community of plug-in validators to improve the reliability and safety of generative AI applications.

The orchestration engine integrates with logging and observability tools, scans prompts before they are submitted, and invokes a set of configured validator plug-ins to check responses for risk.

The orchestration engine accepts a chat prompt, checks it for PII, off-topic content, and jailbreak attempts, and then submits the prompt to an LLM API.

The set of configured plug-ins then validates the response, checking for instances of PII, mentions of competitors, profanity, toxic language, and other such risks.
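The two-stage flow described above — scan the prompt, call the LLM, then run configured validators over the response — can be sketched in a few lines. This is an illustrative, simplified sketch of the pattern, not the Guardrails AI API itself; the validator functions, `guarded_call`, and `fake_llm` are all hypothetical names.

```python
# Illustrative sketch of a validator-pipeline guard. All names here are
# hypothetical; they are not the actual Guardrails AI API.

def contains_pii(text: str) -> bool:
    # Toy PII check: flag anything that looks like an email address.
    return "@" in text

def contains_profanity(text: str) -> bool:
    # Toy profanity check against a tiny blocklist.
    return any(word in text.lower() for word in ("damn", "heck"))

def guarded_call(prompt: str, llm, response_validators) -> str:
    # 1. Scan the prompt before it is submitted.
    if contains_pii(prompt):
        raise ValueError("Prompt rejected: possible PII detected")
    # 2. Submit the prompt to the LLM.
    response = llm(prompt)
    # 3. Run each configured validator plug-in over the response.
    for validator in response_validators:
        if validator(response):
            raise ValueError(f"Response rejected by {validator.__name__}")
    return response

# Usage with a stand-in LLM function.
def fake_llm(prompt: str) -> str:
    return "A perfectly safe answer."

result = guarded_call("What is 2 + 2?", fake_llm,
                      [contains_pii, contains_profanity])
```

In a real deployment the validator list is configuration, so teams can tighten or relax the policy without touching application code.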

The plug-in validator library contains roughly 50 validators and is open to community contribution. Organizations can also run the validators outside of the orchestration engine if they prefer to submit prompts to an LLM directly.
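Running a validator standalone amounts to calling it directly on text you already have, with no engine in the loop. The sketch below assumes a hypothetical competitor-mention validator; the function name and competitor list are invented for illustration.

```python
# Hypothetical standalone use of a single validator, outside any
# orchestration engine: just call it on the text directly.

def mentions_competitor(text: str,
                        competitors=("AcmeCorp", "Globex")) -> bool:
    # Toy check: case-insensitive match against a small competitor list.
    return any(name.lower() in text.lower() for name in competitors)

flagged = mentions_competitor("Our product beats AcmeCorp on price.")
```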

Guardrails AI aims to take the risk out of generative AI so that organizations can feel safe offering, for example, customer-facing AI bots.

Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the organizations mentioned in this article is an Intellyx customer. No AI was used to produce this article. To be considered for a Brain Candy article, email us at pr@intellyx.com.
