Originality.AI: Divining LLM-generated content origins and veracity

An Intellyx Brain Candy Brief

As an analyst firm, our primary stock in trade is constantly creating thought leadership content that connects the dots between technology decisions and business value. While we believe our uniquely human writing style is self-evident, we add a disclaimer to every article we produce stating that it was 'not written by AI,' because it's important that we protect our original critical thinking.

That’s why we were particularly interested in Originality.AI, a SaaS-based platform that can scour content to detect whether it was written using Generative AI tools or plagiarized from other sources without attribution.

Trained on millions of pieces of content from both verified human writers and AI chatbots, this vendor’s large language model (LLM) can also predict the ‘hallucinations’ of other LLMs, even going so far as to fact-check the content for veracity to a high degree of probability. (Note: they specifically don’t recommend this service for academic institutions, where the results could impact a student’s grades or future career.)

With the uptake of ChatGPT and many other natural language generators on the rise, we see great utility in any tool enterprises and individuals can use to ensure responsible AI usage and steer clear of copyright infringement.

Copyright ©2023 Intellyx LLC. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. At the time of writing, none of the organizations mentioned here are an Intellyx customer. No AI was used to write this article. To be considered for a Brain Candy article or event visit, email us at pr@intellyx.com.

Principal Analyst & CMO, Intellyx. Twitter: @bluefug