Can We Get Veracity from AI? Ten Questions to Ask

Intellyx BrainBlog for Openstream.ai by Jason Bloomberg

From padded resumes to fictitious court cases to high school essays containing made-up history, it is now clear to everyone that ChatGPT and other generative AI technologies have a veracity problem.

Veracity – aka truthfulness – is clearly a priority in the business world. The only thing worse than inaccurate data is some AI bot telling you that you should believe those data.

Veracity is especially important for conversational AI – AI created for the express purpose of interacting with humans in natural, back-and-forth conversation, either to answer their questions or to complete tasks at their request.

Veracity, however, is surprisingly difficult to pin down, especially when we’re talking about AI. How should we approach the question of veracity from our AI-based applications? Here are ten questions about your AI you should have answers to.

VERACITY AND REASONING

Under the covers of most conversational AI offerings is generative AI technology. Generative AI vendors have optimized their output for plausibility rather than veracity.

Their technologies assemble phrases and sentences based on massive quantities of training data, with no understanding of what the output means or why the model reaches a particular conclusion.

Even when a prompt asks ChatGPT to construct a logical argument, the best it can do is mimic human reasoning, producing a plausible but questionable facsimile of human thought.

Businesses require more than spurious reasoning, especially when the AI is conducting conversations with people. They need answers to the following questions:

  • Factual provenance – for the statements the AI takes as representing facts, how did it learn that those statements were in fact true?
  • Patterns of inference – when AI uses some kind of logic, what reasoning patterns did it follow to come up with a particular conclusion?
  • Probabilistic judgment – when the AI is making a judgment about the probability a statement is true, how did it come up with that probability?
  • Relevance of assumptions – if the AI makes assumptions as part of its reasoning, how did it conclude those assumptions were relevant to the argument at hand?
Based on the answers to these questions, either the developers of the AI or perhaps the businesspeople using it must be able to understand how the AI is coming to its conclusions.
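To make these questions concrete, here is a minimal, hypothetical sketch in Python of the kind of audit record a conversational AI pipeline could attach to each answer so that developers or businesspeople can trace provenance, inference steps, probability estimates, and assumptions. The class and field names here are illustrative assumptions for the sake of the example, not any vendor's actual API.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical audit record; all names are illustrative assumptions.

    @dataclass
    class FactCitation:
        statement: str   # the claim the AI treated as fact
        source: str      # where it learned the claim (document, database, URL)

    @dataclass
    class ReasoningTrace:
        question: str
        facts: List[FactCitation] = field(default_factory=list)   # factual provenance
        inference_steps: List[str] = field(default_factory=list)  # patterns of inference
        confidence: float = 0.0                                   # probabilistic judgment (0..1)
        confidence_basis: str = ""                                # how the probability was derived
        assumptions: List[str] = field(default_factory=list)      # relevance of assumptions

    # Example: a trace a refund-handling bot might emit alongside its answer.
    trace = ReasoningTrace(
        question="Is the customer eligible for a refund?",
        facts=[FactCitation("Order #1234 was placed 10 days ago", "orders database")],
        inference_steps=[
            "Refund policy allows returns within 30 days",
            "10 days < 30 days, so the order is within the window",
        ],
        confidence=0.97,
        confidence_basis="rule-based match against the published refund policy",
        assumptions=["The order was not a final-sale item"],
    )
    print(f"Answer confidence: {trace.confidence:.0%} ({trace.confidence_basis})")

A record like this doesn't make the underlying model truthful by itself, but it gives the people responsible for the AI something auditable to check each of the four questions against.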

Read the entire BrainBlog here.
