Explaining generative AI accuracy

Explaining to people how generative AI (GAI) “magically” produces content is hard to do in a concise, understandable way. The best approach I’ve found is to communicate that everything produced by GAI is “made up”. Here’s a quick example of how I explain it.

Hallucinating

GAI systems, such as large language models (LLMs), often produce output that seems true at first glance but can be completely made up. This is known as “hallucinating”.

Hallucinations happen when the AI generates output that isn’t grounded in real information. That’s a direct result of how the model works: it predicts the next word based on patterns learned during training, without any step that checks facts. For example, an AI might describe a historical event that never happened or invent a scientific fact, merging truth with fiction in ways that aren’t always clear to the person reading the output.
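
To make that concrete, here’s a toy sketch in Python. It isn’t a real LLM; it’s a stand-in bigram model with a tiny “training corpus” I invented for illustration. But it shows the core idea: the next word is chosen purely from observed patterns, and nothing ever checks whether the result is true.

```python
import random

# Toy "training data": the model only learns which words tend to follow
# which other words, not whether the resulting statements are true.
# One deliberately fictional sentence is mixed in.
corpus = (
    "the capital of france is paris . "
    "the capital of australia is canberra . "
    "the capital of mars is olympus ."
).split()

# Build a bigram table: for each word, record the words seen after it.
follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

def predict_next(word):
    """Pick a plausible next word purely from observed patterns."""
    candidates = follows.get(word)
    return random.choice(candidates) if candidates else "."

# Generate a continuation. There is no truth check anywhere, so the
# output can happily blend the real facts with the fictional one,
# e.g. "the capital of mars is paris ."
sentence = ["the", "capital", "of"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))

print(" ".join(sentence))
```

A real LLM does this with vastly more data and a far more sophisticated prediction step, but the lack of a built-in truth check is the same.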

It’s all made up

Ultimately, everything produced by GAI is made up, because the model doesn’t “know” facts the way a human does. Instead, it composes responses from statistical correlations in its training data.

Unlike a traditional database, which retrieves stored information, a GAI model creates new content on the fly, assembling learned elements into narratives or images. This means even seemingly factual responses are constructions prompted by the query, drawn from training data that is vast but not always accurate.
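
Here’s a rough illustration of that difference. The dictionary stands in for a database, and the generative side is a hypothetical placeholder function, not a real model, that always produces a fluent answer whether or not anything true backs it up.

```python
# A database either returns what was stored or nothing at all.
stored_facts = {"capital of france": "Paris"}

def database_lookup(query):
    # Returns None if the fact was never stored - no answer is invented.
    return stored_facts.get(query)

def generative_answer(query):
    # Hypothetical stand-in for a generative model: it always composes a
    # fluent-sounding answer, even for things it has never seen.
    return f"The {query} is widely reported to be Poseidonis."

print(database_lookup("capital of atlantis"))    # None
print(generative_answer("capital of atlantis"))  # a confident, made-up answer
```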

Everything produced by GAI is made up from patterns in a data set: the model hallucinates plausible continuations without any real understanding of truth or falsehood.