Generative AI

How It Works

Hallucinations

The models underpinning GenAI are probabilistic, not deterministic. Because they generate output by predicting statistically likely sequences, they produce answers without regard for whether those answers are true (only whether they are statistically likely). As explained in the release notes for ChatGPT on November 30, 2022, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers…there’s currently no source of truth.”
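To make the point concrete, here is a minimal sketch of the sampling step that makes output probabilistic. It is not any particular model’s internals; the candidate tokens and logit values are invented assumptions for illustration. The model scores candidate next tokens, converts the scores to probabilities, and samples one; no step checks factual truth.

```python
import math
import random

def softmax(scores):
    """Convert raw scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might assign to candidate next tokens after
# the prompt "The capital of Australia is". These numbers are invented
# for illustration, not real model output.
candidates = ["Canberra", "Sydney", "Melbourne"]
logits = [2.0, 1.5, 0.5]

probs = softmax(logits)
# Sample the next token in proportion to its probability -- there is no
# step anywhere in this procedure that consults a source of truth.
next_token = random.choices(candidates, weights=probs, k=1)[0]

print({tok: round(p, 2) for tok, p in zip(candidates, probs)})
print("sampled:", next_token)
```

Run repeatedly, this assigns roughly 0.55 to “Canberra” and 0.33 to “Sydney,” so the plausible-sounding but incorrect answer is sampled about a third of the time. The error is not a malfunction; it is the expected behavior of sampling from a probability distribution.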

Recognition of the hallucination issue is critical enough to warrant a separate section. Hallucinations are a genuine challenge and a key consideration. But it is also important to understand that humans often hallucinate more than the models do. For example, when asked to summarize a document, humans, on average, will invent more extrinsic information (i.e., information not contained in the document) than the machines.*

To repeat a warning: any analysis that concludes with the observation that the models are probabilistic, prone to hallucination, and therefore not to be relied upon is materially incomplete. First, it assumes that perfect accuracy is the sole standard. In reality, perfection is rarely the standard, and those who presume human infallibility are deluding themselves. Second, it assumes end users will only interact with the models in their raw form, completely ignoring the nascent but rapidly maturing application layer (next section).