
Hallucination

Hallucination occurs when an AI system generates plausible-sounding but false information. An LLM may confidently cite research papers that don't exist, invent facts about historical events, or fabricate citations. This happens because LLMs are pattern-matching systems, not knowledge databases with built-in truth checks.

From their training data, LLMs learn statistical associations: which word sequences tend to follow others. When you ask a question, the model generates the most statistically likely continuation. Sometimes that continuation is factually correct; sometimes it is plausible-sounding nonsense. The model has no internal mechanism for distinguishing truth from falsehood.
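The pattern-matching behavior described above can be sketched with a toy bigram model. The corpus and greedy decoding here are illustrative assumptions, not how real LLMs are trained, but the core point is the same: the model picks the statistically most likely next word and never consults any notion of truth.

```python
from collections import Counter, defaultdict

# Toy corpus: the model sees only word-sequence statistics, never facts.
corpus = (
    "the study was published in nature "
    "the study was published in science "
    "the paper was published in nature"
).split()

# Count which word follows which (bigram statistics).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continuation(word, steps=4):
    """Greedily emit the most statistically likely next word each step."""
    out = [word]
    for _ in range(steps):
        followers = bigrams[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(continuation("the"))  # → "the study was published in"
```

The output is fluent because the word sequence is common in the corpus, but nothing in the model checks whether any resulting claim is true; scaled up to billions of parameters, that gap is exactly what produces hallucination.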

It has no access to reality, only learned patterns. Hallucination becomes dangerous in contexts where accuracy matters, such as medical advice, legal analysis, research assistance, and fact-checking. A hallucinated treatment recommendation could harm a patient; a hallucinated case citation could appear authoritative to a court.

This is why retrieval-augmented generation (RAG) and careful prompt design matter. Grounding the LLM's output in verified sources reduces hallucination, and asking the model to cite sources, then verifying those citations yourself, helps you catch the hallucinations that remain. Understanding that hallucination is not a bug but an inherent consequence of how LLMs work is essential for responsible deployment.
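The grounding idea can be sketched minimally. The document store and word-overlap retriever below are illustrative assumptions (real RAG systems use embedding-based search over large corpora), but the structure is the same: retrieve verified text first, then constrain the model to answer from it.

```python
import re

# Hypothetical store of verified source documents (illustrative only).
documents = {
    "doc1": "Aspirin was first synthesized by Felix Hoffmann in 1897.",
    "doc2": "Penicillin was discovered by Alexander Fleming in 1928.",
}

def retrieve(question: str, docs: dict) -> str:
    """Naive retriever: return the document sharing the most words
    with the question (a stand-in for embedding similarity search)."""
    q_words = set(re.findall(r"\w+", question.lower()))
    return max(
        docs.values(),
        key=lambda text: len(q_words & set(re.findall(r"\w+", text.lower()))),
    )

def build_prompt(question: str) -> str:
    """Prepend the retrieved source so the model answers from it,
    not from its statistical associations alone."""
    context = retrieve(question, documents)
    return (
        f"Answer using ONLY this source:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("Who discovered penicillin?"))
```

The prompt that reaches the model now contains the verified passage, so the most likely continuation is anchored to real text rather than to free-floating patterns; the model can still hallucinate, but the retrieved source gives both it and the human reviewer something concrete to check against.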