Hallucination is when an AI system generates plausible-sounding but false information: an LLM will confidently cite research papers that don't exist, invent facts about historical events, or fabricate citations. This happens because LLMs are pattern-matching systems, not knowledge databases with truth checks. From their training data, they learn that certain word sequences tend to follow others.
They develop statistical associations, so when you ask a question, the model generates the most statistically likely continuation. Sometimes that continuation is factually correct; sometimes it's plausible-sounding nonsense. The model has no internal mechanism to distinguish truth from falsehood.
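The "most likely continuation" idea can be made concrete with a toy model. The sketch below builds bigram counts from a tiny made-up corpus (the corpus and word choices are illustrative, not from any real training set) and always emits the most frequent next word, whether or not the resulting claim is true:

```python
from collections import defaultdict

# Toy corpus: the only "knowledge" this model has is word co-occurrence.
corpus = (
    "the study was published in nature "
    "the study was conducted at stanford "
    "the study was published in science"
).split()

# Count, for each word, which words follow it and how often.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_continuation(prompt_word: str) -> str:
    """Return the statistically most frequent next word, true or not."""
    followers = bigrams[prompt_word]
    return max(followers, key=followers.get)

# After "was", the model says "published" simply because that sequence
# appeared most often, with no check that any specific claim is factual.
print(most_likely_continuation("was"))  # prints "published"
```

A real LLM does this over vastly longer contexts with learned weights instead of raw counts, but the failure mode is the same: the output is chosen for statistical plausibility, not truth.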
Hallucination becomes dangerous when models are deployed in contexts where accuracy matters: medical advice, legal analysis, research assistance, and fact-checking. A hallucinated medical treatment recommendation could harm someone. A hallucinated case citation could appear authoritative. This is why retrieval-augmented generation and careful prompt design matter.
Grounding the LLM's output in verified sources reduces hallucination. Asking the model to cite sources, then verifying those citations yourself, catches many of the hallucinations that remain.
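One minimal sketch of this grounding idea: retrieve a verified passage relevant to the question, then build a prompt that tells the model to answer only from that passage. The document store, keyword-overlap scoring, and prompt wording below are all illustrative assumptions; production systems typically use embedding-based retrieval and a real LLM call.

```python
# Hypothetical store of verified passages (illustrative content only).
verified_docs = {
    "aspirin": "Aspirin is used to reduce pain, fever, and inflammation.",
    "ibuprofen": "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
}

def retrieve(question: str):
    """Return the verified passage with the most word overlap, or None."""
    q_words = set(question.lower().split())
    best_key, best_score = None, 0
    for key, text in verified_docs.items():
        score = len(q_words & set(text.lower().split()))
        if score > best_score:
            best_key, best_score = key, score
    return verified_docs[best_key] if best_key else None

def grounded_prompt(question: str) -> str:
    """Build a prompt constraining the model to the retrieved source."""
    source = retrieve(question)
    if source is None:
        # No verified source found: refuse rather than invite a guess.
        return f"No verified source available for: {question}"
    return (
        "Answer ONLY using the source below; say 'unknown' otherwise.\n"
        f"Source: {source}\n"
        f"Question: {question}"
    )

print(grounded_prompt("What is aspirin used for?"))
```

The key design choice is the refusal path: when retrieval finds nothing, the system declines instead of letting the model free-associate, which is exactly where hallucination creeps in.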
[Interactive visualizer: AI Hallucination Explorer — see how AI generates convincing but false information through pattern matching.]