Hallucination
When a language model produces fluent text that is factually wrong or unsupported.
Hallucinations happen because LLMs are trained to produce plausible-sounding text, not to verify facts. When asked about something outside their knowledge, they often fabricate a confident-sounding answer rather than admit uncertainty.
Mitigations include: grounding answers in retrieved documents (RAG), citation-required generation, narrowing the task scope, structured output with validation, evals that specifically test factual recall, and, in high-stakes domains, human review.
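As a concrete illustration of citation-required generation, here is a minimal sketch in Python. The `[n]` citation format and the `validate_citations` helper are assumptions chosen for this example, not a standard API:

```python
import re

def validate_citations(answer: str, sources: list[str]) -> list[str]:
    """Check that every [n] citation in the answer points to a real
    retrieved source, and flag answers that cite nothing at all."""
    problems = []
    cited = {int(m) for m in re.findall(r"\[(\d+)\]", answer)}
    if not cited:
        problems.append("answer contains no citations")
    for n in cited:
        if not 1 <= n <= len(sources):
            problems.append(f"citation [{n}] has no matching source")
    return problems

sources = ["The Eiffel Tower opened in 1889.", "It is 330 m tall."]
answer = "The Eiffel Tower opened in 1889 [1] and is 330 m tall [2]."
assert validate_citations(answer, sources) == []
```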
Reasoning models with extended thinking tend to hallucinate less, since they can re-check their own work before answering, but no model is hallucination-free. If your product depends on factual accuracy, design for verification.
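Continuing the sketch above, one way to design for verification is to gate the answer behind the check rather than trusting generation alone; the function name and fallback message here are hypothetical:

```python
def answer_or_escalate(answer: str, sources: list[str]) -> str:
    """Surface the answer only if its citations validate; otherwise
    escalate instead of guessing. Reuses validate_citations from
    the sketch above."""
    if validate_citations(answer, sources):
        return "Couldn't verify this answer; routing to human review."
    return answer
```

The point of the gate is that a failed check degrades to a refusal or escalation, never to an unverified answer shown as fact.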