Generative AI · Beginner · Also known as: Confabulation, AI Fabrication, Model Confabulation

Hallucination

Definition

When an AI model generates output that sounds confident and plausible but is factually incorrect, fabricated, or unsupported by its training data — a fundamental challenge in deploying large language models.

In Depth

Hallucination in AI refers to instances where a model — particularly a large language model — generates statements that are fluent, confident, and plausible-sounding, but factually wrong or entirely fabricated. A model might cite a research paper that does not exist, attribute a quote to someone who never said it, or confidently describe historical events that never happened. The term 'hallucination' reflects that the model is producing output from its internal patterns rather than retrieving verified facts — it is pattern-completing, not fact-checking.

Hallucinations occur because language models are trained to predict probable next tokens, not to verify truth. They have no internal database of facts that they look up — instead, they have learned statistical patterns across billions of text examples. When a query falls in an area where the model's training data is sparse, contradictory, or ambiguous, the model may generate plausible-sounding but incorrect 'gap-filling' output. The problem is especially acute for rare topics, recent events not in training data, precise numerical claims, and any context requiring exact citation or quotation.
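The mechanism is easy to see directly. The sketch below is a minimal illustration, assuming the Hugging Face transformers library and the public GPT-2 checkpoint (chosen purely for convenience): it asks the model to score candidate next tokens for a prompt. The model reports which continuations are statistically probable, and nothing in the computation checks whether the most probable continuation is true.

```python
# Minimal sketch: a language model scores how *likely* each next token is.
# Nothing here verifies facts -- likelihood, not truth, drives the output.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

# Probabilities for the single next token after the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    # A plausible-but-wrong continuation can outrank the correct one;
    # the model has no internal notion of which answer is factual.
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={prob.item():.3f}")
```

Larger models produce far better continuations, but the training objective is the same: predict probable text, not verified facts.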

Mitigating hallucination is one of the most active areas of AI safety research. Strategies include Retrieval-Augmented Generation (RAG), which connects models to external knowledge bases and instructs them to ground responses in retrieved documents; Reinforcement Learning from Human Feedback (RLHF), which trains models to express uncertainty rather than fabricate; and chain-of-thought prompting, which encourages step-by-step reasoning that can surface logical errors. Despite these advances, hallucination remains an unsolved problem: it is an inherent consequence of how generative models work, and users must exercise critical judgment with AI outputs.
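As a rough illustration of the RAG pattern described above, the sketch below retrieves supporting passages first and then builds a prompt that instructs the model to answer only from them. The toy corpus, the keyword-overlap retriever, and the prompt wording are all illustrative assumptions; production systems use embedding models and vector databases, but the grounding structure is the same.

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG):
# retrieve evidence first, then constrain the model to answer from it.

CORPUS = {
    "refund-policy": "Refunds are available within 30 days of purchase with a receipt.",
    "shipping": "Standard shipping takes 5-7 business days within the continental US.",
    "warranty": "All hardware carries a one-year limited manufacturer warranty.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query
    (a stand-in for a real embedding-based retriever)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that asks the model to answer strictly from the
    retrieved passages and to admit uncertainty otherwise."""
    evidence = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say you don't know.\n\n"
        f"Sources:\n{evidence}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting string would be sent to any LLM completion API;
    # grounding the prompt in retrieved text narrows the room for fabrication.
    print(build_grounded_prompt("How long do I have to return an item?"))
```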

Key Takeaway

Hallucinations are confident but false outputs from AI models — they occur because language models predict probable text rather than verify truth, making critical evaluation of AI outputs essential.

Real-World Applications

01 Legal risk: lawyers have submitted AI-generated court filings containing fabricated case citations that the models presented with full confidence.
02 Medical misinformation: health-related chatbots may generate plausible but incorrect medical advice, posing patient safety risks.
03 News and journalism: AI-generated articles may contain fabricated quotes, statistics, or events that erode public trust if published without verification.
04 Academic research: students and researchers using AI for literature reviews may encounter fabricated paper titles, authors, or findings.
05 Customer support: chatbots may promise policies, procedures, or solutions that do not exist, creating liability for organizations deploying them.