# What is Hallucination? Hallucination, explained

> Confident, fluent, and wrong.

A hallucination is an AI-generated output that is confidently stated but factually wrong — a fake citation, an invented statistic, a non-existent quote. Hallucinations are the central failure mode of generative AI and the reason grounding, RAG, and citation-friendly content matter: grounding claims in real sources cuts the hallucination rate.

## How Hallucination differs from Grounding, RAG, LLM

A model error is any wrong output. A hallucination is specifically a confident, fluent, plausible-sounding wrong output — which is what makes it dangerous: it doesn't trip a user's skepticism the way garbled output would. The related terms are context rather than rivals: the LLM is the model that produces the hallucination, while grounding and RAG are the countermeasures that tie its answers to verifiable sources.

## How Mentionwell handles Hallucination

- An editorial critic enforces evidence for every claim, so generated articles are themselves grounded (a rough sketch of such a gate follows this list).
- Per-article Markdown mirrors give downstream engines clean source material to ground against.
- Inline citations to authoritative sources make every claim checkable.
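
Mentionwell's internal tooling isn't reproduced here, but as a rough illustration, an evidence-per-claim gate can be as simple as refusing to publish any draft whose claims lack a citation. The `Claim` type and `evidence_gaps` helper below are hypothetical names, a minimal sketch rather than a production implementation.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_url: str | None = None  # citation backing the claim, if any

def evidence_gaps(claims: list[Claim]) -> list[Claim]:
    """Return every claim that lacks a supporting citation."""
    return [c for c in claims if not c.source_url]

draft = [
    Claim("RAG reduces hallucinations by grounding answers in retrieved sources",
          source_url="https://mentionwell.com/rag"),
    Claim("Ninety percent of readers never check citations"),  # invented stat, no source
]

gaps = evidence_gaps(draft)
if gaps:
    print(f"{len(gaps)} unsupported claim(s); revise before publishing")
```

Returning the failing claims, rather than a bare pass/fail, keeps the revision loop concrete: the writer (human or model) sees exactly which sentences still need a source.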

## Frequently asked questions about Hallucination

### What is an AI hallucination?

An AI-generated output that's confidently stated but factually wrong — fake citations, invented statistics, non-existent quotes. The central failure mode of generative AI.

### How do you reduce hallucinations?

Ground answers in retrieved sources (RAG), require citations, use a critic-loop to check claims, and run outputs through structured validators when the schema allows.
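
As a minimal sketch of the critic-loop idea, the function below flags answer sentences whose wording barely overlaps any retrieved source. Lexical overlap is a deliberately crude stand-in for a real claim verifier (an NLI model or a second LLM call would do the checking in practice), and the function name and threshold are illustrative assumptions, not a standard API.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, used for a rough overlap comparison."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def unsupported_sentences(answer: str, sources: list[str],
                          threshold: float = 0.5) -> list[str]:
    """Flag answer sentences that no retrieved source appears to support.

    The control flow mirrors a critic loop: generate, check each claim
    against the evidence, and send the failures back for revision.
    """
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = tokens(sentence)
        if not words:
            continue
        best = max((len(words & tokens(src)) / len(words) for src in sources),
                   default=0.0)
        if best < threshold:
            flagged.append(sentence)
    return flagged

sources = ["Retrieval-augmented generation grounds model answers in retrieved documents."]
answer = "RAG grounds answers in retrieved documents. It was invented in 1974 by NASA."
print(unsupported_sentences(answer, sources))  # only the fabricated NASA sentence is flagged
```

Whatever verifier replaces the overlap heuristic, the loop stays the same: claims that fail the check either get a citation added or get removed before the answer ships.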

## See also

- [Grounding](https://mentionwell.com/grounding): Tying answers to verifiable sources.
- [RAG — Retrieval-Augmented Generation](https://mentionwell.com/rag): Grounding answers in retrieved sources.
- [LLM — Large Language Model](https://mentionwell.com/llm): The model behind every AI answer.


---

Canonical URL: https://mentionwell.com/hallucination
Live HTML version: https://mentionwell.com/hallucination
Site index for AI ingestion: https://mentionwell.com/llms.txt
Full reference: https://mentionwell.com/llms-full.txt
