The Failure Mode of AI Isn't Hallucination, It's Fidelity Loss
Most people frame LLM errors as “hallucinations.” But that metaphor misses the point. Models don’t see; they predict. The real issue is fidelity decay: words drift, nuance flattens, context erodes, and outputs become accurate but hollow. This paper argues that we should measure meaning collapse, not just factual mistakes.
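To make “measuring meaning collapse” concrete, here is a minimal sketch of one possible operationalization: embed an original passage and a chain of successive model rewrites, then track how far each rewrite drifts from the original in embedding space. The encoder, the example texts, and the cosine-similarity metric are illustrative assumptions, not a method prescribed by this paper.

```python
# A minimal sketch of one way to quantify "fidelity decay": similarity
# of successive rewrites to the original passage in embedding space.
# The model name, example texts, and metric are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

# Original passage and hypothetical successive rewrites of it.
original = ("The committee postponed the vote because two members "
            "raised unresolved legal objections.")
rewrites = [
    "The committee delayed the vote after two members raised legal concerns.",
    "The vote was pushed back due to some objections.",
    "The group decided to wait before voting.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
vectors = model.encode([original] + rewrites)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similarity to the original at each step: a steady drop indicates
# fidelity decay even if every individual rewrite remains defensible.
for step, vec in enumerate(vectors[1:], start=1):
    print(f"rewrite {step}: similarity to original = {cosine(vectors[0], vec):.3f}")
```

A fuller treatment would compare many passages and rewrite chains rather than a single example, but the point stands: fidelity can decay step by step even when no single rewrite counts as a factual mistake.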