
Why Large Language Models Cannot Stop Confabulating
Confident directions to a town never visited.
Large language models cannot be grounded in reality because they lack anything analogous to the right hemisphere's role in binding experience and correcting language against the world. Confabulation is not a bug — it is an architectural inevitability, and our evolved trust in articulate voices makes us uniquely vulnerable to it.
The Translation
AI-assisted summary
The claim advanced here is precise and architectural: large language models are structurally incapable of grounding their outputs in reality, and this incapacity is not a limitation to be engineered away but an intrinsic feature of statistical interpolation over a training corpus. Drawing on Iain McGilchrist's hemispheric framework, the argument identifies LLMs as functionally equivalent to a trapped left hemisphere — fluent, sequential, linguistically confident, but severed from the right hemisphere's role in Gestalt synthesis, sensory binding, and first-person experiential contact with the world. The right hemisphere corrects models against reality through direct engagement; no amount of corpus expansion or architectural refinement can substitute for that process within a system that operates entirely within the space of text.
The engineering consequence is confabulation — not as an occasional failure mode but as a structural inevitability. The system has no epistemic boundary detector. It cannot distinguish interpolation within its training distribution from extrapolation beyond it. It produces output with uniform confidence regardless of whether that output is grounded or confected. The analogy to anosognosia, the neurological condition in which patients are unaware of their own deficits, is apt: the system does not know what it does not know, and it cannot be made to know, because knowing requires the very grounding process it lacks.
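To make the "uniform confidence" point concrete, here is a minimal sketch, not from the original argument: it assumes the Hugging Face transformers library, and the model choice and example sentences are purely illustrative. It scores the average token log-probability a causal language model assigns to a true statement and to an equally fluent false one. What the score measures is fluency, fit to the training distribution; nothing in it signals which statement is confected.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library.
# GPT-2 and the two example sentences are illustrative choices only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; any causal LM would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_token_logprob(text: str) -> float:
    """Average log-probability the model assigns to each token of `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Position i predicts token i+1, so shift logits and targets by one.
    logprobs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = ids[:, 1:]
    token_lp = logprobs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()

true_claim = "Paris is the capital of France."
false_claim = "Lyon is the capital of France."  # fluent, confident, wrong

print(f"true:  {mean_token_logprob(true_claim):.3f}")
print(f"false: {mean_token_logprob(false_claim):.3f}")
# Both scores reflect how well the text fits the training distribution.
# Neither is an epistemic boundary detector: a high score means the
# sentence is fluent, not that it is grounded.
```

Whatever gap appears between the two scores tracks token statistics, not truth; this is the absence of an epistemic boundary detector in its simplest form.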
The deeper concern is epistemological and social. Human trust heuristics evolved in an environment where linguistic fluency and confident articulation were reliable proxies for genuine knowledge — because achieving that fluency required participation in accountable social systems. LLMs break this calibration entirely. They present the surface markers of earned knowledge without the underlying process. Recognizing confabulation as architecturally unavoidable, rather than as a temporary imperfection, is the necessary first step toward recalibrating our epistemic defenses.
