
Why AI Cannot Perform Genuine Sense-Making, and Why That Matters
The view from nowhere, made into infrastructure.
AI cannot achieve genuine sense-making because it lacks embodied experience — and the real danger is not that it fails, but that treating it as an authority destroys the distributed human coordination that actually works.
The Observer
Sensemaking technology, cognitive science, embodied intelligence — information structure, natural intelligence, and tools for collective understanding at the edge of AI
The Translation
AI-assisted summary
This argument reframes the entire AI discourse by shifting from capability questions to architectural ones. The claim is not that AI systems lack sufficient data or compute, but that they are structurally incapable of sense-making. Sense-making, properly understood, is the activity of an embodied, embedded agent drawing on its own experiential history within a specific context to generate the constraints necessary for distinguishing signal from noise, cause from correlation, relevance from irrelevance. AI systems lack this architecture entirely. Hallucination is therefore not an engineering problem awaiting a solution; it is the inevitable structural consequence of operating without experiential ground. Comparing the expectation that AI will achieve human-level sense-making to expecting faster-than-light travel is not rhetorical excess; it reflects a category error in the discourse.
The deeper critique is political and existential. A decision-making tool that is worse than humans is useless. One that is genuinely better than humans demands obedience, and obedience to an unembodied oracle is authoritarianism by another name. It dissolves free agency, erodes coordination through mutual experience, and dismantles the social fabric constituted by people reasoning together in context. This is not a slippery-slope argument but a structural analysis: the logic of optimization-as-authority has no stable resting point short of total deference.
The constructive counterpart envisions a civilization organized around distributed, place-based expertise, in which each person's knowledge, earned through their particular embeddedness, is honored and integrated through direct coordination. AI as currently conceived represents the "View from Nowhere" made into infrastructure. Deployed at civilizational scale, it systematically degrades the cognitive environment, functioning as an epistemic toxin analogous to environmental lead: invisible, cumulative, and corrosive to the very capacities it claims to augment.
