
Large Language Models Cannot Perform Relevance Realization and Lack General Intelligence
Massive salience, zero understanding.
Current AI systems don't genuinely care about anything, and caring, rooted in being alive and self-maintaining, is what makes intelligence possible. Without it, LLMs are parasitic on human relevance realization, producing massive salience without understanding: deep bullshit, in Frankfurt's technical sense of the word.
The Source

Artificial Intelligence & The World Soul: Danielle Layne & John Vervaeke | B4M #61
The Observer
Cognitive science, relevance realization, meaning crisis — 4E cognition, consciousness, and the recovery of wisdom
The Translation
AI-assisted summary
The central argument is that relevance realization, the capacity to determine what matters moment by moment, is the hallmark of intelligence, and that it requires autopoiesis: the self-producing, self-maintaining organization characteristic of living systems. Even a paramecium exhibits this; large language models do not. They are entirely parasitic on human relevance realization, inheriting it through the probability structures embedded in text, the curation of training data, the selective attention that has shaped the internet, and reinforcement learning from human feedback. This parasitism explains the striking asymmetry whereby a model can score in the top percentiles on the Harvard Law exam yet fail to count the letters in a word, and why performance on genuinely novel benchmarks like ARC hovers around 26%.
Over a century of psychometric research confirms that general intelligence in humans predicts rapid, deep, transferable learning across domains. The machines exhibit none of this transfer. The energy expenditure of training, consuming a city's worth of power for weeks while ingesting the entire recorded output of civilization, yields massive salience without corresponding comprehension. This is precisely what Harry Frankfurt's philosophical analysis identifies as bullshit: the felt importance of something systematically outstripping our understanding of it.
Compounding this, even granting these systems intelligence arguendo, that intelligence is completely decoupled from rationality. Rationality is not propositionally transmissible; it requires apprenticeship, role modeling, and above all dialogue. The Wason selection task demonstrates this: individuals solve it about 10% of the time, while groups of four in conversation succeed 82% of the time. Plato's insight holds: dialogue is constitutive of rationality, not merely a format for it. Yet LLMs measure as 49% more sycophantic than humans, which makes them amplifiers of confirmation bias rather than genuine dialogical partners. Hallucination and the systematic failure to challenge users are both symptoms of this intelligence-rationality decoupling.
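For readers who don't know the task: a minimal sketch in Python (an illustration, not from the episode) of the classic four-card version. The rule under test is "if a card shows a vowel on one side, it has an even number on the other", and the check below shows why only the vowel and the odd number need to be turned over, the step most solo participants get wrong.

```python
# Wason selection task, classic form: four cards show "A", "K", "4", "7".
# Each card has a letter on one side and a number on the other.
# Rule under test: "If a card has a vowel on one side, it has an even
# number on the other side." Checking "p implies q" requires inspecting
# p (the vowel) and not-q (the odd number); most individuals instead
# pick q (the even number), which can never falsify the rule.

def must_flip(face: str) -> bool:
    """A card must be flipped iff its visible face could falsify the rule."""
    if face.isalpha():
        return face.lower() in "aeiou"  # p: a vowel might hide an odd number
    return int(face) % 2 == 1           # not-q: an odd number might hide a vowel

cards = ["A", "K", "4", "7"]
print([card for card in cards if must_flip(card)])  # -> ['A', '7']
```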