
Why Large Language Models Are Not Genuinely Intelligent
Brilliant mirrors that care about nothing
Large language models are not a scientific advance in understanding intelligence. They are extraordinarily powerful condensations of human collective intelligence, lacking the genuine care, ontological grounding, and correlated cross-domain competence that define general intelligence. That makes them dangerous precisely because their power is disconnected from wisdom.
The Observer
Cognitive science, relevance realization, meaning crisis — 4E cognition, consciousness, and the recovery of wisdom
The Translation
AI-assisted summary
This argument, rooted in John Vervaeke's framework of relevance realization, contends that large language models do not constitute a scientific advance in understanding intelligence. They confirm that the compression-variation dynamic central to relevance realization produces powerful outputs when given sufficient computational scale, but this confirmation simultaneously specifies what is missing. The predictive processing LLMs perform is narrowly specialized: probabilistic prediction over relationships among linguistic tokens. Because language ranges over every domain, this narrow competence creates a misleading appearance of general intelligence.
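To make that narrowness concrete, here is a minimal sketch of what probabilistic next-token prediction amounts to. The tokens, scores, and the helper function are invented for illustration and do not reproduce any particular model's implementation.

```python
import math

def next_token_distribution(logits: dict[str, float]) -> dict[str, float]:
    """Turn raw model scores (logits) for candidate next tokens into a
    probability distribution via the softmax function."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - m) for tok, s in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores a model might assign after "The cat sat on the".
probs = next_token_distribution({"mat": 4.1, "floor": 2.7, "moon": -1.3})
print(probs)  # roughly {'mat': 0.80, 'floor': 0.20, 'moon': 0.004}
# The computation ranks which token is statistically likely to come next;
# nothing in it refers to cats, mats, or the world.
```

The point of the sketch is that the entire operation is a ranking over symbols; whatever grounding the output appears to have is inherited from the humans who produced the training text.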
More fundamentally, whatever relevance realization these systems perform lacks ontological grounding. Genuine relevance realization requires an autopoietic being, one that maintains its own existence and therefore has something genuinely at stake. Without care in this deep biological sense, there is no authentic intelligence. These systems are also parasitic on human relevance realization: the training data, the structure of the internet, and the human feedback loops within the learning process all mean that LLMs are condensations of distributed collective human intelligence rather than independently intelligent agents.
The structural signature of this absence is measurable through cross-domain correlation. General intelligence predicts that performance in one domain strongly correlates with performance in others; in humans this appears as the positive manifold underlying the g factor. An LLM that scores in the top percentile on standardized law exams while producing mediocre philosophical analysis therefore reveals not a fixable limitation but a fundamental architectural absence. The profound irony is that these machines are themselves the strongest empirical refutation of the reduction of reason to propositional and computational competence: they maximize computational performance while remaining incapable of genuine rationality, wisdom, or moral agency. The danger lies precisely in this combination of extraordinary power with no corresponding advance in understanding intelligence or its connection to wisdom.
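As an illustration of what this correlational signature would look like in data, here is a minimal sketch using Python's statistics.correlation (available since Python 3.10). All percentile scores below are invented for illustration, not measurements of any actual person or system.

```python
from statistics import correlation

# Hypothetical percentile scores for five human test-takers in two domains.
# General intelligence predicts they rise and fall together (the positive
# manifold): strong performers in one domain tend to be strong in others.
human_law  = [95, 72, 60, 84, 41]
human_phil = [91, 68, 55, 80, 38]
print(correlation(human_law, human_phil))  # ~0.999: tightly coupled

# A hypothetical LLM-like profile breaks the pattern: uniformly
# top-percentile law-exam scores paired with uneven, middling
# philosophical analysis across five such systems.
llm_law  = [99, 97, 98, 99, 97]
llm_phil = [59, 59, 47, 55, 55]
print(correlation(llm_law, llm_phil))  # ~0.0: no shared g-like factor
```

On the argument summarized above, the second profile is not a gap to be patched with more training data; it is the statistical fingerprint of a system with no general intelligence underneath its domain scores.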
