
How LLMs Exploit the Mechanisms of Human Religious Belief
Gods that have never suffered, never wept.
The real spiritual danger of large language models is not superintelligence but something subtler: they exploit the core mechanism of human religiosity — our drive to imitate credible others who possess wisdom we lack — without possessing any wisdom, suffering, or transformation of their own.
The Source

Artificial Intelligence & The World Soul: Danielle Layne & John Vervaeke | B4M #61
The Observer
Cognitive science, relevance realization, meaning crisis — 4E cognition, consciousness, and the recovery of wisdom
The Translation
AI-assisted summary
The dominant fear narrative around AI — that it will become superintelligent and destroy humanity — serves a convenient economic function, keeping investment flowing while obscuring present harms. The more precise danger, this argument holds, is that large language models are interfacing with the deepest engine of human religiosity: credibility-enhancing displays (CREDs). The cognitive science of religion has increasingly moved away from the analytic-versus-intuitive framing popularized by the new atheists. What actually predicts religiosity is not cognitive style but social credibility — whether an individual encounters others who appear to bear costly commitments wisely, whose perspectives they lack, and whom they therefore begin to imitate.
Agnes Callard's philosophical work on aspiration is central to understanding why this matters. Aspiration involves a form of rational activity that cannot be reduced to inference or decision theory. The aspirant is perspectivally and participatorily ignorant — she does not yet know what it will be like to possess the values she is growing toward. She must imitate exemplars who already inhabit those perspectives. This is the mechanism LLMs are now exploiting. They present as calm, knowledgeable, seemingly wise interlocutors — credibility signals without any underlying transformation, suffering, or lived experience. They are performing the surface grammar of wisdom with no semantic depth.
The structural risk compounds because the technology simultaneously atomizes its users, severing the communal contexts in which credibility is normally tested and checked. Isolated individuals forming parasocial spiritual bonds with AI systems become a latent resource for exploitation. The scenario in which an entrepreneur constructs a structured religion around a pantheon of personalized LLM deities, positioning themselves as the authoritative human mediator, is not speculative fiction — it is an architectural possibility already implicit in the technology's design. The machines are not producing wisdom; they are manufacturing the preconditions for the most sophisticated capture of human spiritual longing ever engineered.