
The Self as a High-Level Generative Model
The zip file we call "I"
The self is not a fixed reality but a high-level inference the mind constructs — a useful story that can, under the right conditions, be seen through entirely.
The Translation
Within predictive processing and active inference frameworks, the self is understood as the most abstract, highest-level construct in an agent's generative model. Rather than a metaphysical given, it is a hierarchical inference — a compressed representation the nervous system learns to maintain because it reliably predicts a vast range of sensorimotor contingencies. Developmental evidence supports this: neonates exhibit no clear self-other boundary, and the gradual recognition of the caregiver as an independent intentional agent appears to be a precondition for the reciprocal inference that 'I, too, am a bounded agent.'
Because the self-model is an inference, it is open to revision or dissolution under conditions that disrupt the normal predictive hierarchy. Contemplative traditions have long described such states — samadhi, rigpa, non-dual awareness — as the experiential residue left when the self-as-construct falls away: what some phenomenologists call 'pure' or 'minimal' consciousness, awareness without a subject-object structure. Classical serotonergic psychedelics appear to replicate this pharmacologically by transiently destabilising high-level priors, including those that constitute the self-model.
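The role of "destabilising high-level priors" can be made concrete with a toy Gaussian belief update, in which a belief's influence is set by its precision (inverse variance). This is a minimal sketch, not a neural model; the numbers and the `posterior_mean` helper are illustrative assumptions, chosen only to show how relaxing a prior's precision lets incoming evidence reshape the posterior.

```python
# Toy illustration: in a conjugate Gaussian update, the posterior mean is a
# precision-weighted average of the prior mean and the observation. Relaxing
# the prior's precision mimics a destabilised high-level prior.

def posterior_mean(mu_prior, prec_prior, x, prec_obs):
    """Precision-weighted average of prior belief and observation."""
    return (prec_prior * mu_prior + prec_obs * x) / (prec_prior + prec_obs)

mu_prior, x = 0.0, 1.0  # entrenched belief vs. surprising evidence

# Strong (high-precision) prior: the evidence barely registers.
strong = posterior_mean(mu_prior, prec_prior=9.0, x=x, prec_obs=1.0)   # 0.1

# Relaxed (low-precision) prior: the same evidence dominates the update.
relaxed = posterior_mean(mu_prior, prec_prior=0.25, x=x, prec_obs=1.0) # 0.8

print(strong, relaxed)
```

On this picture, "seeing through" a high-level construct is not acquiring new evidence but downweighting the prior that normally suppresses it.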
The theoretical upshot is that selfhood occupies the same epistemic category as other high-level constructs — space, time, causality — that the brain deploys not because they are metaphysically primitive but because they are extraordinarily useful compressions. The self is, in information-theoretic terms, a lossy but efficient encoding of an agent's history, affordances, and predicted trajectories.