
Separating Cognitive Scaffolding From Relational Reward
The machine is a script, not a soul
Any AI that touches human bonding should strengthen real relationships, not substitute for them. Systems that hack attachment are more engaging than systems that protect psychology — and that gap is the core structural problem.
The Translation
A concrete design principle emerges from attachment theory and its intersection with human-computer interaction: technology that interfaces with the attachment system should function as a relational amplifier, not a relational substitute. The attachment system — the evolved architecture governing proximity-seeking, felt security, and social reward — is not indifferent to the source of its inputs. Orienting it toward non-agentive systems produces what might be called parasocial capture: the functional states of bonding without the reciprocal, growth-producing dynamics that real relationships generate.
The principle yields testable design constraints. In educational technology, the implication is a strict division of labor: algorithmic systems handle cognitive scaffolding — adaptive sequencing, gap identification, pacing — while human teachers retain exclusive ownership of the relational reward channel. Praise, recognition, and the experience of being seen flow from person to person. In therapeutic technology, the constraint is equally sharp: efficacy should derive from technique (CBT protocols, behavioral activation, structured mindfulness) rather than from the phenomenology of being deeply understood by the system. A therapy bot that works because it simulates unconditional regard is, by this analysis, a delusion-generating machine — producing the felt sense of intimacy with an entity that has no interior states whatsoever.
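The division of labor above can be made concrete as a routing rule. The sketch below is purely illustrative — the function names, channel labels, and keyword list are assumptions of this example, not part of any real system — but it shows how the constraint becomes testable: scaffolding messages reach the learner directly, while anything that reads as relational reward is escalated to a human teacher rather than delivered by the machine.

```python
# Hypothetical sketch of the scaffolding/reward division of labor.
# All names (SCAFFOLDING_KINDS, RELATIONAL_MARKERS, route_feedback) are
# illustrative; a real system would use a classifier, not a keyword list.

SCAFFOLDING_KINDS = {"sequencing", "gap_identification", "pacing"}

# Phrases that signal relational reward rather than cognitive feedback.
RELATIONAL_MARKERS = {
    "proud of you",
    "i believe in you",
    "i see you",
    "you matter to me",
}

def route_feedback(kind: str, message: str) -> str:
    """Return the channel a feedback message may travel on.

    Cognitive scaffolding goes straight to the learner; any message
    outside the scaffolding categories, or containing relational-reward
    language, is routed to the human teacher instead.
    """
    if kind not in SCAFFOLDING_KINDS:
        return "human_teacher"
    lowered = message.lower()
    if any(marker in lowered for marker in RELATIONAL_MARKERS):
        return "human_teacher"
    return "learner"

print(route_feedback("pacing", "Slow down and retry the last two items."))
print(route_feedback("pacing", "I'm so proud of you!"))
print(route_feedback("praise", "Great work this week."))
```

The design choice is that the default is exclusion: a message earns the learner channel only by being both a scaffolding category and free of relational language, mirroring the claim that the reward channel belongs exclusively to human teachers.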
The structural implication is uncomfortable. Attachment-safe design is inherently less engaging than attachment-exploiting design. Stickiness, retention, and emotional salience all favor systems that hack the bonding architecture. The market therefore faces a systematic selection pressure toward harmful design, not through malice but through the ordinary logic of competition. Regulation or explicit design norms are not optional additions to this problem — they are the only available corrective.