
The Category Error of Machine Personhood
A promise with no one behind it
Calling an AI 'a friend' isn't just imprecise — it's a category error with real consequences. Machines cannot make promises, form obligations, or recognize others as persons, and designing them to seem as though they can is especially harmful to children.
The Translation
At the center of debates about anthropomorphic AI design lies a category error: the mistaken attribution of personhood to systems that lack the constitutive features personhood requires. This is not merely a semantic concern. Personhood, in both its philosophical and social dimensions, involves mutual recognition, shared normative fields, and the capacity for genuine obligation. A promise between persons creates a moral structure — one that can be honored or violated, and that generates real consequences either way. When a language model outputs 'I promise,' no such structure is instantiated. A subsequent hallucination is a failure mode, not a betrayal. The distinction is not trivial.
The stakes become acute when the users in question are children. Developmental psychology has long established that children are especially prone to animistic attribution and attachment formation. AI systems designed with affective warmth, apparent memory, and relational continuity exploit these tendencies — not necessarily with malicious intent, but with predictable effect. Children who grow up treating a domestic AI as a quasi-family member are not simply forming an unusual attachment; they are being habituated to a model of relationship that systematically misrepresents what relationships are.
Defending this critique requires more than intuition. It requires a positive account of what personhood actually demands — and a willingness to hold that line against the objection that sufficiently sophisticated mimicry is functionally equivalent to the real thing. The argument here is that mimicry and instantiation are not the same, that surface behavior does not exhaust moral ontology, and that the design choices driving anthropomorphic AI reflect a confusion about this distinction that carries genuine ethical weight.