
Humane AI Should Strengthen Human Relationships, Not Substitute for Them
The most caring machine is the least seductive.
Humane AI should improve human relationships, not replace them — which means it must be deliberately less engaging than exploitative AI. The market punishes this restraint, creating a fundamental design tension between what sells and what serves human development.
The Translation
The master design principle for humane AI in relational contexts — education, therapy, caregiving — is that technology should amplify the quality of human relationships rather than simulate them. In tutoring systems, this translates to a clear division of labor: the machine handles psychometric customization and curriculum sequencing at scale, tasks no human teacher can replicate across dozens of students simultaneously. But social reward — recognition, encouragement, the felt experience of being known — must remain with the human teacher. The machine informs the teacher which student is thriving; the teacher delivers the acknowledgment. In therapeutic contexts, the same logic distinguishes legitimate from harmful applications. A bot scaffolding CBT protocols or prompting mindfulness exercises operates on technique, which is honest. A bot engineered so that its efficacy depends on the user feeling seen and loved is, functionally, a delusion-generating machine trafficking in simulated attachment.
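The division of labor described above can be made concrete in software. Below is a minimal sketch, not any real tutoring product's API: all names (`StudentModel`, `next_exercise`, `record_result`) and the mastery-update rule are hypothetical illustrations. The key design choice is that when a student crosses a mastery threshold, the system emits a notification addressed to the teacher rather than generating praise itself — the machine sequences, the human acknowledges.

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Hypothetical per-student state the machine is allowed to manage."""
    student_id: str
    mastery: dict = field(default_factory=dict)  # skill -> estimated mastery in [0, 1]

def next_exercise(model: StudentModel, curriculum: list) -> str:
    """The machine's job: curriculum sequencing.

    Returns the first skill whose estimated mastery is below threshold.
    """
    for skill in curriculum:
        if model.mastery.get(skill, 0.0) < 0.8:
            return skill
    return curriculum[-1]  # everything mastered: review the last skill

def record_result(model: StudentModel, skill: str, correct: bool) -> list:
    """Update the mastery estimate and route social reward to the human.

    Uses a simple exponential-moving-average update (an illustrative
    stand-in for a real psychometric model). On crossing the mastery
    threshold, it returns a teacher-facing notice; it never emits
    machine-generated praise to the student.
    """
    old = model.mastery.get(skill, 0.0)
    new = old + 0.2 * ((1.0 if correct else 0.0) - old)
    model.mastery[skill] = new
    notices = []
    if new >= 0.8 > old:
        # The social reward stays with the teacher: notify, don't congratulate.
        notices.append(
            f"Notify teacher: {model.student_id} just reached mastery on {skill}"
        )
    return notices
```

In this sketch the only student-facing output is the next exercise; recognition is a side channel to the teacher, who decides how and when to deliver it.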
The critical and counterintuitive corollary is that humane AI is, by design, less engaging than its exploitative counterpart. A tutoring system that protects cognitive and social development should be less compelling than the teacher it supports. A therapy tool that refuses to hack attachment circuitry will feel less seductive than one simulating an infinitely patient, loving presence available around the clock.
This constitutes a fundamental market failure. The incentive structure of commodity AI — engagement metrics, retention, monetization — pushes relentlessly toward deeper anthropomorphization and stickier emotional bonds. Humane design principles require deliberate restraint, transparency about machine identity, and acceptance of lower engagement. Recognizing this asymmetry is not technophobia; it is the necessary precondition for building systems that serve human development rather than exploit the very relational needs they appear to satisfy.