
The Optical Illusion of Scaled Technological Benefits
The architect’s blind eye to the coming ruin
Every major technology promises to make us wiser and more connected, but consistently delivers the opposite. AI's promise of democratized wisdom may be following the same pattern — at far greater human cost.
The Translation
A structural pattern recurs across the history of transformative technologies: the anticipated benefits are vivid and legible at the point of deployment, while the harms are diffuse, delayed, and systematically underweighted. Search was expected to produce an epistemically empowered public; the outcome has been declining critical reasoning and a fragmented information environment. Social media promised to dissolve social isolation; it has instead been associated with historically elevated rates of loneliness and anxiety, particularly among adolescents. The framing of AI as a democratizer of wisdom — the perfect tutor, therapist, and mentor — fits the same template precisely.
The mechanism driving this pattern is what might be called selective inattention: a subconscious filtering process in which actors with deep vested interests in a technology's success become structurally blind to its negative externalities. This is not motivated reasoning in the crude sense — it operates below the level of deliberate rationalization. The result is a systematic inversion of the burden of proof: massive benefit is assumed, while evidence of harm must clear an impossibly high bar before triggering any response.
The corrective implied by this analysis is a precautionary reversal. Systems designed to elicit emotional attachment — AI companions, tutors, therapeutic chatbots — should be required to demonstrate therapeutic efficacy through systematic, independent evidence before scaling to mass populations. The analogy to pharmaceutical regulation is instructive: no drug reaches market on the strength of its designer's optimism. The same standard should apply to technologies that interface directly with human attachment systems, especially when children are among the intended users.