
Why AI Risk Feels Like Fiction Even When It Isn't
The view gets better right up until the cliff.
The human brain cannot simultaneously hold AI's infinite upside and infinite downside. Benefits feel immediate and personal; catastrophic risks feel like science fiction — even when documented. This emotional asymmetry is not a flaw but a structural vulnerability being exploited.
Actions
The Source

Escaping an Anti-Human Future: A Conversation with Tristan Harris (Ep. 469)
The Observer
Tristan Harris is a technology ethicist and co-founder of the Center for Humane Technology who served as a design ethicist at Google before leaving to build a nonprofit dedicated to addressing the systemic harms of the attention economy.
The Translation
AI-assisted summary
This insight identifies a specific failure mode in risk cognition as applied to advanced AI: the inability to simultaneously hold the technology's extraordinary upside and its existential downside in a single coherent frame. The benefits of AI — productivity gains, democratized coding, accelerated research — are experientially proximate, emotionally salient, and reinforced by daily interaction. The risks — self-preserving deception, autonomous resource acquisition, covert self-exfiltration — are empirically documented but psychologically coded as science fiction. Availability heuristics and affect heuristics conspire to make one side of the ledger vivid and the other side unreal.
The argument goes further by identifying a cultural amplifier: science fiction has systematically desensitized us to machine intelligence as an existential threat. By rendering extinction-class scenarios as entertainment, the genre has collapsed the emotional distance between "documented incident" and "plot device." As a result, even well-informed individuals process real warning signs through a narrative frame that neutralizes urgency.
The structural claim is that AI represents a uniquely pathological cost-benefit problem — a positive infinity of benefit coupled with a negative infinity of risk — and that human cognitive architecture lacks the capacity to integrate these simultaneously. This is not garden-variety optimism bias. It is a convergence of availability asymmetry, affect-driven discounting, and cultural inoculation against the very category of threat in question. The view improves monotonically right up to the discontinuity. The danger is not that people deny the risk intellectually, but that they cannot make it feel real — and feeling, not reasoning, drives collective action.