
Defection Logic in the AI Arms Race
Building gods on a foundation of fear
The AI race isn't really between nations — it's a corporate prisoner's dilemma where every actor defects because they fear someone else will. The result is the most powerful technology in history being built under maximum pressure to skip safety.
The Translation
The dominant framing of AI competition as a geopolitical race between nation-states obscures the more structurally significant dynamic: a cascading series of corporate defections driven by a self-fulfilling belief in inevitable development. The logic — 'if I don't build it, someone else will' — functions as a coordination trap. Each actor, reasoning individually, concludes that unilateral restraint is irrational, producing collective outcomes that no individual actor would endorse if choosing for the group.
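The coordination trap described above can be made concrete with a toy payoff model. The sketch below is purely illustrative: the player count, payoff numbers, and function names are assumptions chosen to satisfy the classic dilemma structure (defection strictly dominates for each individual, yet universal restraint beats universal defection collectively), not data about actual labs.

```python
# Toy N-player "race" game illustrating the defection logic.
# All payoff values are hypothetical, chosen only to exhibit the trap.

PLAYERS = 3  # assumed number of competing labs

def payoff(choice: str, others_building: int) -> int:
    """Illustrative payoff for one actor.

    Building earns a slightly larger prize than restraining, but every
    additional builder imposes a shared risk cost on everyone.
    """
    base = 10 if choice == "build" else 8
    return base - 6 * others_building  # crowded races hurt all players

# Individual logic: whatever the others do, building pays strictly more,
# so "if I don't build it, someone else will" is locally rational.
assert all(
    payoff("build", n) > payoff("restrain", n) for n in range(PLAYERS)
)

# Collective logic: the all-defect equilibrium is worse for everyone
# than coordinated restraint would have been.
all_build = PLAYERS * payoff("build", PLAYERS - 1)      # 3 * (10 - 12) = -6
all_restrain = PLAYERS * payoff("restrain", 0)          # 3 * 8 = 24
assert all_build < all_restrain
```

Each actor's dominant strategy is to build, yet the resulting equilibrium leaves the group strictly worse off than mutual restraint, which is exactly the gap a binding coordination mechanism would close.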
The genealogy of major AI labs illustrates the mechanism directly. OpenAI was founded partly out of distrust of Google's stewardship of transformative AI. Anthropic was founded out of distrust of OpenAI. Each departure was justified in safety terms, yet each also added a new well-funded competitor to the race. Crucially, the China threat — now central to AI investment narratives — was substantially amplified by actors with incentives to accelerate, and that amplification appears to have materially increased Chinese investment in AGI-oriented research, making the threat more real in the process.
The result is a starkly imbalanced allocation of resources: capability investment outruns safety and interpretability research by a ratio of roughly 2,000 to 1. This is not a market failure in the ordinary sense; it is a defection equilibrium in a multi-player game with no binding coordination mechanism. What distinguishes this from prior technological arms races is the combination of recursive self-improvement potential, the deep inscrutability of the systems, and the absence of any historical precedent that would let actors credibly commit to restraint.