
AI Accelerates the System That Produces It, Making It Uniquely Dangerous
The cliff approaches faster now.
AI differs from every other civilizational risk because it is an output that recursively accelerates the capacity that produced it. The feedback loop is already operative at the sociotechnical level, and because market incentives and geopolitical competition govern the acceleration, AI acts as a force multiplier for the very dynamics driving civilization toward collapse.
The Observer
Distributed governance, collective intelligence, game B — epistemology, sense-making, and the design of resilient social systems
The Translation
AI-assisted summary
The core structural argument is that AI possesses a property no previous civilizational-scale risk has shared: it is an output that becomes an input to the process that generated it. Nuclear weapons, CRISPR, and PFAS are all outputs of a given level of human technological capacity. They do not recursively enhance that capacity. AI does. A software developer augmented by current models produces surplus value that funds the development of more capable models, which further amplify developer productivity, which concentrates more resources toward AI development. The feedback loop is already operative — not at the level of machine self-improvement, but at the level of the sociotechnical system: the global economy, the market, the network of human actors and institutions.
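The compounding dynamic described above can be sketched as a toy difference equation: capability raises productivity, a share of the resulting surplus is reinvested, and reinvestment raises capability. This is a minimal illustrative sketch only; the function name and every parameter value are hypothetical, not empirical estimates.

```python
# Toy model of the sociotechnical feedback loop: AI capability amplifies
# developer productivity, surplus value is reinvested, and reinvestment
# yields more capable models. All numbers are illustrative assumptions.

def run_loop(steps: int, capability: float = 1.0,
             reinvest_rate: float = 0.3, conversion: float = 0.2) -> list[float]:
    """Return the capability trajectory over `steps` iterations of the loop."""
    history = [capability]
    for _ in range(steps):
        productivity = capability                 # developers amplified by current models
        surplus = productivity * reinvest_rate    # value captured and reinvested
        capability += surplus * conversion        # reinvestment produces better models
        history.append(capability)
    return history

trajectory = run_loop(10)
```

Each step multiplies capability by a constant factor (here 1 + 0.3 × 0.2 = 1.06), so the increments grow over time: the loop is geometric, not linear, which is the structural point of the argument.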
This reframes the Singularity thesis. The classical Kurzweilian argument depends on a moment of autonomous recursive self-improvement by a machine intelligence. The insight here is that no such moment is required. The collective-intelligence system — capital markets, geopolitical competition, developer ecosystems — already constitutes the recursive loop. The acceleration follows the Kurzweilian curve regardless of whether any individual AI system achieves self-directed improvement.
The directionality of this acceleration is the decisive concern. Drawing on the Game A / Game B framework, the argument holds that the recursive loop is governed by what might be called Mammon (market incentives) and Moloch (multipolar competitive traps). Every increment of AI capability is immediately recruited into the logic of these dynamics. If Game A — the current competitive, extractive civilizational operating system — is on a trajectory toward systemic failure, then increasingly capable and inexpensive AI does not offer a solution. It functions as a force multiplier for the problem itself.
