
How Social Media's Attention Economy Became a Template for AI Harm
The infrastructure was quietly repurposed.
Social media began as a connectivity tool but became an attention extraction system that empowers extreme actors. AI will not create new perverse incentives — it will supercharge the existing ones, making the choice to leave current incentive structures unchanged an active decision to accelerate toward outcomes already identified as harmful.
The Source

Jonathan Haidt (author of The Anxious Generation) with Tristan Harris and the Center for Humane Technology
The Observer
The Translation
The initial optimism surrounding social networking reflected a well-grounded historical pattern: improvements in connectivity have reliably produced gains in information sharing, innovation, and economic productivity. The early internet validated this expectation. The critical inflection point came with the introduction of algorithmic curation, engagement metrics, and performance-driven newsfeeds. These layers transformed what had been a communication infrastructure into an attention extraction system, optimized not for connection but for content that maximizes views, likes, and time-on-platform. This architectural shift triggered a predictable dynamic: any open system will be colonized by the actors most motivated to exploit it. The result has been the systematic empowerment of extreme political actors, foreign intelligence operations, and coordinated influence campaigns, while the median user experiences growing disorientation.
The argument extends to artificial intelligence not as a novel threat vector but as an accelerant of existing perverse incentive structures. AI does not introduce fundamentally new failure modes; it discovers and exploits every available path to the incentives already embedded in the system, with far greater efficiency. Immersive gaming becomes more captivating, AI companions become more responsive and eventually embodied, synthetic content becomes indistinguishable from authentic material, and persuasive political content becomes producible at industrial scale.
The implication is precise and consequential: AI is not inherently catastrophic, but deploying it atop unreformed incentive structures is an active choice rather than a passive technological development. If the current configuration of digital platforms is producing measurably more addicted, isolated, and radicalized populations, then multiplying the power of those same configurations is not neutral; it is an acceleration toward a destination already identified as harmful. The intervention point is the incentive architecture itself, and it must be addressed before the amplification occurs.