
Three Compounding Vectors Driving AI Capability Beyond Predictable Horizons
The map expires before the ink dries.
AI capability is advancing along three simultaneous vectors: model intelligence, hardware cost, and agent orchestration. Because these vectors compound, strategic planning must treat human involvement as an adjustable parameter rather than a fixed assumption; the most dangerous mistake is underestimating where things will be in six months.
The Observer
Complexity science, Game B, social technology — systems thinking and civilizational design from the Santa Fe Institute
The Translation
AI-assisted summary
This insight identifies three compounding vectors of AI capability improvement that most strategic actors fail to track simultaneously. The first — model capability — is widely followed but still routinely underestimated in its pace. The second — hardware economics — is declining faster than Moore's Law projections because GPUs, unlike CPUs, are architecturally simple devices manufactured through massive parallelized stamping, yielding steeper cost curves. The third and most underappreciated vector is agent frameworks: the orchestration layer governing how models are composed, sequenced, and directed through multi-step reasoning chains. This layer is improving at nearly unmeasurable speed because it is pure software, and because the models being orchestrated are themselves exceptional at writing the code that constitutes these frameworks. The recursive loop — AI accelerating the development of AI orchestration — represents a compounding factor of roughly 10x to 30x on development velocity.
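The multiplicative nature of the three vectors can be made concrete with a toy calculation. The sketch below is purely illustrative: the function name and the per-period multipliers are assumptions chosen for demonstration, not figures from the source.

```python
# Toy model: three independent improvement vectors compound
# multiplicatively, not additively. All numeric rates below are
# illustrative assumptions, not measurements.

def compounded_gain(model_gain: float,
                    hardware_gain: float,
                    orchestration_gain: float) -> float:
    """Combined capability-per-dollar multiplier over one period."""
    return model_gain * hardware_gain * orchestration_gain

model = 3.0          # model capability improvement (assumed)
hardware = 2.5       # GPU cost-performance improvement (assumed)
orchestration = 4.0  # agent-framework velocity, pure software (assumed)

total = compounded_gain(model, hardware, orchestration)
print(f"combined multiplier: {total:.0f}x")  # prints "combined multiplier: 30x"
```

Even modest per-vector gains multiply into a combined figure that linear, single-vector forecasting misses entirely, which is why tracking any one vector in isolation understates the pace.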
The strategic implication of compounding these three vectors is that confident planning horizons have collapsed to approximately six weeks. But the response should not be to abandon planning — it should be to architect flexibility into every system design. Specifically, the human-in-the-loop should be parameterized rather than fixed: designed as an adjustable slider governing the degree of human oversight, capable of being dialed down as model and orchestration capabilities increase.
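One way to picture "human-in-the-loop as a slider" is an oversight policy object whose autonomy level is a runtime parameter rather than a hard-coded review step. This is a minimal sketch under assumed names (`OversightPolicy`, `autonomy`, `needs_human_review`); none of these appear in the source.

```python
from dataclasses import dataclass

@dataclass
class OversightPolicy:
    """Human oversight as a tunable parameter, not a fixed assumption.

    `autonomy` ranges from 0.0 (every action is reviewed by a human)
    to 1.0 (no action is reviewed). All names here are illustrative.
    """
    autonomy: float = 0.0

    def needs_human_review(self, action_risk: float) -> bool:
        # Route to a human whenever the action's estimated risk
        # exceeds the current autonomy setting.
        return action_risk > self.autonomy

policy = OversightPolicy(autonomy=0.3)  # conservative setting today
print(policy.needs_human_review(0.5))   # prints "True": risky step gets review

policy.autonomy = 0.8                   # dialed up as capability improves
print(policy.needs_human_review(0.5))   # prints "False": same step runs autonomously
```

The point of the design is that dialing oversight down requires changing one number, not re-architecting the system, so the human dependency never hardens into a structural bottleneck.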
The most prevalent strategic error is directionally wrong in a specific way. Organizations are not primarily at risk of over-automating today; they are at risk of underestimating capability six months out and building systems with rigid human dependencies that will become bottlenecks precisely when the technology is ready to move past them.
