Our Necessary AI Delirium
In 1939, with Britain facing annihilation, Alan Turing persuaded his government to fund a contraption few understood: the Bombe, a clattering, room-sized machine built to find the daily settings of the Nazi Enigma cipher. It was costly, speculative, and to many, a mathematician’s fantasy. Yet that fantasy helped shorten the war by years. Turing’s wager is a reminder that civilizational leaps are rarely the products of prudent budgeting. They are bets on the improbable, backed by a kind of collective delirium that only looks rational in hindsight.
Today, we face a similar wager with artificial intelligence. The question of whether we are overspending cannot be answered with balance sheets alone. It is a bet on our collective imagination. Reading Byrne Hobart and Tobias Huber’s Boom: Bubbles and the End of Stagnation, one is struck by their central heresy: that bubbles, those irrational, frenzied manias, are not pathologies but necessary accelerants of progress. Without them, the authors argue, stagnation hardens into a terminal condition.
The Heresy of the Productive Bubble
No rational cost-benefit analysis would have justified the Manhattan Project, which spent tens of billions in today's money on an unproven physics experiment. Yet it transformed physics into geopolitics in three years. The Apollo Program was less an engineering inevitability than a collective hallucination funded by taxpayers underwriting an improbable vision. Both efforts were fueled by what Hobart and Huber call “reality-bending delusions”, the sort of thymos, or spirited drive, that civilization requires to escape entropy.
These were not just any bubbles. They were innovation-accelerating bubbles, phenomena that coordinate behavior, attract vast funding and talent, and enable the massive parallelization of innovation needed for transformative breakthroughs. They succeed because they foster a definite optimism about a specific, tangible future, creating a "reality distortion field" that makes the impossible seem achievable.
AI: Our Manhattan Project?
AI is our generation's Manhattan Project and Apollo rolled into one. But here the question of overspending acquires its sting. Unlike the uranium plants at Oak Ridge or the Saturn V rockets at Huntsville, much of AI’s capital is invisible, dematerialized into GPUs and data centers. Hobart and Huber warn of a culture that prefers simulations to reality, the hyper-real to the real. AI embodies this paradox: it produces dazzling simulations of intelligence even as our physical infrastructure crumbles.
Are we overspending? Yes, if the funding produces only more simulacra, chatbots with cleverer parlor tricks and financial markets whirring ever faster in recursive loops. But if AI’s excess funding yields a new substrate for discovery, as I have suggested recently, if it accelerates biology as AlphaFold did, or revolutionizes energy, medicine, or governance, then it will have justified its bubble.
The Danger of the Wrong Delusion
The key, as Hobart and Huber argue, is distinguishing between productive and destructive manias. The dot-com crash left us with fiber-optic cables and an internet backbone. It was an inflection-driven bubble, a bet that the future would be radically different. The 2008 housing bubble, by contrast, was a mean-reversion bubble, a leveraged bet that the recent past would continue indefinitely, which left only ruins.
Financialized froth without a technological substrate is nihilism disguised as investment. The critical question for AI is whether it is building a new reality or merely creating a more elaborate simulation of the old one.
The Real Civilizational Waste
The true overspending is not on AI but on what Hobart and Huber call the “ideology of stasis”. It's the trillions poured into bureaucratic quantification, financial engineering designed to suppress volatility, and a culture of safetyism that fears risk above all else. A trillion dollars allocated to index funds is more wasteful than a trillion dollars poured into GPUs, if the latter buys us even a fraction of a genuine transformation.
The question is less “Are we overspending in AI?” than “Are we willing to accept the risks of spending on the right kind of delusion?” As Hobart and Huber put it, only innovation-accelerating bubbles can prevent apocalypse. The danger is not excess but anemia. The worst future is not one where we squander billions on failed AI projects, but one where, out of fear of folly, we never attempt the great ones.
Stay curious
Colin



Hobart and Huber's praise quotes on Amazon are from Andreessen and Thiel, and those two certainly are transformational. But you can be for AI and still against their vision of it, which promises more wealth to the wealthy and unspecified indirect benefits to everyone else. We're past any practical way to stop AI, so the question isn't gas versus brake; it's steering. If you were in a fast car, would you close your eyes and hit the gas? Where are we going?
I'm reminded of the data center that MuskRat built in Memphis, powered with methane. Aside from exacerbating global warming and adding pollution to an area already beset by one of the highest asthma rates in the country, its highest aspiration is to enrich MuskRat further.
Both the Manhattan Project and Apollo involved varying degrees of public/private partnership. We're not currently seeing this with AI; it's entirely private and profit-driven. That's not to say there can't be benefits in the future, as with AlphaFold and CRISPR. But for the time being, we seem to be headed primarily for more simulacra: chatbots with clever parlor tricks, and financial domination.