Sam Altman took this selfie with me at a conference in 2023.
I was initially very sceptical about reading Karen Hao’s Empire of AI. I had preconceived ideas about it being gossip and tittle-tattle. I know, have worked with, and admire many people at OpenAI and several of the other AI labs. But I pushed aside my bias and read it cover to cover. And even though there was little new in the book for me, having been in the sector so long, I am happy I read it. I am happy because Hao’s achievement is not in revealing secrets to insiders, but in providing the definitive intellectual and moral framework for understanding the story we have all been living through.
In the beginning, there was only a flicker of conviction, a belief system so potent it could conjure a $157 billion¹ valuation from little more than predictive text. That is the story of the founding of OpenAI in a nutshell. Hao’s book is not simply a chronicle of a company; it is the anatomy of a revolution in cognition, belief, and economic power, conducted in the syntax of code and cloaked in the euphemisms of progress. To read it is to witness not just the ascendance of Sam Altman (although in Silicon Valley he was already a major player), but the tectonic collapse of governance, transparency, and democratic control over the most consequential technology since the internal combustion engine.
“You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king,”
Paul Graham once said of Altman. In 2009, Graham also wrote that Sam was one of the five most interesting founders of the previous 30 years:
“Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask ‘What would Steve do?’ but on questions of strategy or ambition I ask ‘What would Sama do?’”
In Empire of AI, however, Hao doesn’t merely catalogue Altman's shrewdness. She situates him, precisely and mercilessly, in the traditions of imperial ambition and theological seduction. Altman doesn’t build a company; he convenes a sect. He doesn't pitch investors; he recruits acolytes.
“… the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion.”
Altman once mused. It's not a metaphor. It's a business model.
According to Hao, this is the essence of OpenAI: a theological empire masquerading as a research lab, cloaked in the missionary grammar of safety, alignment, and humanity. Yet Hao’s indictment is not merely rhetorical. It is forensic. With over 300 interviews and a vast corpus of internal documents, she reveals a state on the brink of civil war.
The November 2023 boardroom coup, in which Altman was fired, the nonprofit structure imploded, and he was restored like a monarch-in-waiting, was not a simple power struggle; it was the violent eruption of an ideological rift. On one side stood the “Doomers”: safety-obsessed researchers like Ilya Sutskever, Helen Toner and Jan Leike, gripped by existential dread. On the other were the “Boomers”: Altman, Greg Brockman, and their cohort, high on scale and hungry for deployment. The mission was no longer unified. It was factionalized, fragmented, and combustible.
In hindsight, from my vantage point, it was Sutskever, more than Altman, who pursued the religious cult, with his “feel the AGI” chants.
To understand OpenAI's betrayal of its founding ethos, one must track its metamorphosis. Hao lays bare the structure: a capped-profit subsidiary nested within a nonprofit shell, a legal ouroboros whose true function is to obscure the conversion of mission into market share. This isn't just regulatory arbitrage. It's a philosophical sleight of hand. The board had one job: to guard humanity against AGI’s misuse. Instead, they became stewards of a valuation.
“Visualize the size of the cluster,” Sutskever reportedly told employees, urging them to take solace in the majesty of GPU architecture as their institution unraveled. The phrase becomes, in Hao's telling, an epitaph for a mission lost. Where once there was a commitment to openness and collective stewardship, now there are only proprietary datasets trained on the detritus of the internet, scraped, sorted, and sanitized by human beings like Mophat Okinyi in Nairobi, who earned less than $2 an hour to filter violent sexual content to make ChatGPT palatable for Western audiences.
The empire does not only seize minds and data. It seizes water, power, and geography. In Chile, data centers for generative AI guzzle groundwater from drought-stricken regions. Hao doesn’t merely show us extraction; she names the plunder.
What distinguishes Empire of AI is its refusal to indulge in mysticism. Generative AI, Hao shows, is not destiny. It is the consequence of choices made by a few, for the benefit of fewer.
Hao compels us to take the claim literally. This new faith has its tenets: the inevitability of AGI; the divine logic of scaling laws; the eschatology of long-termism, where harms today are justified by an abstract future salvation. And like all theologies, it operates best when cloaked in power and shorn of accountability.
Yet Hao’s investigation does not leave us in the ruins. It points toward an alternative. In New Zealand, she introduces Te Hiku Media, a Māori organization using small-scale AI to revitalize their Indigenous language on their own terms. Their project is built not on extraction, but on consent, data sovereignty, and community benefit. They are not building an empire; they are strengthening a democracy.
This is the book’s final, piercing insight. The antidote to OpenAI’s imperial ambition is not a competing empire. It is a fundamental redistribution of power; the insistence that a technology this consequential cannot be governed by a secretive priesthood.
Hao has not just written an exposé; she has provided the foundational text for our technological reformation. If AGI is coming, it must not arrive as an emperor. It must be governed as a republic.
Empire of AI is not just a critique. It is a blueprint for reclamation. Read it alongside The Coming Wave by Mustafa Suleyman, and refuse to bow.
Stay curious
Colin
¹ The valuation is north of $500 billion now.
"...conducted in the syntax of code and cloaked in the euphemisms of progress".
This is the very embodiment of everything Silicon Valley stands for.
//
As Altman once mused: "… the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion."
Or, perhaps more accurately, a cult. The cult of vulture capitalism.
//
"...cloaked in the missionary grammar of safety, alignment, and humanity"
They're disingenuous to the core.
//
"...whose true function is to obscure the conversion of mission into market share. This isn't just regulatory arbitrage. It's a philosophical sleight of hand. The board had one job: to guard humanity against AGI’s misuse. Instead, they became stewards of a valuation".
The board was the misdirection - a primary requirement for successful legerdemain.
//
"It is the consequence of choices made by a few, for the benefit of fewer".
And therein lies the problem. It's no coincidence that this description perfectly fits today's U.S. federal government.
//
"This new faith has its tenets: the inevitability of AGI; the divine logic of scaling laws; the eschatology of long-termism, where harms today are justified by an abstract future salvation. And like all theologies, it operates best when cloaked in power and shorn of accountability".
Again, perfectly describing the current U.S. federal government.
//
We have our work cut out for us, and it's a very long term project. A Gantt chart stretching for miles in both directions.
Perfect timing for this post, as I just began reading Empire of AI.
The tenor of this post seems to reflect more concern than in our recent conversation on whether OpenAI can be trusted with our data. Or perhaps, this is merely a reflection of our mutual dissonance on this topic of AI, the ongoing struggle to maintain techno-optimism in the face of ongoing concerns.
Governance was highlighted in your last post on cognitive warfare, which I am still pondering. You stated, "if AGI is coming, it must not arrive as an emperor. It must be governed as a republic." I'm curious whether your students dialogue with you on what their role is in guarding humanity against the abuse of AI, and if so, what your reply is.
Governance is imperative, yet it moves at a snail's pace compared to AI's expeditious rocket pace. A friend recently presented a seminar on AI governance. One audience member attacked his presentation, saying, "To take the time for governance, we'd be lifting our foot off the gas pedal; companies can't afford to do that." My friend replied that governance is an indispensable coolant in the AI engine. I rather liked that metaphor.