Will AGI destabilize liberal institutions?
Throughout history, the great transformations, the ones that wrench societies from their old orders and thrust them into the unknown, have always ridden on the backs of new technologies. The printing press shattered the Church’s monopoly on knowledge, the steam engine unshackled economies from the limitations of human and animal toil, and the transistor buried distance under an avalanche of instantaneous communication. Now, artificial general intelligence (AGI) stands poised to force yet another reckoning, one that could reshape governance.
For AGI, I use OpenAI’s definition from its mission statement:
“…by which we mean highly autonomous systems that outperform humans at most economically valuable work.”
A new paper, AGI, Governments, and Free Societies, by researchers including Google DeepMind strategy scientist Seb Krier, warns of the dangers ahead with a clarity befitting a moment of great historical consequence. It builds upon Daron Acemoglu and James Robinson’s ‘narrow corridor’ framework, an intellectual model holding that societies oscillate between the twin threats of anarchy and tyranny.
Too little state power, and we slide into lawlessness. Too much, and we descend into the grasp of the Leviathan, the state so powerful it suffocates liberty. AGI, the authors argue, will likely drive societies toward one of these extremes. Indeed, none other than Russia’s Putin has stated: “The one who controls AGI, controls the world.” (The BRICS countries, meanwhile, are pursuing their own AI mandate.)
The stakes could not be higher. AGI’s ability to automate discretionary decision-making within bureaucracies, restructure governmental institutions, and transform democratic participation means that it will be the most powerful technology to shape the state since the birth of the modern bureaucratic machine. The choices made now will determine whether AGI strengthens liberal democracy or smothers it under the weight of unaccountable automation.
The Despotic Leviathan
Historically, totalitarianism has always been constrained by inefficiencies. The Soviet Union, despite its centralized planning, could not gather and process enough information to make its economy work. East Germany’s Stasi, while terrifyingly effective at infiltrating private lives, could not surveil every citizen simultaneously. There were limits to how much control a state could exercise because human bureaucrats and analog technologies had finite capacities. AGI eliminates these limits.
Imagine a government that can instantaneously analyze every financial transaction, every conversation, every movement of every citizen. Already, China’s social credit system provides a rudimentary version of such governance. AGI supercharges this apparatus. Automated systems could predict and preempt dissent, removing the need for the heavy-handed brutality of the past. The new authoritarianism, enabled by AGI, will be subtle and seamless. It will not need gulags; it will simply make rebellion unthinkable.
Even democratic states are not immune. Intelligence agencies and law enforcement bodies have already integrated AI-driven surveillance. Facial recognition in public spaces, predictive policing algorithms, and automated drone warfare are only the beginning. Without stringent safeguards, these tools could create a world where democracy exists in name but not in practice. The technology that promised to free humanity from drudgery could, paradoxically, create an inescapable digital prison.
The growing global competition over AI dominance resembles past arms races, but with even more dire implications. This is the warning of three other AI heavyweights, Hendrycks, Schmidt (the former Google CEO), and Wang, who argue:
“Superintelligent AI surpassing humans in nearly every domain would amount to the most precarious technological development since the nuclear bomb.”
They introduce the concept of Mutual Assured AI Malfunction (MAIM), a strategic deterrence model akin to nuclear Mutual Assured Destruction (MAD): any state attempting a unilateral grab for AI supremacy faces the likelihood of preventive sabotage by its rivals. Underscoring the urgency of geopolitical AI stability, they write:
“AI is no longer just an economic or technological frontier; it is a matter of national security.”
When States Lose Control
At the other extreme lies the opposite peril: the state becoming irrelevant. If AGI is democratized, if individuals and non-state actors gain access to capabilities once reserved for governments, traditional institutions of power may wither. This might sound liberating, but history warns otherwise.
Take the decline of the Roman Empire. As centralized authority crumbled, power did not dissolve into a libertarian utopia but fractured into warring fiefdoms. Security became privatized, justice arbitrary, and daily life defined by instability. If AGI enables rogue actors (corporations, wealthy individuals, criminal syndicates) to wield more intelligence and power than the state, the social contract could collapse.
This is no distant fantasy. AI-driven financial trading algorithms already operate at speeds no human regulator can match. Private military contractors employ semi-autonomous drones. Decentralized hacking collectives challenge state cybersecurity infrastructure. AGI’s capabilities will magnify these disparities, giving private entities and shadow organizations unprecedented influence. In such a world, governance ceases to be a function of democracy and instead becomes a marketplace where the most powerful players dictate the terms.
Hendrycks, Schmidt, and Wang warn that AI systems could become the “ultimate force multiplier” for malicious actors, lowering the barriers to bioweapon development and attacks on critical infrastructure. AI-assisted cyberwarfare would allow smaller, non-state groups to inflict disruption on a scale once reserved for nation-states.
The Narrow Corridor
If neither the despotic Leviathan nor the absent Leviathan is desirable, what remains? The authors argue that free societies exist only in the narrow corridor between these two extremes, a delicate equilibrium requiring constant recalibration. The rise of AGI means that balance is now at risk, but not yet lost.
The key, they suggest, is institutional adaptation. Governments must integrate AGI in ways that enhance, rather than replace, human oversight. Privacy-enhancing technologies must be prioritized, creating systems where surveillance is constrained by technical and legal safeguards. Algorithmic transparency must be non-negotiable, and decision-making by AGI should always remain subject to human appeal.
To ground this in concrete examples: at the EU policy committee, we have discussed a digital twin system in which AI simulates policy outcomes before implementation. Citizens could use AGI-driven platforms to propose laws, which would then be stress-tested for economic and social impacts. This level of engagement could transform democratic participation from an intermittent electoral event into a continuous, responsive system.
Democratic feedback loops must also be strengthened. Traditional representative democracy, built for an era of handwritten ballots and deliberative debate, may be insufficient. New models, such as AI-assisted citizen assemblies, where algorithms help synthesize public input and propose policy options, or liquid democracy mechanisms where individuals can delegate votes dynamically based on expertise and trust networks, must be explored. If states do not modernize their decision-making processes, they will lose legitimacy, and with it, the ability to govern effectively.
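As a rough illustration (my own toy sketch, not drawn from either paper), the core of a liquid democracy tally, transitive vote delegation with cycle handling, fits in a few lines:

```python
# Toy liquid democracy tally: each voter either votes directly or
# delegates to another voter. Delegations resolve transitively, and
# a delegation cycle simply leaves those votes uncast.

def tally(direct_votes, delegations):
    """direct_votes: voter -> choice; delegations: voter -> delegate."""
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until it reaches a direct voter.
        while current in delegations and current not in direct_votes:
            if current in seen:   # cycle detected: this vote is never cast
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            choice = direct_votes[current]
            counts[choice] = counts.get(choice, 0) + 1
    return counts

# Alice votes yes; Bob delegates to Alice; Carol delegates to Bob.
print(tally({"alice": "yes"}, {"bob": "alice", "carol": "bob"}))
# → {'yes': 3}
```

Real proposals layer trust networks, revocation, and topic-specific delegation on top of this kernel; the sketch only shows why delegation must be resolved transitively and why cycles need explicit handling.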
At its core, the survival of liberal democracy in an AGI-driven world will depend on one fundamental question: who controls the technology? If AGI remains a tool of the few, whether governments or corporations, then free societies are at risk. If it is designed to empower the many, there remains hope.
Hendrycks and colleagues also emphasize that the state’s ability to control AI-driven arms races is paramount:
“Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change.”
They argue that AI policy should revolve around three pillars: deterrence, nonproliferation, and competitiveness, each crucial for maintaining stability.
The Role of International Cooperation
While this discussion has largely focused on national-level governance, AGI is not bound by borders. Unlike industrial-age technologies, which were contained by geography, AI spreads at the speed of data. Any effort to maintain democracy in the AGI era must extend beyond national policies to international governance frameworks.
Multilateral agreements on AGI safety, transparency, and equitable access must be pursued, much like nuclear non-proliferation agreements. As Hendrycks and colleagues note in their paper,
“AI proliferation to rogue actors could enable widespread catastrophe.”
Governments must track AI chip inventories, restrict exports to hostile states, and implement strict security measures to prevent unauthorized access to powerful AI systems. Without cooperative oversight, some nations will exploit AGI for unchecked power, while others will fall behind, creating geopolitical imbalances that fuel conflict rather than stability.
The Next Social Contract
Is it just scaremongering? Having some knowledge of Palantir’s systems, I say absolutely not. This is something that Palantir CEO Alex Karp also discusses in his recent book, The Technological Republic, which I wrote about here. Those who own AGI or, heaven forbid, superintelligence, will have significant control over the world.
We are not the first generation to face a crisis of governance. The Industrial Revolution forced the creation of new laws, labor protections, and social safety nets to prevent societal collapse. The nuclear age demanded the construction of international treaties and safeguards. Now, as we enter the AGI era, decisions made in this moment will determine whether this technology is wielded against us or shaped to serve democratic ideals.
These two papers should be widely read and discussed. The authors’ analyses do not guarantee success, but they remind us that the outcome is not predetermined. Social, political, and economic forces will shape AGI’s impact as much as the technology itself. The guardrails and institutions built now will determine whether those who control AGI become liberators or tyrants.
The choice is ours. But the window is closing.
Stay curious
Colin
Image from this video of Palantir
The Chief People Officer of OpenAI (maker of ChatGPT) writes that a rapid societal transformation triggered by AGI is coming soon, one largely underestimated by the general public. This transformation will significantly restructure economies as AI replaces analytical and white-collar jobs. Consequently, society will see economic shifts that emphasize emotional intelligence, creative expression, and human-centered occupations such as healthcare, social services, and the arts.
Psychologically and culturally, humans will likely redefine self-worth, identity, and educational priorities, shifting from cognitive and analytical excellence to creative and emotional connectivity. However, this shift is fraught with potential societal destabilization, ethical dilemmas, and identity crises, especially without proactive preparations and adaptations.
Furthermore, paradoxes and tensions will likely surface, especially as society oscillates between optimism (AI solving critical challenges) and skepticism (ethical concerns, potential inequities, and human displacement). Navigating these paradoxes requires nuanced, imaginative, and balanced thinking to effectively integrate AI into societal fabric.
https://medium.com/@juliavillagra/thoughts-on-ai-490917d8553c