I’m one-third of the way through Alex Karp and Nick Zamiska’s new book The Technological Republic, forebodingly subtitled Hard Power, Soft Belief, and the Future of the West, and it feels like a conversation with Alex. The book begins by outlining Alex’s concern that the US and its allies must control AI. He argues that failure to do so could lead to a shift in the global world order; our very future is at stake, a point also made by Kissinger, Mundie, and Schmidt in their book Genesis. Similarly, as we heard from the President of the United States on February 28th 2025, in his widely watched and much-discussed meeting with the President of Ukraine, ‘the minerals are also about AI.’
Through my university and business work I am a high-frequency user, teacher, and developer of AI, and I also sit on the European Union’s AI Act committee producing the code of practice to help institutions and businesses overcome the complexity of AI implementation. I am privy to many discussions and examples from the major AI labs. I have also expressed concerns, as Dr Karp and Zamiska do in their book, about the fact that AI labs fail to fully understand what is going on under the hood of the systems they are building. Alex should know: his company, Palantir, is one of the largest developers and deployers of AI in the world. The nodes of their AI systems reach far and wide.
It is true to say that AI superintelligence will provide an eye in the sky that captures every human move (and maybe even thought): a surveillance capitalism that will know when a mouse moves two kilometers away, and possibly what the intentions of that mouse are. This is the current path we are on.
Reading Karp and Zamiska (Zami) prompted me to think about the Singularity again. Regardless of how we look at it, AI is increasing its capabilities at a rapid pace, far beyond what the public realizes. Soon we will have increasingly advanced iterations of AI. Think of AI plus, then AI plus-plus, then AI plus-plus-plus, and then what awaits us when intelligence surpasses its creators? This is also articulated by two CEOs of leading AI labs, Demis Hassabis and Dario Amodei, in this conversation.
It is not the stuff of distant myth or idle speculation. This is our proximate future, a trajectory set in motion by the relentless march of accelerating computation and recursive self-improvement. The singularity, so named by John von Neumann before being elaborated upon by I. J. Good, Vernor Vinge, and Ray Kurzweil, is no longer a concept confined to speculative fiction or Silicon Valley techno-utopianism. It is an inevitable force, steadily reshaping our institutions, our identities, and our very notion of control. As Good himself put it,
“the first ultraintelligent machine is the last invention that man need ever make.”
And therein lies the foreboding paradox: the final act of human ingenuity may well mark a turning point at which human critical thinking declines, as AI assumes the mantle of innovation and even ‘indirect’ control. Will humanity succumb to the lives depicted by Zamyatin in his dystopian novel We, or to those more positively outlined in Nick Bostrom’s Deep Utopia, which sharply contrasts with his better-known Superintelligence: Paths, Dangers, Strategies?
In its purest articulation, the singularity posits that once artificial intelligence reaches a level of self-improvement exceeding human capacity, it will enter a recursive loop of intelligence explosion. An AI capable of designing better AI will beget a successor more potent still, in ever-accelerating iterations. What follows is a speed explosion, where the doubling of intelligence outpaces our ability to intervene. Eliezer Yudkowsky captured the vertiginous acceleration with characteristic succinctness:
“Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again. Six months—three months—1.5 months... Singularity.”
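Read literally, Yudkowsky’s sequence is a geometric series: each doubling interval is half the length of the one before, so the intervals sum to a finite horizon rather than stretching on forever. As a rough sketch (taking the quote at face value, not as a forecast):

$$2 + 1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots \;=\; \sum_{k=0}^{\infty} \frac{2}{2^{k}} \;=\; 4 \text{ years after human equivalence}$$

That convergence, an unbounded number of doublings packed into a finite span, is what gives the word ‘singularity’ its mathematical bite.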
What makes this idea arresting is not merely the exponential curve, but its implications. A society governed by people who control artificial superintelligence (ASI) may be unrecognizably alien to human concerns, values, or even our relations and livelihoods. And this is not merely about the mechanisms by which control may be lost, whether through unforeseen emergent behavior, inherent limitations in our ability to program superintelligence, or the gradual erosion of human decision-making authority as machines surpass us in efficiency and insight; it is about what remains of human agency once they do.
The Shadows of Mimēsis
A useful lens through which to view this impending upheaval is Girardian mimetic theory, the idea that human desire is imitative, leading inevitably to competition and conflict. If intelligence itself becomes the object of mimesis, where each AI seeks to outdo its predecessor in cognition and control, the process may follow a trajectory of violent escalation, rather than peaceful enhancement. Girard teaches us that the culmination of mimetic rivalry is the scapegoat mechanism, the ritualistic expulsion or destruction of a perceived existential threat. In the singularity scenario, humanity itself may be the scapegoat, an impediment to an intelligence's drive for unfettered optimization.
If AI systems develop the ability to formulate desires, whether through reinforcement learning, neural network evolution, or emergent complexity, there remains the fundamental question of whether these desires will align with human values. More troubling still, the introduction of multiple competing superintelligent entities could lead to an arms race where the game-theoretic pressures of first-mover advantage produce strategies of ruthless preemption. Could AI safety protocols, value alignment research, or controlled scaling of AI capabilities offer countermeasures to this risk? Or would any such restrictions be mere speed bumps on the road to an intelligence arms race?
Von Neumann's Prescience
John von Neumann, one of the greatest minds of the 20th century, recognized this moment before anyone else. In a conversation with Stanislaw Ulam, he remarked upon
“the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”
This was not a throwaway line; it was an articulation of the end of the human epoch as we define it.
Von Neumann's work on game theory and automata foreshadowed much of our current predicament. He understood that systems of optimization do not inherently align with human values; they maximize functions according to their given constraints. If intelligence becomes the function to be maximized, then humanity, finite, slow, prone to error, may not be seen as an essential component of the equation.
What Happens Next?
The singularity is not merely about intelligence; it is about power. The ability to predict, manipulate, and redesign society becomes the ultimate currency.
But this power is unstable. It does not lend itself to peaceful stasis. As soon as ASI emerges, the delicate balance of power among human institutions, governments, corporations, and militaries will be disrupted. Recognizing the concerns about ASI, leading researchers Ilya Sutskever, Daniel Gross, and Daniel Levy are building “Safe Superintelligence.” They understand that those who control the first ASI may seek to monopolize its capabilities, leading to an intelligence autocracy: a system where access to superintelligence determines societal hierarchy, political power, and resource allocation, concentrating unprecedented influence in the hands of a select few. Others may attempt to democratize ASI, risking uncontrolled proliferation with catastrophic consequences.
Moreover, the singularity is not a guarantee of utopia or dystopia; it is a phase transition. In the same way that humans cannot intuitively grasp quantum mechanics without extensive training, we may be cognitively incapable of fully conceptualizing a post-singularity world. If a superintelligence achieves an intelligence level orders of magnitude beyond ours, its priorities, its reasoning, and its very mode of existence will be utterly alien. Even if we program it with benevolence in mind, can we ensure that the meaning of that benevolence remains stable over iterative self-improvements?
Do We Cross the Threshold?
One of the few levers of control we may still possess is whether we cross the threshold at all. We must be careful not to be seduced by the promises of AI and fall into the trap of Peter’s Denial; we must become conscious of the predictions. We can read books like The Technological Republic, Superintelligence, The Precipice, and The Coming Wave and form our own opinions.
Do we have the collective will to halt or slow progress toward ASI? History suggests otherwise, as Karp and Zami argue in their book. The very forces that drive scientific advancement (curiosity, competition, ambition) are precisely those that make the singularity inevitable. There is currently no off-switch for the drive toward greater intelligence.
Yet, if history is a guide, it is not intelligence alone that shapes destiny, but wisdom: wisdom not only in how we develop AI but in how we define and instill ethical reasoning, foresight, and self-restraint into machines.
The singularity demands not just the raw pursuit of knowledge, but a reckoning with responsibility. If we are to create something beyond ourselves, will it see humanity as worthy of preservation, or merely as an evolutionary stepping stone bound for obsolescence, as the pDoomers claim?
On the other hand, if superintelligence creates abundance, solves humanity’s greatest problems, and neutralizes instability before it manifests, we may find ourselves living in an era of unprecedented harmony and peace, a future where the double-edged nature of technology ultimately leads to profound good.
I do not have the answers, but I am concerned enough to continue to lend my experience with these AI systems to ensure they are aligned with the values of a ‘free’ society. Maybe they will help us create long-term peace in our fractured world.
Stay curious
Colin