It’s 2024, and the landscape of technological progress has taken on the frenetic, jarring pace of a gold rush. There’s a febrile excitement in the air, both in San Francisco, the heart of this frenzy, and in the covert deliberations of government meetings across the world. The race for AGI is on, and if you ask the key players, those few hundred individuals with true situational awareness, they’ll tell you it’s going to be a wild ride. Now a U.S. congressional commission, the U.S.-China Economic and Security Review Commission, has weighed in, recommending as its top priority a Manhattan Project for AGI.
The Commission recommends: Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and
Direct the U.S. secretary of defense to provide a Defense Priorities and Allocations System “DX Rating” to items in the artificial intelligence ecosystem to ensure this project receives national priority.
What Is Really Happening Here?
To understand this fervor, let’s strip away the spectacle and break down the fundamentals. AGI, Artificial General Intelligence, is not simply another internet-scale transformation. It’s not about prediction algorithms getting better or recommendation systems becoming eerily precise. What we’re talking about is the culmination of an evolutionary leap from machines that mimic preschool-level linguistic abilities (remember GPT-2 just a few years ago?) to entities that can tackle complex mathematics, optimize shipping logistics at highly profitable scale, and write code like a seasoned professional (like GPT-4, just a few years later). At the core of this phenomenon lies one simple trend: the more we scale these models, by orders of magnitude (OOMs) in compute, algorithmic efficiency, dollars (maybe a few hundred billion), and practical enhancements, the more intelligent they become.
However, assuming linear progress may be overly simplistic. There are significant challenges that could slow or even stall this trajectory. For instance, the availability of high-quality training data, the physical limits of energy consumption, and the diminishing returns of compute scaling all pose potential roadblocks. These are not insurmountable, but they do imply that progress might be less predictable and more uneven than the current trend suggests. Acknowledging these uncertainties allows for a more nuanced understanding of the path to AGI.
We are witnessing a new industrial mobilization, and it’s one with no exit strategy. The stakes are enormous. By the end of the decade, compute clusters will rival entire industrial sectors in capital and energy demands, energy production will soar to feed them, and trillions will be invested in building the infrastructure necessary to host this emergent intelligence. If the supercomputers are the engine, and deep learning the fuel, then what emerges by 2027 might not just be a step forward; it could be a superintelligent machine, smarter than the smartest human, capable of making its own scientific discoveries and, most dramatically, capable of improving itself.
What do I call Artificial General Intelligence (AGI)? OpenAI’s mission sums it up best:
OpenAI’s mission is to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity.
Let us think of AGI as the ability of artificial systems to perform all tasks that humans can perform.
A Crash Course in Counting the OOMs
Every year, the community has added roughly half an OOM each of compute and algorithmic efficiency. It seems unassuming on paper, but what it means in practice is extraordinary. The models themselves, despite the sprawling transformer architectures and convoluted layers, have an innate quality: they want to learn. Scale them up, let them breathe computational power, and they will learn more. The intelligence progression from GPT-2 to GPT-4, where a few garbled Hans Christian Andersen tales evolved into solving SAT-level math problems, demonstrates this point. This is the baseline, and if the projections hold, we’re aiming straight for AGI by 2027, no sci-fi required, just an acceptance of linear progress on a logarithmic graph.
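To make “counting the OOMs” concrete, here is a minimal back-of-the-envelope sketch in Python. The half-OOM-per-year rates are the rough trend figures described above; the 2023 baseline and the four-year horizon are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope OOM counting (illustrative; the rates are the rough
# trend figures discussed in the text, not precise measurements).
BASE_YEAR = 2023                 # GPT-4 era, taken as the reference point
COMPUTE_OOMS_PER_YEAR = 0.5      # physical compute scale-up
ALGO_OOMS_PER_YEAR = 0.5         # algorithmic-efficiency gains

def effective_compute_gain(year: int) -> float:
    """Orders of magnitude of 'effective compute' gained since BASE_YEAR."""
    years = year - BASE_YEAR
    return (COMPUTE_OOMS_PER_YEAR + ALGO_OOMS_PER_YEAR) * years

for year in range(2024, 2028):
    ooms = effective_compute_gain(year)
    print(f"{year}: +{ooms:.1f} OOMs (~{10 ** ooms:,.0f}x GPT-4-era effective compute)")
```

On a log axis this is a straight line, which is exactly the “linear progress on a logarithmic graph” referred to above: roughly four OOMs, around a 10,000x gain in effective compute, between the GPT-4 era and 2027.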
But let’s not ignore the complexity here. There are bottlenecks to scaling, such as the availability of advanced semiconductor technology, the cost and logistics of expanding compute infrastructure, and the fundamental challenges of improving model alignment and robustness. These hurdles could introduce significant delays or require entirely new approaches to maintain the pace of progress.

Moreover, as we push toward AGI, we need to consider deeply the social, economic, and ethical impacts that accompany such advances. How do we ensure that AGI benefits all of humanity, rather than exacerbating existing inequalities? It is vital to foster international collaboration at the government level, ensure equitable distribution of benefits, and develop global guidelines to manage the societal impacts responsibly. These are not just technical challenges; they require a concerted effort from policymakers, ethicists, and society at large. I do think we have started, but I also think the general public and many government entities are far too lackadaisical, heads firmly in the sand.
From AGI to a Superintelligence Cascade
Here’s where things get wild. The leap from human-level AGI to superintelligence may not take another decade; it may be a mere slipstream in time. Imagine hundreds of thousands of AGI instances, each working to optimize and expand the boundaries of AI research. The result? A cascade, an intelligence explosion, in which every few weeks we compress a decade of human progress into a matter of days. Such predictions date back to John von Neumann and were articulated by I.J. Good in 1965. But it’s not just the computational feats we have to worry about. This power, this sheer transformative capacity, is bound to spill into national agendas, geopolitics, and security concerns. Dario Amodei, the co-founder and CEO of Anthropic, says creating millions of copies of advanced AIs will lead to “a country of geniuses in a data center”, and that 100 years of invention in science and engineering could occur in the next 5-10 years, including cures for major diseases. This is highly positive and very, very encouraging, although at the same time Dario warns about the downsides. Those downsides are also well documented by DeepMind co-founder and now CEO of Microsoft AI Mustafa Suleyman in his must-read book The Coming Wave, where he argues that containment is essential. How do you safeguard something that’s smarter than every human being put together? It’s a very real, very tense problem.
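To see why the cascade could be so abrupt, here is a deliberately crude toy model, a sketch only: the feedback strength, step count, and starting speed are all assumptions chosen for illustration, not forecasts. The one mechanism it captures is the one the argument turns on, research progress feeding back into research speed itself:

```python
# Toy model of the self-improvement feedback loop (every parameter is an
# illustrative assumption, not a forecast of any real system).

def simulate_cascade(initial_speed=1.0, feedback=0.05, steps=20):
    """Simulate automated researchers accelerating their own research.

    initial_speed: research throughput, in human-research-years per year.
    feedback: how strongly each unit of progress compounds future speed.
    """
    speed, cumulative = initial_speed, 0.0
    for step in range(steps):
        cumulative += speed                # progress made during this step
        speed *= 1.0 + feedback * speed    # progress feeds back into speed
        print(f"step {step:2d}: speed = {speed:8.2f}, cumulative = {cumulative:8.2f}")
    return cumulative

if __name__ == "__main__":
    simulate_cascade()
```

Run it and the speed barely moves for a dozen steps, then shoots upward. That qualitative shape, slow, slow, then vertiginous, is the heart of the intelligence-explosion argument.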
Interpretability
To illustrate, consider current AI safety research initiatives like OpenAI’s work on reinforcement learning from human feedback (RLHF) and Anthropic’s interpretability research, which both aim to ensure that AI systems remain aligned with human values. Interpretability, in particular, is a crucial yet challenging area. Anyone with a modicum of interest in the future should watch the short video of Anthropic scientists in which they explain that they are, in effect, growing a brain, and why interpretability is so important. In fact, watch it ten times, share it widely, and think about how you can get involved. Making AI decisions transparent and understandable to humans is extremely difficult, especially as models grow more complex. Techniques like feature visualization and circuit analysis are promising, but we’re only scratching the surface (a minimal sketch of the basic activation-probing idea follows after the quote below). We need more rigorous efforts to untangle the decision-making processes of these systems and ensure they can be effectively audited. The pace of development means we must accelerate these safety measures alongside capability advancements. The societal impacts of AGI, from job displacement and economic shifts to ethical dilemmas about autonomy, are vast and require proactive solutions. Sam Altman, the CEO of OpenAI, has stated:
If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.
Retraining programs, economic safety nets, and new ethical frameworks are essential to address the disruptions that AGI will inevitably bring.
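Returning to the interpretability techniques mentioned above, here is the minimal activation-probing sketch promised earlier. It uses PyTorch forward hooks on a toy network; the model and names are illustrative stand-ins, and real circuit analysis on frontier models is vastly more involved:

```python
import torch
import torch.nn as nn

# A tiny stand-in network; real interpretability work targets far larger
# transformers, but the probing mechanics are the same in spirit.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

captured = {}

def save_activation(name):
    """Return a hook that records a layer's output for later inspection."""
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach the hook to the hidden nonlinearity so we can audit what it computes.
model[1].register_forward_hook(save_activation("hidden_relu"))

x = torch.randn(1, 16)
logits = model(x)

# Which hidden units fired? A first, crude step toward attributing behavior
# to internal components instead of treating the model as a black box.
active = int((captured["hidden_relu"] > 0).sum())
print(f"{active} of 32 hidden units active for this input")
```

Capturing activations like this is only the raw material; the hard part, mapping those activations onto human-understandable features and circuits, is where the research frontier lies.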
Where Do We Go From Here?
Yet, if we peel back the tension, if we look past the doom and dystopia of machine overlords, we see opportunity. The key figures building these models are not disconnected from the dangers. They see themselves, perhaps, in a light akin to Oppenheimer and Szilard, pioneers of boundless, disruptive power. Their situational awareness is the thin line between our world transforming into a technological utopia and slipping into chaos. I would encourage you to read Leopold Aschenbrenner’s informative series of essays on this subject.
To help guide us into this future, we need a radically simple concept. Rather than reacting to the spectacle, the flashy numbers, the trillion-dollar investments, or the hype, we need to distill every challenge to its foundational blocks.
How do you build alignment mechanisms for entities that could outthink their creators?
How do we, as a collective society, dictate values that an entirely new class of intelligence should hold sacred?
Can we maintain transparency and integrity amidst the competition?
These are not trivial questions, nor are the solutions easy. The models just want to learn. But what they learn, how they learn, and for whom they act will shape our decade, and perhaps, our ability to earn a living.
The Path to AGI
The goal is not to stop the race, but to run it wisely, to understand not only the power of what we create but the responsibilities it entails. Expanding our discussions, in every country, to include societal impacts, ethical considerations, and the human response to AGI is more important than ever in ensuring that the transition is as smooth and beneficial as possible. This means focusing on human-centered design principles: how can AGI serve people’s real needs, and how do we empower individuals to work alongside these systems? Interpretability could be the answer to opening these black boxes.
I do believe the bitter lesson is important to acknowledge, along with the scaling laws Amodei describes, but I also think we have only scratched the surface of training these models. The data, the real data, sits inside companies all over the globe. Think about how many petabytes live inside a JP Morgan, Volkswagen Group, or an AP Moller, petabytes beyond what the current models have been trained on. Why is Palantir so successful? Because it annotates and cleans the data of major industries. That is the secret sauce of AI, or as the OpenAI engineer James Betker wrote: “The ‘it’ in AI models is the dataset.”
AGI Containment?
Is a Manhattan Project-style approach the right path forward? When it comes to AI, and what I view as superintelligence, I am comfortable with private enterprise driving progress, though I believe it’s crucial to explore ways of distributing the financial benefits more broadly to support societal well-being. However, when it comes to AGI, with its vast commercial and military implications, I do believe it must be carefully controlled and contained.
The reality is that AI cannot be reversed or “boxed in”; its development will continue to advance by orders of magnitude. To prosper in this rapid technological evolution, we must prioritize learning, understanding, collaboration, and resilience as key pillars for moving forward together.
Stay curious
Dr Colin W.P. Lewis