“Do you think I am an automaton?—a machine without feelings? and can bear to have my morsel of bread snatched from my lips, and my drop of living water dashed from my cup? Do you think, because I am poor, obscure, plain, and little, I am soulless and heartless? You think wrong!…And if God had gifted me with some beauty and much wealth, I should have made it as hard for you to leave me, as it is now for me to leave you… and we stood at God’s feet, equal—as we are!”~ Charlotte Brontë, Jane Eyre [1847]
I'm very much in favor of AI and believe its benefits can be significant. I've personally built and implemented AI solutions with positive impacts for users, such as an AI-powered voice control system enabling blind and partially sighted individuals to interact with their banking apps. However, history has taught me that unchecked technological optimism can be detrimental. Therefore, I strive to balance my enthusiasm by acknowledging the potential pitfalls of AI if left unregulated.
I also strive to help people understand what AI is and can be: not the AI of folk stories or existential threats, but tools that enable individuals to creatively explore novel ideas. By doing this, I hope to help people consider the values inherent in AI technology and critically evaluate the long-term effects this technology may have on the world.
One striking critical evaluation that my co-author and I arrived at while researching the “what and why” of artificial intelligence, and surveying over three hundred AI professors, scientists, and researchers, is that they are playing God. To add to that reflection, watch this short (less than four minutes) video from Anthropic's interpretability team, builders of one of the most advanced AI systems: “We don't understand these systems we have built”; “we don't build neural networks, we grow them”; “this thing that we grew, rather than designed from scratch … it's a lot like evolution.”
It is somewhat appropriate then that three heavyweights of diplomacy and technology, Henry Kissinger, Eric Schmidt, and Craig Mundie, should collaborate on a book entitled Genesis: Artificial Intelligence, Hope, and the Human Spirit. Their book is nothing short of a balance between a proclamation of optimism and a forewarning, articulating both the awe and the possibilities, and the downsides, embedded in humanity’s latest attempt to transcend itself. Here, nestled within prose that oscillates between dense historical insight and breathless urgency, lies a blueprint for a future few of us seem ready to face. Reading Genesis, I was struck by its central paradox: the intelligence we nurture to expand our understanding of reality might compel us to redefine the narratives that anchor human identity. This is not a matter of alarmism, a term too easily invoked to dismiss serious questions, but rather a sober acknowledgment of the transformative potential of AI.
Niall Ferguson’s introduction underscores this balance between incredible progress and caution, situating the book within the grand sweep of history and its successive technological shifts. Those shifts didn't just amplify human abilities; they reconfigured the power structures, decision-making processes, and even the societal frameworks in which control was exercised. Ferguson reminds us that Kissinger, Schmidt, and Mundie are not merely theorizing about AI but drawing from decades of grappling with technological upheavals and their impact on humanity. Their perspective, grounded in history, diplomacy, and innovation, challenges us to ask not just whether AI will amplify our control over the world but whether it might redefine the very nature of control. This is the heart of the book’s inquiry: an intricate examination of how progress reshapes the frameworks we take for granted, compelling us to engage with uncertainty.
New Discoveries
The authors argue that polymaths are exceptional for their ability to master multiple fields of knowledge, often producing revolutionary ideas that redefine entire domains of study. They cite historical examples, such as figures from the Islamic Golden Age and the Enlightenment, who bridged disciplines to create a unified understanding of complex phenomena. Polymaths like Ibn al-Haytham and Muhammad ibn Musa al-Khwarizmi combined the rigors of science with the aspirations of religion, while Enlightenment thinkers pursued intellectual freedom that enabled them to investigate knowledge as an end in itself. These individuals exemplify the unique human capacity to synthesize diverse strands of thought into cohesive and innovative frameworks.
Furthermore, the authors acknowledge the societal and institutional supports that allowed polymaths to thrive during different eras. They highlight how intellectual progress was often enabled by collaboration and competition among peers, fostering an environment where interdisciplinary breakthroughs could flourish. For example, during the Enlightenment, intellectual hubs in Europe facilitated cross-pollination of ideas, laying the groundwork for collective advancements in science and technology.
AI: The Ultimate Polymath
Building on the achievements of human polymaths, the authors portray AI as the next phase in intellectual exploration, which they term a "general super polymath," a synthetic entity capable of integrating and advancing knowledge across disciplines, especially scientific fields, at unprecedented speeds. AI, they argue, transcends human limitations such as finite time, cognitive biases, and physical constraints. Unlike humans, AI can process vast quantities of information simultaneously, identifying patterns and connections that would elude even the most capable human thinkers.
The authors describe AI’s potential to unify disparate fields of knowledge into a "unity of understanding," enabling breakthroughs in areas ranging from genetics to cosmology. They highlight its capacity to advance theoretical physics, particularly in reconciling the discrepancies between quantum mechanics and general relativity, a task that has long confounded human researchers. By leveraging machine learning and computational power, AI could serve as a tool for uncovering hidden relationships and generating innovative solutions, in turn helping to address major global challenges such as environmental crises, cancer, and other societal threats.
The authors are optimistic about the potential of AI to enhance human well-being and equity. They envision AI being used to create a "new baseline of human wealth and well-being," alleviating labor burdens and mitigating conflicts rooted in socioeconomic inequalities. For example, AI’s applications in medicine, such as predicting protein structures, diagnosing diseases, and personalizing treatments, are already demonstrating its potential to revolutionize healthcare and extend human lifespans.
AI’s benefits, however, extend beyond practical applications. It also offers the possibility of redefining the very nature of intellectual discovery. Through its ability to iterate quickly, adapt, and test hypotheses at a scale unimaginable for humans, AI can catalyze a "third age of discovery" that reconfigures our understanding of reality. This transformative potential, the authors argue, will likely position AI as a co-equal, if not dominant, partner in shaping the future of science and society.
Human Cognition
The authors do not dwell on the history of AI, so as a reminder: the “intelligence” of AI systems, as defined by Roger Schank, encompasses “communication, world knowledge, internal knowledge, intentionality, and creativity,” with the ability to learn being the most critical component of intelligence. Rod Brooks further adds the dimension of “completely autonomous mobile agents that are capable of perceiving, acting, and pursuing a set of goals in a dynamic environment.” This provides grist to the mill when Mundie says AI is a new species.
It is important that we fully understand these goals of AI development; only then will we realize why the authors’ framing of AI as a new frontier is not a casual metaphor. They liken today’s developments to humanity’s great voyages of discovery. Whether it is Magellan’s circumnavigation of the globe or Shackleton’s haunted footsteps across Antarctic wastelands, the book invites us to see AI not as a tool but as a terrain, an uncharted realm as perilous as it is promising.
They indicate that AI’s capacity to probe the limits of human knowledge is similar to the spirit of earlier explorers, except that AI does so without fear, fatigue, or the fragile mortality of its makers. But where Shackleton risked frostbite and mutiny, the stakes in AI’s terrain are much, much higher, a journey not across oceans but into the nature of cognition.
What distinguishes this frontier, however, is its potential for catastrophic misalignment, or the loss of our ability to think, something I have written about many times and which Jacob Harland captures nicely:
"Technology poses a more insidious threat to the development of future Mandelstams than ideology. ... Nearly four in 10 students admit to using programs such as ChatGPT to write their papers ... As this technology advances, few will be able to resist the temptation to outsource the greatest part of their thinking to machines."
Kissinger, true to his realist roots, reminds us that technological revolutions, from the advent of nuclear weapons to the digital age, have always been double-edged swords. This is the book’s intellectual sharpness: it refuses to indulge in the simple binary of utopia versus dystopia. Instead, the authors describe a spectrum of possible futures, many of which are fraught with danger. One cannot help but shiver at the six scenarios the authors sketch for the deployment of superintelligent AI, from hegemonic domination by a single entity to decentralized anarchy driven by rogue actors. These visions align with the foreboding of Prometheus unchained, and with the quiet certainty that hubris always demands a reckoning. They outline this because they want society and politicians to take note; they want controls and safety nets put in place.
In my view, this is the most chilling question raised in Genesis:
What if AI’s methods of discovery disrupt the very foundations of human cognition?
By processing information at inhuman speeds and generating solutions to problems we cannot even comprehend, AI risks catalyzing a “dark enlightenment”: a regression to a state where humans, stripped of their epistemological primacy, are forced to accept conclusions we neither understand nor control. "What can this mean?" Kissinger asks with the gravitas of a scholar peering into the abyss. "Will the age of AI propel humanity forward, or will it undermine our claim to a unique grasp of reality?" This paradox, of creation undermining creator, haunts the text like a Frankenstein refrain, daring us to confront our own limitations. As I have written before, what will happen if we give up our thinking to AI?
Homo Technicus
In their historical analysis, the authors shine. They draw elegant parallels between the dawn of AI and the advent of nuclear weapons, a technology Kissinger first wrestled with in his formative years. Just as nuclear power demanded a doctrine of deterrence, so too does AI require an overarching strategy of governance. But herein lies the rub: unlike nuclear arsenals, whose destructive potential can be mutually assured, AI evolves. Its boundaries are neither fixed nor easily regulated, making the "alignment problem" (the task of ensuring AI’s goals remain congruent with humanity’s) infinitely more complex. The Manhattan Project gave us fission, but who will govern AI’s continuous fission of knowledge?
The text is not without its optimism, tempered though it is by Kissinger’s characteristic sobriety. Schmidt and Mundie, as technologists, envision the emergence of Homo technicus, a symbiotic species co-evolving with machine intelligence. This co-evolution suggests a profound redefinition of human identity and values. As we integrate AI into every facet of life, we may gain unprecedented insights into our biology, psychology, and society. The benefits are staggering: disease eradication, equitable access to knowledge, and solutions to environmental crises.
Yet, they also caution, will humanity, in its pursuit of symbiosis, lose its autonomy? Might the boundaries of ethical action blur as we increasingly rely on machine judgment? Perhaps the most unsettling question is not whether we can coexist with machines but whether, in doing so, we might cease to coexist with ourselves.
Moreover, the ethical considerations surrounding AI are vast and pressing. The authors outline the familiar concern that AI can amplify biases embedded in its training data, perpetuating or even exacerbating societal inequities. They highlight the dangers of delegating decision-making to systems that lack moral intuitions. Addressing these issues requires robust frameworks for transparency and accountability, including ensuring diversity in AI development teams, instituting rigorous audits of algorithms, and embedding ethical guidelines into the fabric of AI design.
Speed and Scale
Perhaps the most poignant moments of Genesis arise in its reflections on time. AI, the authors argue, compresses human decision-making to an unprecedented degree. "Objects in the future," they write, "are closer than they appear." This temporal acceleration is not merely a technical challenge; it is, as I have written many times, a societal one. The authors warn us that as AIs multiply in speed and complexity, the human capacity for deliberation, a cornerstone of democracy, diplomacy, and morality, risks becoming obsolete. Hence, they argue, we must resist the temptation to delegate our choices entirely to machines. For even as AI surpasses us in logic, it should not be allowed to replace the ineffable qualities of human judgment: empathy, intuition, and an awareness of our shared vulnerabilities.
The book captivates by celebrating the historical contributions of polymaths and arguing that their intellectual legacy is now poised to be amplified by AI. The authors highlight the profound potential of AI to enhance scientific discovery, alleviate human suffering, and foster societal equity. At the same time, they urge caution and proactive governance to ensure that AI’s immense capabilities are aligned with human values. They foresee a future where the integration of human ingenuity and artificial intelligence can redefine the boundaries of what is possible, benefiting science and humanity in profound ways.
Public Action
In its closing chapters, Genesis offers a blueprint for how humanity might manage this epochal transition. It calls for an unprecedented alignment of technical, political, and ethical frameworks. The authors invoke the need for AI arms control agreements akin to those forged in the nuclear age, alongside the development of global institutions capable of monitoring and mediating AI’s effects. Strengthening this call to action, the authors emphasize the role of individual citizens.
We must demand greater transparency from corporations, advocate for ethical oversight in AI development, and engage in public discourse to shape policy. This is why I advocate a grassroots movement, one that will challenge the corporations building AI to disclose their internal reports on AI’s societal impacts. We also need coalitions to lobby for equitable access to AI’s benefits. If history has taught us anything, it is that collective human agency, the unrelenting insistence of ordinary people, can tilt the axis of power.
There are broad challenges Genesis compels us to face. The moral arc of progress may bend toward justice, but only if pulled by a global collective. And as Miles Brundage, former head of OpenAI’s AGI Readiness team, cautioned:
“AI companies have little interest in preparing society, at the speed/scale that's needed, since they are busy trying to beat each other and navigate a complex political environment. Journalists, academics, and civil society need to fill the gap.”
Note also this video with Sam Altman, CEO of OpenAI, who touches on “what it takes for AI to go well for the world,” and Genesis co-author Craig Mundie, who says we need:
“…to create the governance around it (AI). And that's not happening, in my mind, nearly at the pace that's required. And so the book is a bit of an urgent call for people (the developers) to step back.”
Likewise, Dario Amodei, CEO of Anthropic, has repeatedly called for policy action on AI. Meanwhile, Chinese Vice-Premier Ding Xuexiang, pleading for responsible global governance of AI, called the competition between world powers over AI a “grey rhino”: a high-probability, high-impact event.
Tension
Genesis is not hype; above all, it is a meditation on human agency. It does not offer easy answers because there are none. Instead, it invites us to wrestle with the tension between our boundless aspirations and our inherent limitations. I am reminded of a line from Robert Browning: “Ah, but a man’s reach should exceed his grasp, or what’s a heaven for?” In reaching for the heavens through AI, we risk losing our grasp on the very essence of what makes us human. But perhaps, in the end, it is this tension, this fragile, beautiful paradox, that defines us.
Overall, I think Genesis is a good read for the general public; it should also be required reading on education programmes, be discussed in corporations, and be read by policy makers. It is, furthermore, a timely reminder for AI research labs, although it has so far received scant attention from the AI developer community.
Yet, to read it is to confront not only the future but also the past. It is to recognize that our technological odyssey, for all its novelty, is but the latest chapter in humanity’s eternal struggle to reconcile knowledge with wisdom. And in that struggle lies our most profound hope, and as the authors posit, one of our most important challenges.
Stay curious
Colin
Image: Sam Altman, CEO of OpenAI, from this interview with Craig Mundie
I suppose I never thought of it this way, but indeed we could view AI as the ultimate polymath. Bestowed with virtually all of human knowledge and immense compute, it can “unify” otherwise siloed disciplines.
I imagine that because of this, even before AI can truly “innovate,” it could possibly discover innovations by recombining existing human knowledge in ways that we cannot do currently.
I suppose the line between “recombining” knowledge and ideas and “innovating” is fairly blurry… so perhaps they are one and the same.
Alas, we are looking at the magician's apprentice. We never see the magician. And the profile is always the same: selected for their big-eyed, innocent look, yet responsible for major effects. Mark Zuckerberg thought Facebook was just a family photo album, never imagining he was involved in manipulating national election results around the world. Sam Altman is the same: a little geek person, a little flower on his table beside a book titled 'Genesis', really? No one could believe it's the work of his hands. Let's get real here. Smoke and mirrors inside smoke and mirrors.