AGI: We Need Philosophy
Demis Hassabis and the crisis of pursuing optimization without purpose
It is a call that Demis Hassabis, the co-founder of DeepMind, has issued not once, but repeatedly. In interviews and public appearances, he has consistently called for philosophers, economists, and social scientists to grapple with the consequences of artificial intelligence. Yet, in the relentless roar of AI boosterism, this appeal goes largely unexamined.
The warning is not subtle. In one interview, he states explicitly, "We need some great philosophers or social scientists to be involved." In another, he speaks of encouraging "the top economists in the world and philosophers to start thinking" about how society will be affected by AI and what we should do about it.
Coming from a Nobel laureate and the architect of AlphaFold, the remark was not just provocative; it was a sharp critique of the prevailing assumptions underpinning our technological development: an appeal not to acceleration but to introspection, to a tradition too often dismissed as ornamental in the age of metrics and momentum.
Here is a man who has helped redraw the boundary between human and machine, now suggesting that the deeper crisis lies not in our capacity to compute, but in our failure to think seriously, rigorously, and ethically about what such capacity ought to serve. Not code, but character. Not better machines, but wiser humans.
Hassabis wasn’t being whimsical. He was issuing a warning that the dominant operating system of our civilization, technical rationality, turbocharged by machine intelligence, is not enough. Or more precisely, it is too much, when unaccompanied by a corresponding upgrade in moral and civic software. “We need philosophers,” he said, and what he meant was: we need meaning.
The implicit wager of AGI, which he predicts by 2030, is that intelligence, abstracted, accelerated, and externalized, can solve any problem. Disease, poverty, climate, war. Feed it enough data, add enough layers, and the machine will converge on an answer. But what if the question itself is wrong? What if the problem isn’t scarcity of means but poverty of ends?
We are now entering what I would call the Age of Infinite Means: the era in which the constraints on doing have all but collapsed, and the constraints on deciding what to do become existential. As I see it, to borrow an analogy fitting for Hassabis the chess prodigy: the chessboard is open, the processor is primed, but nobody agrees on a strategy. Or worse, everyone agrees: maximize engagement (as social media algorithms often do by promoting outrage), dominate the market, optimize the KPI (as generative AI content farms do, churning out synthetic text to game search engines). In other words, pursue optimization in a system whose original objectives have been buried beneath quarterly incentives.
Hassabis is not alone in sensing the disquiet. But unlike many in his field, he is willing to voice it, even if only obliquely. His remarks function less as a formal philosophical position and more as a provocation, a cue for broader reflection rather than a full diagnosis. The implication is hard to miss: that intelligence without wisdom is just entropy with good PR. That an AGI, given a blank ethical check, might do what corporations already do: automate mediocrity at scale.
Philosophy is not an elective. It is not leisure for the undistracted. It is the infrastructure of human self-understanding, or at the very least, it defines the horizon of possibility. The machine learns from data, but the human must still learn how to live. Because what good is an oracle that can answer any question if we have forgotten how to ask a good one?
The risk of AGI is not just that it may do harm, but that it may do exactly what we ask, without ever pausing to wonder if we should have asked it.
This is not theoretical. It is already happening. Hassabis and his peers are building tools capable of running the world, while the world itself is gripped by moral anemia. The paradox is simple: we are getting smarter and dumber at the same time. Smarter machines. Dumber politics. Smarter diagnostics. Dumber public health. We know how to cure diseases, but not how to care for the lonely. We can simulate protein folding, but not community.
In that light, Hassabis’s call for philosophers begins to look less like an eccentric footnote and more like a sensible invocation. We don’t just need better models. We need to revisit first principles. What is a good life? What is a just society? What kind of human do we want to become when we are no longer limited by human frailty?
This is the real frontier, not artificial intelligence, but the conditions under which intelligence, artificial or not, serves something higher than itself. Not the automation of cognition, but the cultivation of judgment. And that cannot be built in code alone. It requires a society willing to reintroduce the forgotten disciplines of reflection, restraint, and responsibility.
In the next five to ten years, AGI will likely cross thresholds we once thought sacrosanct. AI can already read and write, solve and simulate, forecast and fabricate; it can pass bar exams and medical boards. Soon it will negotiate treaties and design cities. And still, it will not tell us why to have a child, or how to grieve a parent, or when to forgive a betrayal. It will not, because it cannot. Those are not tasks. They are trials.
And so we arrive at the edge of something old disguised as something new: the problem of meaning. It is not a bug of civilization; it is its first feature. We outsourced memory to books. We outsourced strength to machines. Now we are poised to outsource thinking. What remains? Only judgment. Only values. Only the fragile, fallible process of asking not what we can do, but what we should become.
We should study philosophy. That, I think, is what Hassabis was saying beneath the clamor: that building AGI is not the end of history, but the start of a harder question. One that cannot be answered by scale, or speed, or search. One that demands something even rarer than intelligence.
Wisdom.
Stay curious
Colin
The best thoughts on AI progress, putting the "why" together with the "where we can go".
Excellent post!
Reflecting on the ongoing progress in LLM-based AI—though admittedly less spectacular in the last six months—I am struck by a sense of wonder and unease. While these advances hold extraordinary promise, I still question whether this trajectory alone will lead us to AGI, for reasons I’ve outlined in previous comments on your posts. Even if a real AGI remains decades or centuries away, the questions you’ve raised here feel increasingly urgent.
What happens to the meaning and purpose of life when so much of it has been tied to our work? If work no longer defines us, what does it even mean to be human?
What should we study when education is no longer tethered to careers? How do we gain wisdom in a world increasingly shaped by artificial systems, when so much of our wisdom has historically come from struggle, experience, and engagement with the unpredictable, tangible world?
This brings me to a troubling thought: Will curiosity and creativity—qualities we’ve long cherished—become endangered? If necessity and problem-solving have driven much of our curiosity and creativity, could they wither in a world where abundance reigns and every challenge is met with a machine-generated solution? Could we inhabit a world of unimaginable plenty and yet feel profoundly lonely?
But perhaps the most unsettling question of all is this: In our pursuit of AGI, are we not just building tools to enhance our lives but designing our replacement? As biologically fragile beings, unable to endure the extremes of other planets or stars, are we, through AI and genetic modification, creating a new version of humanity—one engineered to thrive where we cannot?
And if so, what becomes of us, the creators of something destined to surpass and outlast us? Will we eventually worship a completely artificial or hybrid being, as we have done throughout history when faced with forces we could not understand? Why would we worship such a being? Would it be out of awe, dependence, or fear? Or would it reflect a more profound existential crisis—our inability to define meaning and purpose in a world where intelligence and creativity are no longer uniquely human?
Having risen to the top of Earth's species pyramid by virtue of our intelligence, how would we react to something that one day moves us down a rung and takes our place? Or would we become gods by creating something more intelligent than ourselves, something that will outlive our species and propagate to other stars and galaxies until the end of time?
These questions are not new, but they feel more pressing than ever. They demand not just intelligence but wisdom, a quality no machine can replicate and one we must actively cultivate within ourselves. As Yuval Noah Harari profoundly observed:
"The real question facing us is not what we want to become, but what we want to want."