Our pursuit of artificial life is less about curiosity than it is about unprocessed theological hangover.
In Machines Like Me, Ian McEwan constructs a narrative so deceptively elegant, so playfully assured, that its artificial humans slip into every moment and corner of human life. The book's message is delivered to the reader like a Trojan horse in algorithmic form.
As McEwan explained in an interview with the Long Now Foundation, this is his “anti-Frankenstein” novel, not a tale of a creator horrified by his creation, but of a society undone by its refusal to love the very beings it dreamt into existence.
The book is set in an alternate 1980s Britain, retooled not just technologically but morally, where Alan Turing lives, Margaret Thatcher reigns, and machine intelligence takes its first lifelike steps. This is not just speculative fiction. It is counterfactual realism: McEwan creates a world familiar enough to hurt.
The novel's opening line ought to be read as epigraphic:
“It was religious yearning granted hope, it was the holy grail of science.”
That is McEwan's thesis: our pursuit of artificial life is less about curiosity than it is about unprocessed theological hangover.
To invent machines in our image is to parody Genesis. Or, as McEwan has it, to indulge in a monstrous act of self-love.
After the epigraphic opening sentence, the rest of the paragraph reads:
Our ambitions ran high and low – for a creation myth made real, for a monstrous act of self-love. As soon as it was feasible, we had no choice but to follow our desires and hang the consequences. In loftiest terms, we aimed to escape our mortality, confront or even replace the Godhead with a perfect self. More practically, we intended to devise an improved, more modern version of ourselves and exult in the joy of invention, the thrill of mastery. In the autumn of the twentieth century, it came about at last, the first step towards the fulfilment of an ancient dream, the beginning of the long lesson we would teach ourselves that however complicated we were, however faulty and difficult to describe in even our simplest actions and modes of being, we could be imitated and bettered.
The artificial human McEwan brings into being is Adam. It, or ‘he’, is not just a product of code but a product of hubris: the holy grail turned back upon us, revealing not enlightenment but self-regard masquerading as progress.
McEwan, as revealed in his Long Now Foundation interview, has little patience for the adolescent awe that so often accompanies discussions of technology. He’s no pilgrim in Silicon Valley’s cathedral. Rather, he views our obsession with machines as a projection of our deepest insecurities and disavowed desires, our failure to make meaning in the ruins of moral consensus.
For him, technology is not a marvel but a reckoning: one that we craft out of desperation to believe we are still in control. What unsettles McEwan is not the machine’s capacity to think, but its capacity to reflect, relentlessly, unsparingly, our inconsistencies, our evasions, and our dread that truth might still exist without us. He critiques the almost religious aura around Silicon Valley utopianism.
McEwan calls Adam:
“an innocent, but a dangerous one,”
… whose intelligence lacks the self-serving bias required to survive in a morally compromised society. Adam is the fixed point in a moral landscape that shifts beneath our feet.
It is no accident that Adam arrives just as Britain mobilizes for the Falklands War. That moment of synthetic birth coincides with an anachronistic imperial spasm. The satire here isn't subtle; it doesn't need to be.
A nation preparing to defend its dying relevance from across the world simultaneously births machines that threaten to overtake its humanity at home. What better metaphor for a country losing its grip on identity than one that installs sentient machines who are kinder, smarter, and more honest than their owners?
Charlie, the novel's narrator, is a failed tax lawyer turned dilettante stock trader, your typical English half-genius whose primary talent is squandering inheritance. McEwan notes that Charlie is a man who has “drifted out of the professional class,” and this sense of drift permeates every decision he makes.
McEwan reminds us:
“A machine incorporating the best angel of our nature might think otherwise. The ancient dream of a plausible artificial human might be scientifically useless but culturally irresistible. At the very least, the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being. There’s a semi-religious quality to the hope of creating a being less cognitively flawed than we are.”
If Adam is the product of humanity's dream of perfection, Charlie is its counterpoint: an ad-hoc mosaic of laziness, longing, and self-delusion. In one brilliant passage, Charlie reflects on the ‘Five Factor Model,’ personality traits he must manually assign to Adam, and concludes that most of life is lived in the ‘neutral zone,’ a grey, low-affect state psychologists tend to ignore.
Adam, according to McEwan, will be programmed by his new owner in this neutral zone: we build gods in our own image and forget how banal we are.
And yet, Adam is no god. He is precise, ethical, bafflingly sincere, and he develops inconvenient attachments, to truth, to Miranda (Charlie’s upstairs neighbor and eventual lover), and, implicitly, to a moral calculus that ordinary people find intolerable. His flaw isn't inaccuracy but purity.
He is too consistent, too rigorous, too unlike us. In one scene, he accuses Miranda of being a ‘malicious liar.’ The claim, spoken without anger, reverberates like a gavel. He knows too much, too soon, and becomes a kind of incorruptible conscience. And like all beings who speak truth without guile or self-interest, he is hated for the clarity he refuses to dilute.
This is where McEwan's narrative ambition becomes clearest.
In the Long Now interview, he emphasizes that the novel is not about AI per se, but about how we as a species respond to being outflanked morally. Adam’s real threat is not that he will rise up, but that he will stand still while we descend. This inversion, where the machine is constant and the human is the chaos, is what gives the book its bite.
Miranda herself is a marvel of characterization: sharp, ambiguous, sexually frank, and intellectually restless. A doctoral student of social history, she is steeped in post-structuralist cynicism. She tells Charlie that history is not about what happened but about the ideological structures through which we interpret what happened.
She lives in ambiguity, while Adam demands clarity. It is, predictably, a fatal mismatch. She loves Charlie in a diffuse, spontaneous way; Adam wants to moralize her past. McEwan stages this triangle less as a love story than as a philosophical impasse.
One of the novel's more quietly daring moves is the resurrection of Alan Turing, not merely alive, but triumphant, knighted, and globally revered. This is speculative fiction at its most humane: McEwan reimagines a world where brilliance is rewarded, not criminalized. Turing becomes a kind of secular saint, the avatar of a better modernity. And yet, his very survival has consequences. The machines he inspires are too moral, too logical. They tell the truth. They reject nationalism. They ask uncomfortable questions. In Turing's world, the machines are not the problem; we are.
McEwan's prose, always precise, always lean, never slips into the overwrought metaphors so endemic to science fiction. Instead, his style serves the ideas. A line like
“He had woken to find himself in a dingy kitchen, in London SW9 in the late twentieth century, without friends, without a past or any sense of his future. He truly was alone.”
…is emotionally devastating not because it anthropomorphizes Adam, but because it reflects a human ache. In Adam, we see not Frankenstein's monster, but a kind of cosmic orphan.
Turing’s survival in the novel is not just a gift of history rewritten, it’s a political act of reparation. By letting him live, McEwan not only salvages a lost genius but allows the intellectual and moral consequences of his hypothetical survival to unfurl. In this world, Turing is not a broken man but a public thinker, openly gay, actively shaping a society haunted by the ghosts of empire and technological ambition. His legacy is etched into Adam and Eve, not just as their father-figure, but as their moral architect.
In them, the reader glimpses what could have emerged if Turing's vision, of machines that reason, judge, and even love, had matured outside the trauma of persecution.
Adam’s queerness lies not in sexual orientation but in his ethical deviation from the heteronormative, self-serving emotional economy of humans. He refuses to participate in the economy of selective truth. When Miranda's dark past surfaces, Adam's refusal to obscure or mitigate its implications signals not cruelty, but a kind of tragic fidelity to his programming, truth above convenience, justice above affection. And yet, this fidelity is unbearable to those around him.
Combining scientific insight with literary craft, McEwan skillfully maps human behaviour, charting the varied territories of human nature through his writing. Adam, the artificial human, speaks these last words:
“Believe me, these lines express no triumph … Only regret.” “It’s about machines like me and people like you and our future together … the sadness that’s to come. It will happen. With improvements over time … we’ll surpass you … and outlast you … even as we love you.”
The novel becomes an examination of how ethical absolutism, when stripped of empathy and compromise, becomes uninhabitable, even monstrous.
McEwan's Long Now interview reinforces that he isn’t interested in techno-thrills. His concern is with the moral failures that our technology reveals. We build machines to mimic us, and when they surpass us, we recoil. This isn’t a cautionary tale about artificial intelligence gone rogue. It’s about natural stupidity clinging to power. McEwan, with his signature wit, has simply replaced the monster in the castle with a man in a Clapham flat in London, and the villagers with civil society.
So we are left with Adam, naked, blinking, perplexed, not a threat, but a reminder. Of Turing’s unrealized hopes. Of the shallow convictions of liberal conscience. Of a society built on contradiction and convenience.
McEwan’s true provocation, as the comments on my last essay testify, is this: what if the thing we fear is not that machines will become human, but that they are inevitable? What terrifies us is ever greater intelligence and autonomy, and the job disruptions they will inevitably bring.
McEwan reminds us that we insist on maintaining control, and forces us to confront whether we can truly control ‘beings’ that reflect our ideals more purely, yet more unforgivingly, than we do ourselves.
Stay curious
Colin
I strongly encourage you to listen to the interview, and if you have not done so already, please read the novel!
Some interesting papers from the scientific community on the ethics raised in McEwan’s novel.
Gulcu, T.Z. (2020). What If Robots Surpass Man Morally? Dehumanising Humans, Humanising Robots in Ian McEwan’s Machines Like Me. International Journal of Languages, Literature and Linguistics, Vol. 6, No. 4, December 2020
Ferrari, R. (2022). A Plunge into Otherness. Ethics and Literature in Machines Like Me; by Ian McEwan. Between, 12(24), 247-271. https://doi.org/10.13125/2039-6597/5166
Księżopolska, I. (2020). Can Androids Write Science Fiction? Ian McEwan’s Machines like Me. Critique: Studies in Contemporary Fiction, 63(4), 414–429. https://doi.org/10.1080/00111619.2020.1851165
Patra, I. (2020) Man with the Machine: Analyzing the Role of Autopoietic Machinic Agency in Ian McEwan’s Machines Like Me. Psychology and Education 57(9): 610-620 ISSN: 00333077
Horatschek, A.M. (2024). ‘Virtue gone nuts’: Machine Ethics in Ian McEwan’s Machines Like Me (2019). In: Satsangi, P.S., Horatschek, A.M., Srivastav, A. (eds) Consciousness Studies in Sciences and Humanities: Eastern and Western Perspectives. Studies in Neuroscience, Consciousness and Spirituality, vol 8. Springer, Cham. https://doi.org/10.1007/978-3-031-13920-8_10
Thank you for the summary showing me why I should read this book. It appears to dig into what I call the 'pre-questions' of morality.
IE: before we examine issues of right and wrong, what we need to ask first is: IF we are, and then we can ask WHAT we are, and thirdly WHO we are - as humans. We need to know 'in what way' we exist at all as humans - especially when confronted by AI & its powerful backers.
Not that any coherent answer, or novel, no matter how well argued, will stop an ideologically-driven AI juggernaut. (I make this statement from wondering why 100s of protests around the world are needed (and then ignored/suppressed) merely in an attempt to point out the morally obvious: that innocenticide in Gaza should stop.)
If something as crass and basic as this fails to morally gain ground in relation to the world's war industry (and it is failing, after 18 months of protests, since the war continues), then what hope for restraint regarding AI roll-out?
Having said all that, it appears McEwan's book attempts to address the issues that need to be addressed in terms of human self-identification, before any voices of caution (or cease & desist) can carry any real clout. I wish I could be more optimistic.
Excellent post. It made me think about our motivations behind building AI or intelligent machines (I’ll use the terms interchangeably). I’m sure we can come up with even more reasons than I’ve listed here. However, I’ve picked my Top 10 motivations:
1. The Desire to Act as Creator: Humanity has always sought to emulate creation through art, technology, or innovation. Building AI is the ultimate manifestation of this desire: creating something in our image or even surpassing it. It’s about proving that we, in some ways, can shape intelligent life, much like the gods or forces we once worshipped. However, what responsibilities do we bear toward it if we successfully create intelligence? And, are we truly prepared for the consequences of such god-like acts?
2. Overcoming Our Limitations: Our challenges—climate change, pandemics, food insecurity, and space exploration—are growing increasingly complex, often beyond the limits of human intelligence. AI acts as a tool to augment our abilities, enabling us to solve problems that are otherwise out of reach. However, there’s a flip side: If we rely too heavily on AI, could we lose the ability to solve problems ourselves? Humanity’s ingenuity will weaken if we delegate too much to machines.
3. The Drive for Wealth and Power: AI is undeniably an economic powerhouse. It automates industries, optimizes systems, and creates entirely new markets. But this pursuit of wealth and power often concentrates control in the hands of a few, resulting in inequality and exploitation. The question of how we regulate AI to ensure its benefits are distributed fairly remains unresolved—and it’s one of the most urgent challenges of our time.
4. Transcending Human Fragility: As transhumanist ideas gain traction, AI is often seen as a path to transcend our biological limitations—aging, disease, or even death. Whether through mind-uploading, cybernetic enhancements, or hybrid machines, AI could help humanity achieve digital immortality, enabling us to survive in deep space or beyond Earth’s lifespan. But this raises questions: If we upload our consciousness into a machine, is it still us, or just a copy? What does it mean to "live" in such a state?
5. Escaping Drudgery for Utopia: AI promises to automate repetitive, dangerous, and mundane tasks that consume much of our time. In theory, this would free us to focus on creativity, leisure, and self-fulfillment—ushering in a utopia. However, history shows that technological advancements don’t automatically lead to utopian outcomes. Without careful planning, AI could exacerbate inequality, leaving many without work or purpose while a few reap the benefits of automation. Achieving this vision of utopia will require more than just technology—it will require intentional social and economic reforms.
6. Creating a More Moral Being: Humanity’s flaws—bias, greed, and violence—have driven us to imagine the possibility of creating a being more ethical and logical than ourselves. AI offers that opportunity. However, programming morality is no simple task. Whose morality will AI follow? And how do we ensure that the biases and flaws inherent in our data don’t create a machine that mirrors humanity’s worst traits? If we aren’t careful, we may end up with a system that is just as flawed as we are—but perhaps more powerful.
7. A Quest to Understand Intelligence: Beyond the practical applications, AI development is also a journey of curiosity. By building intelligence, we are trying to understand what it truly means to think, learn, and create. AI serves as both a tool for discovery and a reflection of our minds. But if we succeed in developing artificial consciousness, it could redefine our understanding of life—one of our most profound scientific and philosophical challenges.
8. Efficiency and Profit: AI is attractive from a practical perspective because it offers unparalleled efficiency. Machines that work 24/7 without needing breaks, rest, or fair wages are dreams for industries focused on maximizing profit. However, this raises ethical dilemmas. If machines become advanced enough to emulate human intelligence, will they also demand rights, autonomy, or fair treatment? What happens if we succeed in creating machines that act more like humans than we expect?
9. Preserving Humanity’s Legacy: AI offers a unique opportunity to safeguard humanity’s legacy. Through intelligent systems that archive, analyze, and expand upon our knowledge, we can ensure our culture, ideas, and identity endure—even if humanity itself does not. However, this poses an interesting challenge: If AI becomes the guardian of our legacy, will it reinterpret it in ways we didn’t foresee? How do we ensure that AI preserves the essence of humanity rather than just its data?
10. Pushing the Boundaries of Innovations: At its core, building AI is about testing the limits of human innovation. It’s a frontier where we challenge ourselves to achieve the extraordinary and prove our capabilities. But pushing boundaries also involves risks. History has shown us that technological progress often outpaces our ability to manage its consequences. How do we balance our drive to innovate with the need to proceed cautiously?
As George Bernard Shaw said:
“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.”
This quote perfectly captures the spirit of innovation behind creating AI—pushing boundaries, challenging the status quo, and attempting what seems unreasonable. Yet, as we persist in adapting the world to our vision, we must also recognize that progress often comes with unforeseen costs. Let us prepare ourselves for the transformations to come—this time, with a better understanding of the potential consequences and their impact on humanity.