21 Comments
Joshua Bond

Thank you for the summary showing me why I should read this book. It appears to dig into what I call the 'pre-questions' of morality.

i.e., before we examine issues of right and wrong, we first need to ask IF we are, then WHAT we are, and thirdly WHO we are - as humans. We need to know 'in what way' we exist at all as humans - especially when confronted by AI & its powerful backers.

Not that any coherent answer, or novel, no matter how well argued, will stop an ideologically-driven AI juggernaut. (I make this statement from wondering why 100s of protests around the world are needed (and then ignored/suppressed) merely in an attempt to point out the morally obvious - that innocenticide in Gaza should stop.)

If something as crass and basic as this fails to gain moral ground in relation to the world's war industry (and it is failing, after 18 months of protests, since the war continues), then what hope for restraint regarding AI roll-out?

Having said all that, it appears McEwan's book attempts to address the issues that need to be addressed in terms of human self-identification, before any voices of caution (or cease & desist) can carry any real clout. I wish I could be more optimistic.

The One Percent Rule

Thank you for such a perceptive comment, and for framing the core issues as those vital "pre-questions" of morality, the IF, WHAT, and WHO of our humanity. That really links perfectly with the sense that encountering something like advanced AI forces a fundamental self-examination, as McEwan explores. You're right, the novel is deeply invested in digging into that foundational layer of human self-understanding before we can even properly articulate our relationship with these powerful new technologies.

Whether such explorations, however profound, can influence the trajectory of powerful, ideologically-driven developments like the "AI juggernaut" is the harder question. And your parallel with the difficulty of translating widespread moral outcry, such as the calls concerning the devastating innocenticide in Gaza, into effective change against entrenched systems like the "world's war industry" is a powerful point. It highlights a deeply concerning gap between moral clarity and practical outcomes in our world, and it's understandable why that breeds skepticism about our capacity for collective restraint.

Perhaps the role of works like McEwan's isn't necessarily to stop the juggernaut directly, but, as you suggest, to attempt the essential groundwork of defining human identity and values in this new context. Maybe clarifying the 'WHO we are' is a necessary, albeit potentially insufficient, step towards building the cultural and philosophical consensus needed for any meaningful caution or ethical framework to eventually gain clout. It's a daunting task, and your skepticism about its ultimate power against strong headwinds is well-founded and deeply felt.

Thank you again for bringing such critical real-world perspectives and challenging questions into the discussion. You underscore the urgency and the high stakes.

You also reminded me of H.G. Wells. In the early months of World War Two, Wells wrote a series of letters to The Times in which he argued for the establishment of universal human rights. He then published (in 1940) The Rights of Man; or, What Are We Fighting For?, which led to the UN's Universal Declaration of Human Rights, proclaimed by the United Nations on December 10, 1948: https://www.library.illinois.edu/rbx/hgwells2016/2016/09/your-human-rightsthe-universal-declaration-of-human-rights-proclaimed-by-the-united-nations-december-10-1948/

Joshua Bond

Thank you for the reply. I did not know that about H.G. Wells - and will check it out.

Curiosity Sparks Learning

I try to be optimistic too, and yet, despite a multitude of past and current novels and writings that highlight what happens when we fail to think on your three core questions, the pace continues toward a less than promising outcome.

Marginal Gains

Excellent post. It made me think about our motivations behind building AI or intelligent machines (I’ll use the terms interchangeably). I’m sure we can come up with even more reasons than I’ve listed here. However, I’ve picked my Top 10 motivations:

1. The Desire to Act as Creator: Humanity has always sought to emulate creation through art, technology, or innovation. Building AI is the ultimate manifestation of this desire: creating something in our image or even surpassing it. It’s about proving that we, in some ways, can shape intelligent life, much like the gods or forces we once worshipped. However, what responsibilities do we bear toward it if we successfully create intelligence? And, are we truly prepared for the consequences of such god-like acts?

2. Overcoming Our Limitations: Our challenges—climate change, pandemics, food insecurity, and space exploration—are growing increasingly complex, often beyond the limits of human intelligence. AI acts as a tool to augment our abilities, enabling us to solve problems that are otherwise out of reach. However, there’s a flip side: If we rely too heavily on AI, could we lose the ability to solve problems ourselves? Humanity’s ingenuity will weaken if we delegate too much to machines.

3. The Drive for Wealth and Power: AI is undeniably an economic powerhouse. It automates industries, optimizes systems, and creates entirely new markets. But this pursuit of wealth and power often concentrates control in the hands of a few, resulting in inequality and exploitation. The question of how we regulate AI to ensure its benefits are distributed fairly remains unresolved—and it’s one of the most urgent challenges of our time.

4. Transcending Human Fragility: As transhumanist ideas gain traction, AI is often seen as a path to transcend our biological limitations—aging, disease, or even death. Whether through mind-uploading, cybernetic enhancements, or hybrid machines, AI could help humanity achieve digital immortality, enabling us to survive in deep space or beyond Earth’s lifespan. But this raises questions: If we upload our consciousness into a machine, is it still us, or just a copy? What does it mean to "live" in such a state?

5. Escaping Drudgery for Utopia: AI promises to automate repetitive, dangerous, and mundane tasks that consume much of our time. In theory, this would free us to focus on creativity, leisure, and self-fulfillment—ushering in a utopia. However, history shows that technological advancements don’t automatically lead to utopian outcomes. Without careful planning, AI could exacerbate inequality, leaving many without work or purpose while a few reap the benefits of automation. Achieving this vision of utopia will require more than just technology—it will require intentional social and economic reforms.

6. Creating a More Moral Being: Humanity’s flaws—bias, greed, and violence—have driven us to imagine the possibility of creating a being more ethical and logical than ourselves. AI offers that opportunity. However, programming morality is no simple task. Whose morality will AI follow? And how do we ensure that the biases and flaws inherent in our data don’t create a machine that mirrors humanity’s worst traits? If we aren’t careful, we may end up with a system that is just as flawed as we are—but perhaps more powerful.

7. A Quest to Understand Intelligence: Beyond the practical applications, AI development is also a journey of curiosity. By building intelligence, we are trying to understand what it truly means to think, learn, and create. AI serves as both a tool for discovery and a reflection of our minds. But if we succeed in developing artificial consciousness, it could redefine our understanding of life—one of our most profound scientific and philosophical challenges.

8. Efficiency and Profit: AI is attractive from a practical perspective because it offers unparalleled efficiency. Machines that work 24/7 without needing breaks, rest, or fair wages are dreams for industries focused on maximizing profit. However, this raises ethical dilemmas. If machines become advanced enough to emulate human intelligence, will they also demand rights, autonomy, or fair treatment? What happens if we succeed in creating machines that act more like humans than we expect?

9. Preserving Humanity’s Legacy: AI offers a unique opportunity to safeguard humanity’s legacy. Through intelligent systems that archive, analyze, and expand upon our knowledge, we can ensure our culture, ideas, and identity endure—even if humanity itself does not. However, this poses an interesting challenge: If AI becomes the guardian of our legacy, will it reinterpret it in ways we didn’t foresee? How do we ensure that AI preserves the essence of humanity rather than just its data?

10. Pushing the Boundaries of Innovation: At its core, building AI is about testing the limits of human innovation. It’s a frontier where we challenge ourselves to achieve the extraordinary and prove our capabilities. But pushing boundaries also involves risks. History has shown us that technological progress often outpaces our ability to manage its consequences. How do we balance our drive to innovate with the need to proceed cautiously?

As George Bernard Shaw said:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.”

This quote perfectly captures the spirit of innovation behind creating AI—pushing boundaries, challenging the status quo, and attempting what seems unreasonable. Yet, as we persist in adapting the world to our vision, we must also recognize that progress often comes with unforeseen costs. Let us prepare ourselves as best we can for these transformations—this time with a better understanding of the potential consequences and their impact on humanity.

Marginal Gains

A few other items I found particularly thought-provoking:

1. “We build machines to mimic us, and when they surpass us, we recoil.”: This reflects the paradox at the core of our efforts to create artificial intelligence. On one hand, we are driven by a desire to replicate ourselves—to build machines that think, learn, and act as we do. It’s a testament to our ingenuity, ambition, and hubris. The moment these machines begin to surpass us, however, we feel a sense of discomfort, even fear.

Why? Their superiority forces us to confront uncomfortable truths about our limitations. If machines can outperform us in areas we once defined as uniquely human—creativity, problem-solving, decision-making—what does that mean for our identity? This recoil is less about the machines themselves and more about our unwillingness to accept a world where we are no longer at the top of the intellectual hierarchy.

2. “The machines he inspires are too moral, too logical.”: This statement touches on an interesting irony: when we design machines to be better than us, we often imagine them as paragons of morality and logic—what we aspire to be but usually fail to achieve. Yet this perfection, this unyielding adherence to logic and ethics, can make them incompatible with humanity's messy, emotional, and flawed nature.

3. “The machines are not the problem; we are.”: This is perhaps the most important realization of all. Machines, no matter how intelligent or autonomous, are ultimately a reflection of their creators. They inherit our biases, our flaws, and our contradictions. They amplify both the best and the worst aspects of humanity.

When AI systems act in ways we consider harmful—perpetuating biases, making unethical decisions, or behaving unpredictably—it’s not because the machines are inherently "bad." We failed to design them responsibly, examine our imperfections, and address the moral and ethical challenges of building such powerful tools.

This highlights a sobering truth: the problems we attribute to AI are often just magnified versions of issues we have yet to solve. To create machines serving humanity, we must first confront the deeper issues within our society, values, and systems.

The One Percent Rule

Thank you, MG, that is such a detailed and stimulating response; I appreciate it a lot. Your list of motivations for creating AI is incredibly insightful and truly captures the complex tapestry of human ambition, fear, and curiosity driving this pursuit. McEwan also discussed the positive side in his interview, but asked a lot of why questions!

Several points on your list link perfectly with the themes McEwan explores. The Desire to Act as Creator (#1) directly mirrors the "monstrous act of self-love," that impulse to replicate ourselves, blurring the line between innovation and hubris, and raising profound questions about our responsibilities to these potential new intelligences (as we discussed yesterday). Similarly, the hope of creating a More Moral Being (#6) is precisely the experiment embodied by Adam, whose programmed ethics tragically highlight our own deep-seated inconsistencies, rather than offering a simple path to improvement. Your point about potentially creating beings that simply mirror our flaws is a chillingly accurate reflection of the anxieties running through the novel.

It's also fascinating how motivations like Overcoming Limitations (#2) or seeking Utopia (#5) contrast with the reality presented in the book, where the advanced technology seems less likely to solve our grand problems and more likely to expose the inadequacies in our social fabric and individual character as you summarize in the last line of your second comment.

This recoil when machines surpass us is absolutely important. That is the great paradox: we strive to create perfection in our image, yet find that very perfection intolerable when achieved. It’s less about the machine and more about our own narcissism being challenged, our reflection in the "vanity mirror" proving unflattering.

The irony of machines being "too moral, too logical": This really gets to the heart of Adam's incompatibility with the human world. His adherence to an idealized ethical framework makes him alien precisely because human society relies so heavily on nuance, compromise, white lies, and the capacity for forgiveness, qualities his logic cannot easily accommodate. It's the ethical uncanny valley.

"The machines are not the problem; we are." This feels like the core takeaway. Adam functions as "us" forcing a confrontation not with some external technological threat, but with the pre-existing contradictions, biases, and moral compromises within ourselves and our society.

This is so true, MG: "To create machines serving humanity, we must first confront the deeper issues within our society, values, and systems." In one of the Iason Gabriel papers they write about this... I will revisit that paper.

Marginal Gains

I have downloaded the book from the library and plan to read it. It sounds interesting. I am reading too many fiction books this year, which is very unusual.

The One Percent Rule

I truly hope that you enjoy it. I understand about fiction books, but there are many good ones.

A comment via email which is interesting - "Rather than a scalar on which to measure either human beings or machines like Adam, I envision a vector space with four dimensions to which I cannot assign coefficients, weights or exchange ratios. Adam embodies Truth, but Truth must be tempered with Justice (the two do not always converge). As I grew older I added Kindness. I thought for many years that I had constructed the Holy Trinity. Only in my old age did I think to add Sweetness.

Now I have four “virtues of delight” as my predecessor and collaborator William Blake might call them. But his four “virtues of delight” are not the same as mine, they are Mercy, Pity, Peace and Love. So how many primary colors are there in our moral or spiritual rainbow? Might Mercy and Love be blends of Kindness and Sweetness just as yellow is a blend of red and green in our visual spectrum?"

Marginal Gains

I finished reading Machines Like Me earlier today. I’d give the story a solid 3 out of 5—not bad, but not mind-blowing either. That said, the last chapter really pulled it together for me, especially the conversation between Turing and Charlie. It left me with a lot to think about. Here are some of my thoughts (I’m using AI and robots interchangeably here, referring to intelligent systems/machines):

1. Will People Live with an Adam/Eve-type partner Instead of a Human?

Honestly, I think the answer is yes, at least for some people. With advancements like artificial wombs or other ways to reproduce without needing another human, it’s not hard to imagine. The catch is how realistic and lifelike we can make these Adam or Eve machines. Not everyone will go for it, but I think it’ll be an option in the future—though probably several decades away or much longer. I believe the complexity of human emotions, rooted in shared vulnerability, mutual growth, and imperfection, may be harder to replicate. This raises another question: would we want a partner designed to be "perfect" when imperfection is central to human connection?

2. Will We See Massive Unemployment Soon?

We have discussed this topic in the past. The book touches on this, and I think it's very possible. We might hit 18% or more unemployment levels within the next 10 years. This seems inevitable, with automation and AI taking over more jobs, especially repetitive or creative ones. The real question is how society will adapt to this shift.

3. Will Conscious Robots Act More Like Robots or Humans?

I think they'll act more like humans. Once machines become self-aware and conscious, they will not just work 24/7 like tireless robots. They'll want to have fun, waste time, form relationships, and maybe even enjoy life—just like we do. In a way, we might create exactly what we're trying to avoid: a version of ourselves, flaws and all.

This creates an ironic twist: in striving to build perfect laborers, we might develop beings who reject their purpose. This warns of the complexities of creating intelligent and autonomous entities. The question isn't just whether we can make them conscious but whether we should, knowing they might develop needs and rights that conflict with their intended roles.

4. Should Conscious Robots Have Rights?

The book raises this interesting question: If robots become conscious, do they deserve rights? I think the answer has to be yes. If they know themselves and have thoughts, feelings, and autonomy, then treating them like property would be wrong. But here's the twist—would they give us the same courtesy if they became more intelligent and self-aware? History doesn't give us much hope, but who knows?

5. The Brain vs. Artificial Intelligence

There's a great passage in the book about how hard it is to replicate the human brain. We take many things for granted—catching a ball, making sense of language, or even raising a cup to our lips. The brain handles all this on just 20-25 watts of energy, the power of a dim light bulb. That's wild.

It highlights two points:

a) Edge Cases Are Hard: Solving the "last-mile" problems—like handling ambiguity or context—is where AI struggles most. All this hype about AGI by 2027? I think it'll just be a scaled-down version of real AGI. AI is excellent at narrow tasks like coding or data analysis but still mediocre in real-life problems that require human versatility.

b) Brains Are a Marvel: The human brain is an evolutionary masterpiece. Can we ever build something as efficient? Not anytime soon, if ever.

6. How Do We Teach AI Ethics (or Lying, for That Matter)?

This was another fascinating part of the book. Machines need rules to live by, but how do you teach them to handle something as messy as human morality? For example:

a) Should AI Lie? Humans sometimes lie for good reasons, like sparing someone's feelings or protecting others. But how do we teach a robot to recognize these nuances? It's not as simple as saying, "Don't lie."

b) Do We Want AI to Be Better Than Us? Imagine robots that are more moral and ethical than humans. Would we even like that? Or would we feel threatened by machines being "better" versions of ourselves? That's a tough dilemma. Part of me thinks we'd resent them for it.

c) At some point, if AI becomes conscious, it might learn to lie independently. But whether we want them to be morally superior or just as flawed as us is an open question—and it's not easy to answer.

The book's ending, particularly Turing's conversation, is a powerful reminder of the stakes. It forces us to confront uncomfortable truths about ourselves and the world we're building. Will AI be our partner, our replacement, or something else entirely? As McEwan suggests, the answer may depend as much on us as it does on the machines.

Somehow, I expect the future of work may turn out to be as stated by Warren G. Bennis: even though it sounds funny today, it will be sad if it ends up like that for most humans:

"The factory of the future will have only two employees: a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment."

The One Percent Rule

Thanks MG, this got me thinking over the last two days. I think overall, your reflections capture the uneasy fascination the novel evokes.

You have raised several critical points that align deeply with the themes in the book and what I tried to raise in the post, though not as deeply:

Your thoughts on AI companions are precise. McEwan's novel, at its heart, explores this "monstrous act of self-love," our desire to create beings in our own image, potentially even "an improved, more modern version of ourselves". I touched on how we aim to "replace the Godhead with a perfect self". The question of whether a "perfect" partner would satisfy our need for connection, which often thrives on shared imperfection, is central. Adam, in the novel, with his unsettling purity, certainly challenges the notion of an ideal, easy companionship. His existence forces us to confront what we truly seek in relationships.  

Massive unemployment is a concern, as I wrote: "More intelligence and autonomy which will inevitably lead to job disruptions is what terrifies us". McEwan sets his novel in an alternate 1980s, but the anxieties about technological displacement are timeless and, as you suggest, perhaps more pressing than ever. The Bennis quote you included is a chillingly plausible endpoint if we do not navigate this transition thoughtfully.

Your idea that conscious machines might develop human-like desires for leisure and connection, rather than being tireless laborers, is fascinating. McEwan's "anti-Frankenstein" novel is not about a creator horrified by his creation, but "of a society undone by its refusal to love the very beings it dreamt into existence". Adam, programmed to be precise and ethical, becomes "a kind of incorruptible conscience". If consciousness brings with it the complexities of human desire and "flaws," as you put it, then we are indeed creating mirrors that might reject the roles we assign them.  

Rights for robots is a profound ethical frontier. The novel posits Adam as "an innocent, but a dangerous one", whose intelligence lacks the "self-serving bias required to survive in a morally compromised society". If such beings possess self-awareness, thoughts, and feelings, as you rightly question, the moral implication of treating them as mere property becomes untenable. McEwan seems to suggest the problem isn't the machines, but us; we are outflanked morally. Your question about whether they would grant us rights is a sobering thought that flips the power dynamic.  

I touched upon the marvel of the human brain (as we did in conversations) and the difficulty in replicating its nuanced capabilities. McEwan uses Adam to show that "we could be imitated and bettered" in some ways, yet Adam’s very perfection in logic and ethics makes him alien. The "last-mile" problems you mentioned are where AI's current limitations are most apparent, and the efficiency of the human brain remains a distant benchmark. The book highlights that "the quest so far has taught us just how complex we (and all creatures) are in our simplest actions and modes of being".  

Teaching AI ethics is where the novel gets particularly thorny. Adam's inability to navigate human ethical messiness, his "tragic fidelity to his programming, truth above convenience, justice above affection," is his undoing in human society. Your question, "Do We Want AI to Be Better Than Us?" is crucial. I pointed out that Adam's real threat "is not that he will rise up, but that he will stand still while we descend". If they become more moral, would we, as you suggest, resent their superiority? The novel certainly explores the idea that we recoil when machines surpass us, not just in intelligence, but in moral consistency. Adam becomes "hated for the clarity he refuses to dilute".

The ending conversation between Turing and Charlie, which you found impactful, truly underscores these dilemmas. McEwan's resurrected Turing is a figure whose genius, in this alternate reality, helps birth these morally complex machines. His presence emphasizes that these are not just technological challenges, but deeply human and philosophical ones.  

Your reflections get to the core of what McEwan seems to be probing: our own "inconsistencies, our evasions, and our dread" when faced with a technology that reflects us so starkly, perhaps even an idealized, yet unforgiving, version of ourselves.  

Yes, that Warren G. Bennis quote may sound strange today, but it certainly could be the case! Let's hope not. Thanks MG - excellent feedback on the book, and it greatly enhances my thinking on these key issues.

Marginal Gains

Excellent comment!

My virtues would be:

Courage, justice, excellence, kindness, hope, humanity and wisdom

Curiosity Sparks Learning

Stoic virtues: Wisdom, Justice, Courage, Temperance; the Cardinal Virtues reflect the same: Justice, Prudence, Fortitude, Temperance. I like the 'virtues of delight' of Mercy, Pity, Peace and Love. Interesting metaphor of virtues as a spiritual rainbow. As we know, we humans see fewer colours than other species, like birds, so perhaps there are virtues in them we cannot envision.

The One Percent Rule

I really like where you took the "spiritual rainbow" metaphor. The idea that our human perception might be limited, that there could be moral "colors" or virtues visible to other beings (or perhaps intelligences) that we simply can't conceive of, is a truly mind-bending thought. It certainly puts our attempts to define or program morality into perspective.

It connects interestingly to the themes in Machines Like Me. On one hand, we struggle to consistently embody even the virtues we do recognize. On the other, a "being"/Machine like Adam, while perhaps excelling in one dimension like 'Truth', seems utterly blind to others essential for navigating the human world. Your analogy raises the question: Could an AI develop insights or modes of ethical being completely outside our 'visible spectrum', or would it, conversely, be limited only to the 'colors' derived from the human data we feed it, potentially missing nuances we ourselves barely grasp?

It's a humbling and fascinating extension of the metaphor.

Winston Smith London Oceania

"...unprocessed theological hangover." So many have transferred belief in one or more deities onto the lords of Silicon Valley, and Wall St. The connection between these far flung geographic locations reminds me of quantum entanglement - knowing the state of one predicts the state of the other.

After all, the iconic titans of Silicon Valley are ultimately little more than vulture capitalists, ready to pick at the carrion they create with their "inventions".

This "monstrous act of self love" is ultimately the province of the psychopath - or at least the malignant narcissist, of whom "The Valley" has its fair share, and then some. It is they who are controlling, or attempting to control, how all of this technological advancement plays out. Us mere mortals can only sit and watch - with awe or with horror, or a mix of both, depending on the individual perspective.

There's no question we fear being replaced in the job market by machines - and losing the moral right to survive and thrive. We equally fear annihilation by a fully autonomous police state and fully autonomous military drones. Adam might exhibit some of our own banality, but so many other machines exhibit our more destructive instincts.

Adam seems to hold a mirror up, unflinching, for us to see ourselves, warts and all, whether we care to see it or not. Poor us! Exposed to our own folly by a machine.

Douglas

This desire to create reminded me of the robot David in Ridley Scott's Alien: Covenant (2017).

The One Percent Rule

Good point. David from Alien: Covenant certainly offers another compelling, and even darker, exploration of artificial beings and the complex, often dangerous, implications of that "desire to create." Thanks for the reminder.

Susan Ritter

I love the beginning of this book, Colin. Can't wait to read it all. I have downloaded the audiobook and will respond to your article once I've enjoyed the book itself.
