Discussion about this post

Joshua Bond:

Thank you for the summary showing me why I should read this book. It appears to dig into what I call the 'pre-questions' of morality.

I.e., before we examine issues of right and wrong, we first need to ask IF we are, then WHAT we are, and thirdly WHO we are, as humans. We need to know in what way we exist at all as humans, especially when confronted by AI and its powerful backers.

Not that any coherent answer, or novel, no matter how well argued, will stop an ideologically driven AI juggernaut. (I make this statement from wondering why hundreds of protests around the world are needed, and then ignored or suppressed, merely in an attempt to point out the morally obvious: that innocenticide in Gaza should stop.)

If something as crass and basic as this fails to gain moral ground in relation to the world's war industry (and it is failing, after 18 months of protests, since the war continues), then what hope is there for restraint regarding the AI roll-out?

Having said all that, it appears McEwan's book attempts to address the questions of human self-identification that must be settled before any voices of caution (or cease and desist) can carry real clout. I wish I could be more optimistic.

Marginal Gains:

Excellent post. It made me think about our motivations behind building AI or intelligent machines (I’ll use the terms interchangeably). There are surely more reasons than I’ve listed here, but I’ve picked my top ten motivations:

1. The Desire to Act as Creator: Humanity has always sought to emulate creation through art, technology, or innovation. Building AI is the ultimate manifestation of this desire: creating something in our image or even surpassing it. It’s about proving that we, in some ways, can shape intelligent life, much like the gods or forces we once worshipped. However, what responsibilities do we bear toward it if we successfully create intelligence? And, are we truly prepared for the consequences of such god-like acts?

2. Overcoming Our Limitations: Our challenges—climate change, pandemics, food insecurity, and space exploration—are growing increasingly complex, often beyond the limits of human intelligence. AI acts as a tool to augment our abilities, enabling us to solve problems that are otherwise out of reach. However, there’s a flip side: If we rely too heavily on AI, could we lose the ability to solve problems ourselves? Humanity’s ingenuity will weaken if we delegate too much to machines.

3. The Drive for Wealth and Power: AI is undeniably an economic powerhouse. It automates industries, optimizes systems, and creates entirely new markets. But this pursuit of wealth and power often concentrates control in the hands of a few, resulting in inequality and exploitation. The question of how we regulate AI to ensure its benefits are distributed fairly remains unresolved—and it’s one of the most urgent challenges of our time.

4. Transcending Human Fragility: As transhumanist ideas gain traction, AI is often seen as a path to transcend our biological limitations—aging, disease, or even death. Whether through mind-uploading, cybernetic enhancements, or hybrid machines, AI could help humanity achieve digital immortality, enabling us to survive in deep space or beyond Earth’s lifespan. But this raises questions: If we upload our consciousness into a machine, is it still us, or just a copy? What does it mean to "live" in such a state?

5. Escaping Drudgery for Utopia: AI promises to automate repetitive, dangerous, and mundane tasks that consume much of our time. In theory, this would free us to focus on creativity, leisure, and self-fulfillment—ushering in a utopia. However, history shows that technological advancements don’t automatically lead to utopian outcomes. Without careful planning, AI could exacerbate inequality, leaving many without work or purpose while a few reap the benefits of automation. Achieving this vision of utopia will require more than just technology—it will require intentional social and economic reforms.

6. Creating a More Moral Being: Humanity’s flaws—bias, greed, and violence—have driven us to imagine the possibility of creating a being more ethical and logical than ourselves. AI offers that opportunity. However, programming morality is no simple task. Whose morality will AI follow? And how do we ensure that the biases and flaws inherent in our data don’t create a machine that mirrors humanity’s worst traits? If we aren’t careful, we may end up with a system that is just as flawed as we are—but perhaps more powerful.

7. A Quest to Understand Intelligence: Beyond the practical applications, AI development is also a journey of curiosity. By building intelligence, we are trying to understand what it truly means to think, learn, and create. AI serves as both a tool for discovery and a reflection of our minds. But if we succeed in developing artificial consciousness, it could redefine our understanding of life—one of our most profound scientific and philosophical challenges.

8. Efficiency and Profit: AI is attractive from a practical perspective because it offers unparalleled efficiency. Machines that work 24/7 without needing breaks, rest, or fair wages are dreams for industries focused on maximizing profit. However, this raises ethical dilemmas. If machines become advanced enough to emulate human intelligence, will they also demand rights, autonomy, or fair treatment? What happens if we succeed in creating machines that act more like humans than we expect?

9. Preserving Humanity’s Legacy: AI offers a unique opportunity to safeguard humanity’s legacy. Through intelligent systems that archive, analyze, and expand upon our knowledge, we can ensure our culture, ideas, and identity endure—even if humanity itself does not. However, this poses an interesting challenge: If AI becomes the guardian of our legacy, will it reinterpret it in ways we didn’t foresee? How do we ensure that AI preserves the essence of humanity rather than just its data?

10. Pushing the Boundaries of Innovation: At its core, building AI is about testing the limits of human innovation. It’s a frontier where we challenge ourselves to achieve the extraordinary and prove our capabilities. But pushing boundaries also involves risks. History has shown us that technological progress often outpaces our ability to manage its consequences. How do we balance our drive to innovate with the need to proceed cautiously?

As George Bernard Shaw said:

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.”

This quote captures the spirit of innovation behind creating AI: pushing boundaries, challenging the status quo, and attempting what seems unreasonable. Yet as we persist in adapting the world to our vision, we must also recognize that progress often comes with unforeseen costs. Let us prepare ourselves for these transformations, this time with a better understanding of the potential consequences and their impact on humanity.
