18 Comments
Norman Sandridge, Ph.D.:

Great post (again), Colin. Something I would wonder about a benevolent AI is how much its benevolence would be “philanthropic” (loving of humanity) or more directed toward animals and life in general. Would it aspire to create a more balanced ecosystem, as some humans have sought to do (and which might necessitate a reallocation of resources that may disadvantage humans) or would it make humans the priority and expend all resources for their benefit, as some religious mythologies prescribe? What kind of divinity would it be?

Also, at this point I’m not sure whom I would trust in this world to manage a super-AI responsibly.

The One Percent Rule:

Thank you Norman. The scope of a benevolent AI's benevolence is a difficult question. I completely agree that it's not a given that it would prioritize humanity. It's entirely possible, even likely, that a super-AI would recognize the interconnectedness of all life and prioritize the health of the entire ecosystem, even if that meant some level of sacrifice or resource reallocation for humans. It really makes you wonder how we would react to that kind of 'benevolence.' Of course, it also raises the very complex question of how we define benevolence in the first place.

The question of what kind of 'divinity' it would be is equally thought-provoking, and one I have not considered in the terms you state. We would need to examine the attributes we associate with divinity across religions and ask whether an AI could embody them. And if it did, would we even recognize it? Could we even understand its values?

The question of trust is a big one, and a concern I share deeply. You are right to raise it: who, in our current world, could we truly trust to manage a super-AI responsibly? It's one of the most pressing, and most difficult, questions we face in this whole debate.

Norman Sandridge, Ph.D.:

“It really makes you wonder how we would react to that kind of 'benevolence.'”: presumably a super-AI would also anticipate this reaction, perhaps better than we could, and prepare accordingly, whatever that means? This makes me think of humanity’s varied reactions to the arrival of the Trisolarans in The Three-Body Problem, if you are familiar with that. Some people want to defend against them, some want them to become our masters, and some want them to wipe out humanity altogether, in hopes that they will bring balance to the planet.

The One Percent Rule:

Yikes, I have still not read The Three-Body Problem; I picked it up recently at a bookstore and put it back down. But you are the third person to mention it in the last two weeks, so I will order it and read it.

Agree - an ASI would be able to anticipate everything and prepare accordingly.

Norman Sandridge, Ph.D.:

Three Body is amazing and wonderfully mind-bending, but it doesn’t really anticipate AI. I recommend the book before the Netflix series (if you plan to watch it).

Gavin J. Chalcraft:

I would be very wary of a benevolent artificial intelligence for the reasons I outlined in this short article. We have to look behind these systems at who is creating them and question the motives behind both their ambitions and faux warnings.

https://open.substack.com/pub/gavinchalcraft/p/ai-if-this-is-not-troubling?r=s3qz0&utm_medium=ios

Joshua Bond:

Exactly. And when we find the intentions of the few have nothing to do with the 'common good' of the many - what options are open? Thoughtful scientists? Concerned technologists? On-our-side politicians? Religious authorities?

We-the-people have to do something en masse ourselves. But even millions on the streets seem to have no effect. The majority failed the Covid-Vaxx test of intelligence - has the lesson been sufficiently learned to lead to meaningful mass action in relation to AI? And what might such action look like?

Gavin J. Chalcraft:

Humans have the capacity for Self-Realization (if they so choose), and if that power is given up to an artificial intelligence then Self-Realization will no longer be available. That connection will be lost and will need to be retaught, as the ancient Rishis did thousands of years ago. I keep saying this over and over: AI is a kill switch on the human imagination, self-determination and Self-Realization. Mass action can work in the form of protests, although they seem to be losing their effectiveness. As Self-Realization and revelation are individual in their inner experience, it must start with the individual. Amongst many other things, I am an artist. I intend to die with a paintbrush in my hand, no matter how much AI takes over, not so much in protest, but because that is who I am, and I am who I am.

Joshua Bond:

I agree. I gave up academia to make things with my hands. It keeps me sane. AI can mimic aspects of being human but it's not, and can never be, conscious.

Gavin J. Chalcraft:

As always an insightful and thought-provoking article, Colin. I reached my Bingo moment when you wrote: "Yet, if history is a guide, it is not intelligence alone that shapes destiny, but wisdom, wisdom not only in how we develop AI but in how we define and instill ethical reasoning, foresight, and self-restraint into machines."

Here is my thought on the same subject. They are basically one and the same: https://open.substack.com/pub/gavinchalcraft/p/the-intelligence-problem?r=s3qz0&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Neil Anand:

🚨 When AI meets law enforcement, justice takes a backseat. 🤖💰 Predictive policing isn’t about safety—it’s a high-tech hustle, draining wealth from communities. The surveillance state isn’t watching crime—it’s manufacturing it. 🕵️‍♂️⚖️

https://doctorsofcourage.org/the-predictive-policing-racket/

The One Percent Rule:

Agree wholeheartedly

Curiosity Sparks Learning:

Colin, thank you for expressing so well my own deep concerns about AI. I am a frequent user and teacher of AI. I've not yet had time to read this book. How would you say it differs from Genesis, which mostly outlined questions, or Superagency, which is 99% optimistic? I found Nick Bostrom puzzling: his recent book (2024) outlines his optimism, whereas Superintelligence, ten years earlier, expressed deeper concerns. What happened to make him change his perspective?

My concerns revolve around those who heavily promote the Singularity, like P. Diamandis (just to pick one), and most of his podcast guests, who are excited about 'the end of the human epoch as we define it' and celebrate the movement toward transhumanism, an evolution of the human. His Super Abundance event this week is, to quote him directly:

SuperAbundant Summit will focus on converging exponential technologies creating billion-dollar opportunities. Convergence that is revolutionizing industries, longevity, and how we live, work, and invest. While others might see chaos in this accelerating technological wave, we see opportunity. While they may fear disruption, the Abundance360 community embraces it as the catalyst for billion-dollar breakthroughs.

Even though he'll raise a few ethical and moral concerns, and perhaps the concerns you've laid out in this article, they are mere tokens, as the overall focus ignores the intermediate negative outcomes for most humans.

I couldn't agree with you more that the issue is about power, but of course it is always about power and control, just like every battle and war has always been. I'm uncertain if 'the pen is mightier than the sword' will apply in this battle. We who engage in cognitive battle hold butter knives against those holding multiple machine guns.

The One Percent Rule:

Thank you. Regarding the book comparisons, Karp and Zamiska's The Technological Republic feels far more grounded in the realities of power and geopolitics than, say, Superagency, which, as you rightly point out, leans heavily into optimism. While Genesis certainly raised many important questions, Karp and Zamiska's work feels more urgent and focused on the immediate risks of unchecked AI development. That said, they emphasize that the US must win the race, full speed ahead, which contradicts my own concerns about the speed at which we are developing these tools. I do think we need to find a way to better control the output and 'who controls them.'

Nick Bostrom is perplexing indeed. I was at a conference with him for his book launch and could not get a straight answer about the change. It is possible that new research has given him more hope. It is very hard to keep up with the field right now.

I completely agree with your concerns about figures like Diamandis and the uncritical embrace of the singularity and transhumanism. The focus on 'billion-dollar opportunities' and the 'end of the human epoch as we define it' is deeply troubling. The ethical considerations, as you said, often feel like mere tokens, while the potential for widespread negative consequences is ignored.

The worry, as you say, is fundamentally about power and control, and you are right that it has always been this way. The disparity in resources and influence between those advocating for caution and those pushing for rapid advancement is significant. Many of us are trying to have a conversation and agree a path forward, while others are trying to create a revolution and outright ownership. We have lots to think about.

Curiosity Sparks Learning:

Thank you, Colin, for the detailed reply. Unfortunately, I doubt either of us has much control over our concerns about the speed. I will purchase The Technological Republic as I glean from your comments that it is a valuable read. What I'd like to do is input the books into Claude and delve into a deep analysis of each, and of where interplay and possible intersections occur, to create better discussions. Or perhaps you are doing this already, or have a student able to do this?
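
(For anyone wanting to try this, here is a minimal sketch of that comparative pass, using the Anthropic Python SDK (pip install anthropic). The prompt, function name, and model alias are illustrative assumptions, not a prescribed workflow, and full books would need to be chunked or summarized to fit the model's context window.)

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def compare_excerpts(excerpt_a: str, excerpt_b: str) -> str:
    """Ask Claude where two book excerpts intersect, diverge, or interplay."""
    prompt = (
        "Compare these two excerpts. Identify shared themes, points of "
        "disagreement, and possible intersections that could seed a "
        "better discussion.\n\n"
        f"Excerpt A:\n{excerpt_a}\n\nExcerpt B:\n{excerpt_b}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text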

As for Bostrom, I believe that if he had research that gave him more hope, he'd have shared it in his book, or if not, definitely with you in person.

Two years ago, one participant at Diamandis' event commented, "The smell of greed permeates us here." Yet those present are the ones with potential power and control over the future of AI; they are the ones who can be at the forefront of not only raising the ethical concerns, but of making them matter before millions of people's lives are shattered.

I like these words: 'Many of us are trying to have a conversation and agree a path forward, while others are trying to create a revolution and outright ownership.' We do indeed have lots to think about, but much less time than we'd want to do so. The key question is: how do those of us more inclined to conversation and cognitive engagement enter the sphere of those inclined to power, money and control? I wonder if your students, whose lives will be impacted more dramatically than probably yours or mine, would be willing to engage deeply with this question. Could they, and would they, engage with it at the level of urgency it requires, as the rapid advancement only accelerates each and every week? I believe it's possible the outcomes of their deep engagement could hold avenues of opportunity that address our mutual concerns.

The One Percent Rule:

I completely agree that we have limited control over the speed of AI development, which only underscores the urgency of our concerns. Have you seen the Pause AI work? I know they were giving out flyers in San Francisco this weekend.

I think your plan to use Claude for a deep analysis of the books we mentioned is fantastic! I'd be very interested to see what insights you uncover, so please feel free to share any findings. I'm not doing this myself, but I saw Karpathy asking for ideas on X. Here it is: https://x.com/karpathy/status/1866896395363553418

I'm always looking for ways to engage with these questions in a more structured way and to involve students, but we have nothing concrete on the books; we do with academic and AI research papers. I would be very interested in seeing what you find.

Your comment about the 'smell of greed' at Diamandis' event really resonates. It's a reminder of the challenge we face in ensuring that ethical considerations take precedence over profit and power.

That is a crucial issue. How do those of us focused on conversation and cognitive engagement enter the sphere of those driven by power and control? It's a question that I'm constantly grappling with. I think student involvement is absolutely essential. Their generation will be most impacted, and they bring a fresh perspective and energy to the table. I believe that they are the best chance we have.

Even at the EU working group we struggle with this, and we have high-level researchers like Yoshua Bengio leading one of our tracks!

I'm writing an essay based on the Threshold 2030 report, which is seriously alarming from an economics perspective, on inequality and control. I will publish it tomorrow.

Curiosity Sparks Learning:

Now I'm looking forward to tomorrow's article. ;) I was pondering our concerns in line with your article on Occam's razor. I'm wondering if perhaps we are failing to apply this principle to this issue? Also, students engaged with academia and research papers will tend toward complexity, especially in writing their own papers, to 'impress' you. So, how can that be combated? Also, I'm wondering what their assumptions are regarding AI, as you are teaching an introductory class. I have found with my students that too often they are unaware of mental models to employ, principles like Occam's razor, or even foundational knowledge I'd expect them to have in order to think and to reason through the issues. A long-time concern of mine.

Thanks for the X link. I use NotebookLM extensively; it is helpful. As well, I know you read Michael Simmons' posts; he extends the capabilities of these tools for his paid subscribers. I prefer using Claude for engagement when seeking in-depth conversations; it tends to highlight what I am failing to see myself. It thus becomes a Human + AI conversation, and thereby I live the paradox of our lives - use the AI, and yet the concern never leaves. Thank you again for the detailed reply.

Max Kern:

I have no answers but I do have a question. You wrote: "I am concerned enough to continue to lend my experience with these AI systems to ensure they are aligned with the values of a ‘free’ society." How can anyone "ensure" anything when it comes to the development of AI, and that "they" are aligned with the values of a free society? I am not being critical here, I just don't understand, because how can we be assured that the AI systems of China, Russia or Hungary will be designed to be aligned with these values? Perhaps I am naive, perhaps I simply don't understand the point you are making, or something else. But that sentence of yours triggered me since I could not make sense of it. But that's on me!
