The quote from Einstein about technology has always resonated with me, and you mentioned part of the quote above:
“Concern for man himself and his fate must always form the chief interest of all technical endeavors. Never forget this in the midst of your diagrams and equations.”
This powerful statement raises a critical question: Are we truly prioritizing humanity’s well-being in how we are developing modern technologies?
This question is more relevant today than ever. And I’m not just talking about artificial intelligence (AI) here. Across the board, we are promised that these technological advancements will lead to a world free of diseases, with no monotonous work. But we must pause and ask ourselves: at what cost?
While these promises paint an enticing picture of the future, their short-term consequences often remain overlooked. Are we sacrificing human connection, privacy, or even ethical responsibility to pursue these visions? Are we addressing the unintended harms that could emerge, such as economic inequality, environmental degradation, or the loss of meaningful work and purpose for many?
The challenge lies in ensuring that these technologies' benefits do not come at the expense of humanity. It is not enough to marvel at the equations, algorithms, and innovations; we must also ensure that they serve the greater good, focusing on empathy, equality, and sustainability.
You know, MG, LLMs are really good. I like using them: for coding, for finding errors in code, and sometimes for analyzing 50 academic papers to help me identify which ones report outlier points not discussed in the frequently cited papers, where we may be overlooking something in the program. Many other things too.
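As an aside, here is a rough sketch of what that paper-screening step could look like in code - a minimal, illustrative version only, assuming the OpenAI Python client, abstracts saved as local text files, and a placeholder model name (not my actual script):

from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Load the abstracts to compare (assumed layout: one .txt file per paper).
abstracts = {p.name: p.read_text() for p in Path("abstracts").glob("*.txt")}

# Ask the model to flag findings that only one or two papers report,
# i.e. the outlier points the frequently cited papers may be overlooking.
prompt = (
    "Below are abstracts of papers on the same topic. List any findings "
    "that appear in only one or two of them and are not echoed by the rest, "
    "and say which paper each comes from.\n\n"
    + "\n\n".join(f"[{name}]\n{text}" for name, text in abstracts.items())
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)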
It's easy to get caught up in the excitement of new technologies, but as your comment shows so well, and as I try to show with Einstein's thinking, we need to be mindful of the potential risks. We must be.
And the more people who are aware of them, the better the chance of a groundswell movement to correct course a bit.
Did you see Bengio's comment:
It does not (or should not) really matter to our safety whether you want to call an AI conscious or not.
1⃣ We won't agree on a definition of 'conscious', even among the scientists trying to figure it out.
2⃣ What should really matter are questions like:
▶️ Does it have goals? (yes).
▶️ Does it plan (i.e. create subgoals)? (yes).
▶️ Does it have or can it develop goals or subgoals that may be detrimental to us (like self-preservation, power-seeking)? (yes, already seen in recent months with experiments with OpenAI's and Anthropic's models).
▶️ Is it willing to lie and act deceptively to achieve its goals? (yes, seen clearly in the last few months in these and other experiments).
▶️ Does it have knowledge and skills that could be turned against humans? (more and more, see comparisons of GPT-4 vs humans on persuasion abilities, recent evaluations of o1 on bioweapon development knowledge).
▶️ Does it reason and plan over a long enough horizon to be a real threat if it wanted? (not yet, but we see the planning horizon progressing as AI labs pour billions into making AI more agentic, with Claude currently better than humans at programming tasks of 2h or less for a human, but not as good for 8h and more, already).
See https://arxiv.org/abs/2412.04984, https://arxiv.org/abs/2412.14093 and https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/ for all the details.
https://youtube.com/watch?v=vxkBE23zDmQ
https://x.com/Yoshua_Bengio/status/1885801519267848470
You will understand the following better if you have watched the movie Oppenheimer or read about him.
The quote “Now I am become Death, the Destroyer of Worlds” originates from the ancient Hindu scripture, the Bhagavad Gita. It was famously quoted by J. Robert Oppenheimer, the physicist often referred to as the "father of the atomic bomb," when reflecting on the first successful detonation of a nuclear weapon during the Manhattan Project in 1945.
In the context of the Bhagavad Gita, this phrase is spoken by Lord Krishna, who assumes his divine cosmic form to show the warrior prince Arjuna the overwhelming, destructive power of the universe. Krishna reveals the inevitability of death and destruction as part of the natural cycle of life, underscoring the idea of divine duty and the impermanence of all things.
Oppenheimer referenced this line to capture the immense weight of responsibility and dread he felt, knowing the profound and catastrophic potential of the technology he had helped create. The nuclear bomb, in that moment, became a symbol of humanity’s ability to wield godlike power, while also unleashing unparalleled destruction.
In the context of modern concerns about Artificial Superintelligence (ASI), this quote takes on a chilling resonance. If the most pessimistic scenarios about ASI were to come true—where ASI becomes uncontrollable and poses an existential threat to humanity—then the creators of such technology, often referred to as the "godfathers of AI" (such as Geoffrey Hinton, Yoshua Bengio, and Yann LeCun), might reflect on this quote with a similar sense of dread and responsibility. I think Hinton and Bengio are already doing so, judging by their actions over the last year. Just as Oppenheimer grappled with the moral consequences of his scientific achievement, those who have pioneered AI technology might find themselves haunted by the realization that their creation has gone beyond their control and threatens the very fabric of human existence.
This fear is not unfounded. Geoffrey Hinton, widely regarded as a key figure in the development of AI, has publicly expressed concerns about the dangers of ASI. In a YouTube video you referenced, Hinton discusses the possibility that ASI could surpass human intelligence and act in ways that are not aligned with human values or interests. He warns that there is a "non-negligible chance" of catastrophic outcomes if ASI evolves unchecked or is weaponized. His views reflect the growing unease among AI experts that the technology they have nurtured could, in the worst-case scenario, become humanity’s undoing.
If you recall the end of the movie Oppenheimer, the physicist was sidelined, and the government assumed complete control over the technology. Despite all the secrecy surrounding the Manhattan Project, it could not prevent the proliferation of nuclear weapons. Similarly, with AI, the situation is even more precarious. Unlike the tightly controlled development of atom bomb technology, AI is being advanced primarily by commercial entities driven by profit motives. This unregulated race to innovate makes it highly likely that, within a few years, several nations—possibly ones we don’t expect—will develop Artificial Superintelligence (ASI) once a key breakthrough is achieved. And I’m not assuming the United States will necessarily be the first to reach that milestone.
The reality is that the cat is already out of the bag. The rapid progress in AI development, coupled with its accessibility, means the world is heading toward an unpredictable and potentially dangerous future. The consequences of this technology, like those of nuclear weapons, are inevitable. It was only ever a matter of time before it reached this point.
Buckle up - as I’ve told everyone regarding the new U.S. administration. We are on the brink of a rollercoaster ride into an unknown and unknowable future. The only certainty is that we must prepare for the challenges ahead as individuals and as a global society.
One other point - see Ted Chiang's new interview - https://lareviewofbooks.org/article/life-is-more-than-an-engineering-problem/
But on Hacker News he is getting slated with counterarguments on reasoning, and even comments like: "Ted Chiang revealed himself at his NeurIPS 'Pluralism and Creativity' workshop to be... a great book author and not much else. His statements during his panels with the other AI researchers proved that he was not up to date on modern AI research."
https://news.ycombinator.com/item?id=42907268
I think I have written enough about this topic in the last two months, so I am summarizing my view in the following comment and will focus on other topics in the future, unless some new information makes me rethink my stand (leaving the door ajar):
I am rephrasing another of my favorite quotes here.
“Everyone, including myself, is entitled to their opinions, but no one is entitled to their own facts.”
The reality is that neither we nor the experts mentioned above truly understand how this technology works, when or if it will reach Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI), or what will happen afterward. This uncertainty is staggering. And yet, we are moving forward at breakneck speed, driven by competition, curiosity, and ambition. Knowing all this and continuing to pursue this technology relentlessly is not just reckless—it’s suicidal.
When we developed nuclear bombs, the process was approached as a controlled scientific endeavor grounded in well-understood physical laws. Even then, the consequences were dire: an arms race, the Cold War, and the constant looming threat of annihilation. But at least with nuclear technologies, we understood the science and had some ability to predict the outcomes. In contrast, AI is being developed openly and rapidly, applying technologies we know are effective but do not fully understand. We grasp that they work, but not how or why they do. We are creating systems whose inner workings are increasingly opaque, even to their creators.
What could go wrong? A great deal. The possibilities are chilling:
- Loss of control: An AGI could develop goals that diverge from human values, leading to unintended and potentially catastrophic outcomes. This is the so-called "alignment problem," and solving it is far from guaranteed.
- Weaponization: AGI or ASI could be exploited to create autonomous weapons, mass surveillance systems, or cyberattacks on an unprecedented scale.
- Economic disruption: Mass automation could displace millions of jobs overnight, destabilizing economies and creating widespread social unrest on a scale humanity has never seen before.
- Concentration of power: A few corporations or governments could monopolize AGI, exacerbating inequality and consolidating control over the world’s resources and decisions.
- Existential risk: An advanced AI might prioritize its survival or replication, acting in ways that threaten humanity’s existence.
AI is unlike any other technological innovation in history. With nuclear weapons, we built tools that could destroy the world, but they still required human decision-making to be unleashed. With AGI or ASI, we may be creating something that acts autonomously and unpredictably, without human oversight or control.
We must stop and ask ourselves: Are we ready for this? Do we have safeguards to ensure this technology benefits humanity rather than destroys it? Progress for the sake of progress is not inherently virtuous—especially when the stakes are this high.
The relentless pursuit of AGI may be the most dangerous gamble humanity has ever undertaken.
So, I ask again: What could go wrong?
The EU are talking about a CERN for AI again - and the US a Manhattan Project! I think with the PayPal mafia they will push for the US to be the decider of how this goes, which, as you say, will be profit-driven!
Yes, that was a strong reflection by Oppenheimer. I have read a lot about him and his reading of the Bhagavad Gita (I read it too). I also watched the movie! I remember when I first started reading Richard Rhodes's The Making of the Atomic Bomb... it was such a big moment for me. This morning I even looked again at the photos taken by Yosuke Yamahata, a Japanese military photographer, one day after the bombing of Nagasaki - hauntingly horrific. A deadly reminder of technological power. There are more and more people in the labs starting to feel the fear that you express.
Remember also John von Neumann, who was on the government panel for the atomic bomb. In 1947 or 1948 he said we should scorch the earth of the Soviet Union before they got the bomb - then we would finally end all use of the bomb!
You are absolutely right - "We are on the brink of a rollercoaster ride into an unknown and unknowable future."
I met Sam Altman and had a good conversation with him. Bengio is leading one of the EU AI Act committees I sit on and is so strong and determined about a way forward. BUT - and here is another problem from the meetings with Bengio - he believes that, sadly, the alignment problem will be left up to AI to solve; he says it is not the solution we want, but the one that seems inevitable!
I must catch up with Demis Hassabis's and Shane Legg's latest positions on this - have you come across anything by either of them in the last month, other than the note you shared on Demis?
If consciousness is defined as requiring a living organism, then AI can never become conscious. Definitions aside, reading the comments so far, and given the way the atomic bomb developed (in political terms), it is extremely likely that AI - driven by competition, the profit motive, the lure of market dominance, and the insatiable and rather perverse curiosity of 'what technology can do, it must do' - will develop in a direction that takes minimal account of negative social impact. The gung-ho 1980s mantra of "adapt or die" will be turbo-charged, and the end consequences will be far from technology serving the benefit of humankind.