My comments and questions below are not explicitly directed at your post but at everyone on both sides of the AI debate—those who believe AI will solve all our problems and those who fear AI will destroy us.
We will eventually develop a technology that equals or surpasses human intelligence across all domains. However, the timeline for this is uncertain. Predicting the future is incredibly difficult, especially for a technology we do not fully understand. I am reminded of a quote by William Goldman, a figure from the motion picture industry, which applies well here:
"Nobody knows anything... Not one person in the entire motion picture field knows for a certainty what's going to work. Every time out, it's a guess and, if you're lucky, an educated one."
With that perspective in mind, I've compiled questions I believe all of us should consider. Some of these I've asked before, while others are new:
1. Ethical and Societal Impacts of AI:
a) At what cost will we achieve advanced AI capabilities?
b) Who decides that the benefits of this technology outweigh the costs?
c) Who will benefit from this technology beyond the abundance of cheap goods and services?
d) Will this technology enhance essential aspects of a good life, such as happiness, fulfillment, purpose, and meaning?
e) How will society adapt if AI fundamentally changes or eliminates the concept of work?
f) What will be our purpose or identity in a world where work is no longer central to life?
g) How do we ensure equity and fairness in AI access so it doesn't widen societal divides?
h) What happens to marginalized or underrepresented groups in a world dominated by AI?
2. Decision-making and Governance:
a) How are we involved in AI development and deployment decision-making?
b) What forums or platforms exist to include public voices in shaping AI policies?
c) Who is responsible for ensuring ethical oversight of AI technologies?
d) How do we ensure transparency and accountability for decisions made by AI systems?
e) What mechanisms can we implement to regulate AI development globally while considering differing national interests?
f) What role should governments, corporations, and citizens play in shaping AI governance?
3. Risk Management and Consequences:
a) What would happen if this technology got out of control?
b) Is society ready for the potential consequences of AI's misuse or failure?
c) Who will pay for the consequences of AI failures, such as job displacement or unintended harm?
d) Once we lose the ability to build and innovate independently, how will we fight back if AI stops working or goes rogue?
e) AI will be a dual-use technology. How do we mitigate the risks of malicious AI, such as in warfare or misinformation campaigns?
f) What safeguards can be implemented to prevent AI systems from prioritizing harmful goals over human well-being?
4. AI's Role in Solving Global Problems:
a) If these problems are beyond human capability to solve alone, do we not need AI to help us?
b) What alternative plans or technologies could address these challenges if we do not rely on AI?
c) How do we ensure that AI solutions to global issues are sustainable and ethical?
d) What are the risks of over-relying on AI to solve problems that require human values and judgment?
5. Education and Human Development:
a) How will education evolve in a future dominated by AI?
b) Do we even need to educate ourselves if AI renders most current knowledge and skills obsolete?
c) What should we focus on teaching future generations to thrive in an AI-driven world?
d) How do we preserve critical thinking, creativity, and human ingenuity in a world where AI performs all or most tasks?
e) How do we avoid becoming overly reliant on AI and losing the ability to innovate and problem-solve independently?
6. Philosophical and Existential Questions:
a) What does it mean to be human in an age where AI surpasses human intelligence?
b) Should we develop limits to what AI can achieve, and who decides those limits?
7. Additional Questions:
a) How do we prevent the monopolization of AI technologies by a few corporations or nations?
b) How will cultural and societal differences shape the development and deployment of AI?
c) How do we ensure AI respects and preserves diverse cultural values and traditions?
d) How do we measure the success of AI—by its efficiency, ethical outcomes, or alignment with human values?
e) How do we balance innovation with caution to avoid unintended long-term consequences of AI development?
f) How would we know that we have reached ASI so that we can control it?
These are all great questions, and ones we seek to answer on my AI for Executives program in conjunction with the larger research community and people involved in psychology, philosophy, economics, etc. I am considering providing that program to a larger audience online from February/March - just working on my IP.
Number 7a - I wrote a report on that for the Polish government. In short, we can't: US companies will be monopolies and will close the back-office support infrastructures in Europe. It is happening already - IBM, Apple, and Citi are all closing back-office work here due to AI.
Number 4 - other than climate, we do not spend enough time on this.
Number 3 - there is significant work in this domain. Ultimately the alignment problem is very much a human problem, so we must have the safety rails in place.
Remember I mentioned in one of my comments over the last two weeks that constraints are good, and that China would build something equivalent without requiring higher-end chips or the money the US companies are spending. Now it has happened with DeepSeek R1. Read #3 and #4 in the note above.
There is a lot floating around on X about this - questions over R1 with respect to compute, and compute will still be needed for AGI and SSI. I need to speak with some people about this to clarify it.
By the way, amongst the noise of the $500 bn announcement, Google put another $1 billion into Anthropic!
If someone develops a better strategy or model—one that truly integrates common-sense reasoning into AI—we may finally see how much computational power is genuinely required. Current AI systems rely heavily on brute force, leveraging vast datasets and computational resources, but this approach has limits. To replace all jobs, we will need more than data-driven predictions and basic reasoning; we will need systems capable of understanding context, making causal inferences, and adapting to ambiguous situations.
From my experience (though anecdotal, as a sample size of one), brute force can only take you so far. At some point, breakthroughs in strategy and reasoning become essential to progress. The same applies to AI: without common sense, we will hit a wall regarding what these systems can achieve.
This is why companies like Google and Microsoft are hedging their bets. Both are investing heavily in diverse AI projects, recognizing that the future of AI is unpredictable. These investments are part of a broader $500B global push to advance AI capabilities, but whether brute force or smarter strategies will prevail remains to be seen.
You're absolutely right that simply throwing more data and computing power at the problem will only get us so far. True AI will require a deeper understanding of how humans reason and learn, and you've nailed it by emphasizing the importance of common sense, something we humans often take for granted, but it's essential for navigating the complexities of the real world :-)
I agree about the need to go beyond just predicting the next word or identifying patterns. AI needs to understand context, causality, and ambiguity to truly be useful in a wide range of applications.
I think it's important to consider the role of human cognition in developing AI with common sense, which I know the 3 main US labs are working on. Understanding how humans learn, reason, and make decisions could provide valuable insights... I also think we need to develop methods for explaining their reasoning processes. This will help us understand how they arrive at conclusions and build trust in their capabilities, which will be essential - the whole interpretability area is crucial for trust.
"To navigate the turbulent waters ahead, we must engage deeply with the ethical, philosophical, and practical dimensions of AGI, shifting our focus from individual success to collective progress."
Agreed, but who is 'we'? And how do 'we' have any meaningful input in the way AI is implemented? Do academic arguments sway the minds of tech CEOs whose primary obligations are fiduciary? Whatever happened to 'Technology Assessment' groups in the 1970s/80s? Did they influence how computing and the internet developed? Do 'we' have that much influence on politics? Or the military? Or Big-Pharma's next round of unproven vaccinations to be foisted on the public?
It seems to me at this techno-critical point, and at any point in history, that the power of 'we the people' to influence anyone or anything from mad kings to AI, is the power of choice as consumers, the power to withdraw our labour, and mass non-compliance. Beyond that, AI will be driven by the profit motive, and the ever-golden lure of 'what technology can do, it must do' -- just to see what happens -- like E=mc² and the bomb -- whatever anyone says.
I completed an MSc in Robotics & Automation at Imperial College in 1984/5. My 'practical' was to design a robotic arm and control algorithm to move stamped carbide cutting tips from the stamping machine to a circular plate which, when optimally full, would be lined up to be fired at 1,500°C (I think) to harden them.
The stamped cutting tips were compressed powder at that stage, and quite friable; the delicacy of human fingers handled them well. It was a repeat cycle of about 40 seconds. Three women sat in a small room and did that for 8 hours a day. (I used air suction to do the job, made more difficult because the triangular shapes had a hole in the middle.)
I don't know if it was ever implemented, and I wonder if my engineering prowess put these three women out of a job, which had a social element as they chatted to each other whilst working. To me the job was ultra-boring and ideal for robotics to serve a greater cause releasing these three women for greater things in life. I'm not sure if they saw it this way.
And I'm not sure to what extent AI suddenly releasing masses of people to freely choose to "do something better with their lives" is a sales pitch they would agree with.
Fascinating work. Yes, those jobs likely went to companies such as Universal Robots (Denmark) and Kuka, whose robotic arms replaced a lot of the 'mundane' tasks - and the social contact that went with them.
In fairness, I did research a few years ago: when 10 auto companies installed over 100,000 robots on their factory floors and took away those jobs, they added an additional 1.1 million jobs in sales, marketing, support, etc. But I think this time is different, as AI and robots combined will take away many more jobs. For a period we will see new jobs, but there will be disruption, so best to prepare. As you have :-)
Those are very important questions, Joe. 1 and 2 are certainly good goals. I did exactly both of these, and I know several people in SV at the AI labs who have acquired land.
3. Yes - these are highly relevant; being grounded and spending as much time as possible with loved ones are very good aspirations. I am working on this!
On a personal level, try to carve out the niche that will keep you relevant as long as possible: build a case for being the best at your job, augmented with AI, or find another niche that you are passionate about and work toward it.
Do you know Max Bennett? He wrote A Brief History of Intelligence. He is based in NY and managed to sell his first company and take time out to write the book, which is becoming a best seller. I had him on my AI program a year ago, and since then the main AI labs have got hold of his book. I mention him because he recognized AI was advancing rapidly and sought the book-writing path - plus he just sold his second company! He is about the same age as you - well worth connecting with him and trying to meet for ideas. https://www.abriefhistoryofintelligence.com/
1. OpenAI: AGI entails human-level cognitive capabilities across diverse tasks, including reasoning, problem-solving, learning, and adapting to novel situations, aiming to “automate human cognitive work.”
2. Google DeepMind: AGI is viewed as systems that learn and reason flexibly, akin to human intelligence, capable of performing a wide range of complex tasks, learning from limited data, and adapting to new situations.
3. Anthropic: AGI represents systems achieving human-level performance across various tasks, emphasizing safety, reliability, interpretability, and alignment with human values.
4. Google/Alphabet (Parent Company): AGI involves developing systems that can reason, learn, and problem-solve with general human-level capabilities, applicable across multiple intellectual tasks and practical applications.
5. Microsoft: AGI is defined as AI systems mastering diverse, sophisticated human intellectual tasks, with abilities to adapt, learn, reason, understand context, and solve real-world problems with minimal human guidance.
6. Meta: AGI focuses on AI that understands and interacts with the world in a human-like manner, capable of reasoning, planning, learning from limited data, and supporting human-machine collaboration across various tasks.
7. Amazon: AGI is implied through AI systems performing a wide variety of tasks without explicit programming for each, emphasizing practical deployment to solve real-world problems, particularly in e-commerce and cloud services.
8. Baidu: AGI involves creating AI with broad-ranging problem-solving abilities similar to humans, capable of integrating information from diverse sources to achieve human-like cognition supporting their products.
9. Tesla/xAI: AGI is defined as AI smarter than the smartest human, encompassing a broad set of cognitive skills, capable of performing the vast majority of human activities, with a focus on being truth-seeking and understanding the universe.
10. Allen Institute for AI: AGI is seen as systems exhibiting human-level performance across tasks requiring reasoning, common sense, and adaptability, with research focusing on benchmarks to measure progress towards such intelligence.
I am using AI extensively in multiple areas, including completing a Master's in Education. I also offer to teach it to every person I come in contact with in my daily life, just to get them started. Very few people comprehend what is happening. Even though they may use AI minimally in their work lives, they have yet to use it in their personal lives or see the explosive impact it will have on our society. I recently attended an AI event where a CEO likened this pivotal AI moment to when Frederick Taylor's scientific management was applied to manufacturing and labour jobs. While we might say that it did have some worker benefits, the benefit mostly leaned to the company owners and less so to the workers, who often ended up in soul-draining jobs as proverbial cogs in the machines. This is my concern with the advancement of AI if we are not engaged in preventing that.
That is a good connection with Frederick Taylor. Unfortunately, the broad gains go to a few and too many are cogs in the machine. Your approach of teaching every person you connect with is solid - the more we can raise awareness, the more others will start to catch up and try to gain some benefits. We must be engaged as you say.
Michael Simmons refers to three distinct groups that approach AI: AI Tourists, AI Toolmakers, and AI Orchestrators. These are really stages of progression. What I focus on is increasing AI Tourists (the beginners) by raising their curiosity about AI through what matters to them. For example, I recently sat alongside a person who had used zero AI tools to demonstrate how to use AI to manage their two illnesses in a way that better informed them, led by dialoguing with their own health care documentation, which we inputted. At the end, they understood how they were interfering with their own recovery, and they had detailed questions to ask their physician to gain better health care. Yes, I start with all the essential caveats: AI isn't always right. I realize this example can be fraught with complications of the "the internet said it, so it's true" variety. However, people need to understand that AI is NOT Googling something, which many people think it is. Critically, this opens the AI Tourist to an ongoing interest in the impact of AI. In fact, some share their chat conversations with me, thereby establishing an ongoing connection. Starting with what matters to the human is the pathway to raising awareness. So, while one person at a time is slow, how else do we improve people's overall knowledge of where we are heading? We must know the concerns and benefits of AI, and we need everyone engaged, so that AI is for the benefit of humanity, not the machines and their owners.
That's excellent - I had not heard of the three distinct groups, and it is a wonderful example that you give. And, yes, people think it is Googling something. I'm particularly passionate about empowering those AI Tourists and turning them into informed, engaged users. My approach is to start with what matters most to them personally - as you say, "Starting with what matters to the human is the pathway to raising awareness." Demonstrating practical applications and addressing their concerns head-on sparks their curiosity and fosters a lasting appreciation for AI's potential.
These individual interactions can have a ripple effect, with people sharing their experiences and inspiring others to learn more.
Rather intriguing that your approach - demonstrating practical applications and addressing concerns head-on to spark curiosity and foster a lasting appreciation for AI's potential - is in alignment with my own. This has always been my pedagogy: find a person's delight, aka their curiosity or area of interest, and engage with them through conversation to spark learning beyond this area. It results in an optimum outcome for learning.
Exactly - unless they are deeply interested, learning is often passive instead of active.
Thank you I appreciate the post!
One of the key things that comes to mind is that there seems to be a decline in the relevance of humans as labourers, as AI takes over some jobs in the near future. As a society, I view this as an opportunity for the unleashing of human capacity - or the end of it.
Currently, the main beneficiaries of this development at least in the short term are companies that can harness AI, just like your Google example. As company profits grow, income concentration also does.
While profit from value is good, there’s a case for reaching a more balanced approach and bringing about more comprehensive social support, from more services to the much debated universal income. For instance, if an artist can dedicate 100% to their art without aiming at profit, this could lead to new understandings of the human experience. Similarly, people would be drawn to work that is meaningful to them, rather than what can “make a living”.
I am excited at the prospect of us as a society at making the right choices. The ones that allow us to live in a more true, beautiful and good world.
Thank you,
I think the point about "meaningful work" is key; especially when survey after survey on 'work satisfaction' indicates about 70%+ would quit their jobs if they could afford it. So the question becomes: How, and to what extent, will/can AI contribute to meaningful work?
That is the key, Joshua - I have always put myself fully into the work I have done, and found meaning in some form. Could I afford to quit? No, but I have walked away when I felt the work had overstayed its welcome - even when it was not an easy choice financially.
Will/can AI contribute to meaningful work? Good question - it is how we approach the work that matters. And can AI open up the doors of abundance the developers promise? If UBI is an option, maybe many more will pursue what brings meaning to their lives.
I'm sure AI will open up ways of working with it that even developers have not foreseen. Once the general public is let loose on a new technology, especially one that can be accessed from home, who knows what might emerge in the way it is used? I think lockdown was a big wake-up call for many regarding what they were doing with their lives - and I think AI will have a similar effect.
Very good example with lockdown - yes, it will be remarkable what people find to do with more time :-)
Thank you, Philip. I agree - the point about companies benefiting in the short term is spot on. It's critical that we find ways to ensure a more equitable distribution of the benefits of AI. Increased social support, whether through expanded services or universal basic income, could be part of the solution. This could allow people to pursue more meaningful work, as you said. I like the idea of artists dedicated solely to their art - imagine the flourishing of creativity and innovation that could result.
Like you, I'm excited about the possibilities, but we need to be proactive and make conscious choices to shape a future where AI benefits all of humanity. It's a chance to create a more true, beautiful, and good world, and I believe we can do it if we work together and build consensus from the grass roots so those in power take appropriate action.
Second comment.
I'll paraphrase Terence McKenna.
If we feel we've evolved from monkeys who dropped down from the trees and headed out onto the grass plains, then computers and machines seem like a great job.
If, on the other hand, we are angels who have come to earth to learn, grow and evolve, then it's a pretty poor performance so far.
I wholeheartedly support the latter. So we still, just as angels without tech, have huge potential.
The book cover gives it away. If we are only seen as thinking beings (as per Descartes), then yes, a robot has a chance. However, if we are seen as immortal beings with souls who feel because they have a heart, then robots are like a dishwasher or a hoover. Sorry to be so blunt, but we are not comparing like with like here.
So true, Vincent - tomorrow I will post on my experiences of developers playing God and building a new species!
Sounds intriguing. I look forward to reading it.
The Gods must be in the air, I have just written this: https://shorturl.at/QbVNU
My comments and questions below are not explicitly directed at your post but at everyone on both sides of the AI debate—those who believe AI will solve all our problems and those who fear AI will destroy us.
We will eventually develop a technology that equals or surpasses human intelligence across all domains. However, the timeline for this is uncertain. Predicting the future is incredibly difficult, especially for a technology we do not fully understand. I am reminded of a quote by William Goldman, a figure from the motion picture industry, which applies well here:
"Nobody knows anything... Not one person in the entire motion picture field knows for a certainty what's going to work. Every time out, it's a guess and, if you're lucky, an educated one."
With that perspective in mind, I've compiled questions I believe all of us should consider. Some of these I've asked before, while others are new:
1. Ethical and Societal Impacts of AI:
a) At what cost will we achieve advanced AI capabilities?
b) Who decides that the benefits of this technology outweigh the costs?
c) Who will benefit from this technology beyond the abundance of cheap goods and services?
d) Will this technology enhance essential aspects of a good life, such as happiness, fulfillment, purpose, and meaning?
e) How will society adapt if AI fundamentally changes or eliminates the concept of work?
f) What will be our purpose or identity in a world where work is no longer central to life?
g) How do we ensure equity and fairness in AI access so it doesn't widen societal divides?
h) What happens to marginalized or underrepresented groups in a world dominated by AI?
2. Decision-making and Governance:
a) How are we involved in AI development and deployment decision-making?
b) What forums or platforms exist to include public voices in shaping AI policies?
c) Who is responsible for ensuring ethical oversight of AI technologies?
d) How do we ensure transparency and accountability for decisions made by AI systems?
e) What mechanisms can we implement to regulate AI development globally while considering differing national interests?
f) What role should governments, corporations, and citizens play in shaping AI governance?
3. Risk Management and Consequences:
a) What would happen if this technology got out of control?
b) Is society ready for the potential consequences of AI's misuse or failure?
c) Who will pay for the consequences of AI failures, such as job displacement or unintended harm?
d) Once we lose the ability to build and innovate independently, how will we fight back if AI stops working or goes rogue?
e) AI will be a dual-use technology. How do we mitigate the risks of malicious AI, such as in warfare or misinformation campaigns?
f) What safeguards can be implemented to prevent AI systems from prioritizing harmful goals over human well-being?
4. AI's Role in Solving Global Problems:
a) If these problems are beyond human capability to solve alone, do we not need AI to help us?
b) What alternative plans or technologies could address these challenges if we do not rely on AI?
c) How do we ensure that AI solutions to global issues are sustainable and ethical?
d) What are the risks of over-relying on AI to solve problems that require human values and judgment?
5. Education and Human Development:
a) How will education evolve in a future dominated by AI?
b) Do we even need to educate ourselves if AI renders most current knowledge and skills obsolete?
c) What should we focus on teaching future generations to thrive in an AI-driven world?
d) How do we preserve critical thinking, creativity, and human ingenuity in a world where AI performs all or most tasks?
e) How do we avoid becoming overly reliant on AI and losing the ability to innovate and problem-solve independently?
6. Philosophical and Existential Questions:
a) What does it mean to be human in an age where AI surpasses human intelligence?
b) Should we develop limits to what AI can achieve, and who decides those limits?
7. Additional Questions:
a) How do we prevent the monopolization of AI technologies by a few corporations or nations?
b) How will cultural and societal differences shape the development and deployment of AI?
c) How do we ensure AI respects and preserves diverse cultural values and traditions?
d) How do we measure the success of AI—by its efficiency, ethical outcomes, or alignment with human values?
e) How do we balance innovation with caution to avoid unintended long-term consequences of AI development?
f) How would we know that we have reached ASI, so that we can still control it?
Enough for now.
The following is an excellent post talking about challenges:
https://www.strangeloopcanon.com/p/what-would-a-world-with-agi-look
These are all great questions, and ones we seek to answer on my AI for Executives program, in conjunction with the larger research community and people involved in psychology, philosophy, economics, etc. From February/March I am considering providing that program to a larger audience online - just working on my IP.
Number 7a - I wrote a report on that for the Polish government. In short, we can't: US companies will be monopolies and will close the back-office support infrastructures in Europe. It is happening already - IBM, Apple, and Citi are all closing back-office work here due to AI.
Number 4, other than climate, we do not spend enough time on this.
Number 3. There is significant work in this domain - ultimately the alignment problem is very much a human problem, so we must have the safety rails in place.
Cyber threats are a big one!
Dan Hendrycks has a good online program on safety, here is the book - https://www.aisafetybook.com/
and course - https://www.aisafetybook.com/curriculum
https://substack.com/@microexcellence/note/c-87818906
Remember I mentioned in one of my comments over the last two weeks that constraints are good, and that China would build something equivalent without requiring higher-end chips or the money the US companies are spending. Now it has happened with DeepSeek R1. Read #3 and #4 in the above note.
There is a lot floating around on X about this - questions over R1 with respect to compute, and whether massive compute will still be needed for AGI and SSI. I need to speak with some people about this to clarify it.
By the way, amongst the noise of the $500 bn announcement, Google put another $1 billion into Anthropic!
If someone develops a better strategy or model—one that truly integrates common-sense reasoning into AI—we may finally see how much computational power is genuinely required. Current AI systems rely heavily on brute force, leveraging vast datasets and computational resources, but this approach has limits. To replace all jobs, we will need more than data-driven predictions and basic reasoning; we will need systems capable of understanding context, making causal inferences, and adapting to ambiguous situations.
From my experience (though anecdotal, as a sample size of one), brute force can only take you so far. At some point, breakthroughs in strategy and reasoning become essential to progress. The same applies to AI: without common sense, we will hit a wall regarding what these systems can achieve.
This is why companies like Google and Microsoft are hedging their bets. Both are investing heavily in diverse AI projects, recognizing that the future of AI is unpredictable. These investments are part of a broader $500B global push to advance AI capabilities, but whether brute force or smarter strategies will prevail remains to be seen.
You're absolutely right that simply throwing more data and computing power at the problem will only get us so far. True AI will require a deeper understanding of how humans reason and learn, and you've nailed it by emphasizing the importance of common sense, something we humans often take for granted, but it's essential for navigating the complexities of the real world :-)
I agree about the need to go beyond just predicting the next word or identifying patterns. AI needs to understand context, causality, and ambiguity to truly be useful in a wide range of applications.
I think it's important to consider the role of human cognition in developing AI with common sense, which I know the 3 main US labs are working on. Understanding how humans learn, reason, and make decisions could provide valuable insights... I also think we need to develop methods for explaining their reasoning processes. This will help us understand how they arrive at conclusions and build trust in their capabilities, which will be essential - the whole interpretability area is crucial for trust.
"To navigate the turbulent waters ahead, we must engage deeply with the ethical, philosophical, and practical dimensions of AGI, shifting our focus from individual success to collective progress."
Agreed, but who is 'we'? And how do 'we' have any meaningful input in the way AI is implemented? Do academic arguments sway the minds of tech CEOs whose primary obligations are fiduciary? Whatever happened to 'Technology Assessment' groups in the 1970s/80s? Did they influence how computing and the internet developed? Do 'we' have that much influence on politics? Or the military? Or Big-Pharma's next round of unproven vaccinations to be foisted on the public?
It seems to me at this techno-critical point, and at any point in history, that the power of 'we the people' to influence anyone or anything from mad kings to AI, is the power of choice as consumers, the power to withdraw our labour, and mass non-compliance. Beyond that, AI will be driven by the profit motive, and the ever-golden lure of 'what technology can do, it must do' -- just to see what happens -- like E=mc2 and the bomb -- whatever anyone says.
I completed an MSc in Robotics & Automation at Imperial College in 1984/5. My 'practical' was to design a robotic arm and control algorithm to move stamped carbide cutting-tips from the stamper machine to a circular plate which, when optimally full, would be lined up to be fired at 1500°C (I think) to harden them.
The stamped cutting-tips were compressed powder at that stage, and quite friable. The delicacy of human fingers handled it well. It was a repeat cycle of about 40 seconds. Three women sat in a small room and did that for 8 hours/day. (I used air suction to do the job, made more difficult because the triangular shapes had a hole in the middle).
I don't know if it was ever implemented, and I wonder if my engineering prowess put these three women out of a job - one which had a social element, as they chatted to each other whilst working. To me the job was ultra-boring and ideal for robotics, serving a greater cause by releasing these three women for greater things in life. I'm not sure if they saw it this way.
And I'm not sure to what extent AI suddenly releasing masses of people to freely choose to "do something better with their lives" is a sales pitch they would agree with.
Fascinating work. Yes, those jobs likely went to companies such as Universal Robots (Denmark) and Kuka, whose robotic arms replaced a lot of the 'mundane' tasks and the social contact that went with them.
In fairness, I did research a few years ago - when 10 auto companies installed over 100,000 robots on their factory floors and took away those jobs, they added an additional 1.1 million jobs in sales, marketing, support, etc. But I think this time is different, as AI and robots combined will take away many more jobs. For a period we will see new jobs, but there will be disruption, so best to prepare. As you have :-)
Thanks for the thoughts, Colin. I am trying to think through what to do personally as a 33-year-old navigating this. My initial reactions have been:
1) A desire to own physical land in a place that "feels safe." I live in NYC now, but am feeling a desire to move somewhere more remote.
2) Invest less in a 401k and try to get more exposure to AI stocks, which still seem underpriced
3) Live in the present moment, keep developing a meditation practice, and spend a lot of time with my loved ones
Anything you've done personally that you feel has been productive?
Those are very important questions, Joe. 1 and 2 are certainly good goals. I did exactly both of these, and I know several people in SV at the AI labs who have acquired land.
3. Yes - this is highly relevant. Being grounded and spending as much time as possible with loved ones are very good aspirations. I am working on this!
On a personal level, trying to carve out the 'niche' that will keep you relevant as long as possible. Building a case for you to be the best at your job, augmented with AI. Or finding another niche that you are passionate about and working toward that.
Do you know Max Bennett? He wrote A Brief History of Intelligence. He is based in NY and managed to sell his first company and take time out to write the book, which is becoming a best seller. I had him on my AI program a year ago, and since then the main AI labs have got hold of his book. I mention him because he recognized AI was advancing rapidly and chose the book-writing path, plus he just sold his second company! He is about the same age as you - well worth connecting with him and trying to meet for ideas. https://www.abriefhistoryofintelligence.com/
Ah cool! I will read his book -- he actually went to my alma mater, so I know of him. Appreciate your thoughts.
Great. Big coincidence - he is a super kind person, I'm sure he would be happy to talk.
Some other definitions of AGI:
1. OpenAI: AGI entails human-level cognitive capabilities across diverse tasks, including reasoning, problem-solving, learning, and adapting to novel situations, aiming to “automate human cognitive work.”
2. Google DeepMind: AGI is viewed as systems that learn and reason flexibly, akin to human intelligence, capable of performing a wide range of complex tasks, learning from limited data, and adapting to new situations.
3. Anthropic: AGI represents systems achieving human-level performance across various tasks, emphasizing safety, reliability, interpretability, and alignment with human values.
4. Google/Alphabet (Parent Company): AGI involves developing systems that can reason, learn, and problem-solve with general human-level capabilities, applicable across multiple intellectual tasks and practical applications.
5. Microsoft: AGI is defined as AI systems mastering diverse, sophisticated human intellectual tasks, with abilities to adapt, learn, reason, understand context, and solve real-world problems with minimal human guidance.
6. Meta: AGI focuses on AI that understands and interacts with the world in a human-like manner, capable of reasoning, planning, learning from limited data, and supporting human-machine collaboration across various tasks.
7. Amazon: AGI is implied through AI systems performing a wide variety of tasks without explicit programming for each, emphasizing practical deployment to solve real-world problems, particularly in e-commerce and cloud services.
8. Baidu: AGI involves creating AI with broad-ranging problem-solving abilities similar to humans, capable of integrating information from diverse sources to achieve human-like cognition supporting their products.
9. Tesla/xAI: AGI is defined as AI smarter than the smartest human, encompassing a broad set of cognitive skills, capable of performing the vast majority of human activities, with a focus on being truth-seeking and understanding the universe.
10. Allen Institute for AI: AGI is seen as systems exhibiting human-level performance across tasks requiring reasoning, common sense, and adaptability, with research focusing on benchmarks to measure progress towards such intelligence.
Common threads:
- generalization
- reasoning and problem-solving
- adaptability and experiential learning
- alignment and safety
From - https://x.com/RileyRalmuto/status/1881596921078337763