As always, Colin, a thought-provoking post. I agree that "the gradual nature of this process makes it difficult to resist", and thus challenging to perceive when and how it unfolds, and what the final impact will be. I concur that there will be no single moment of crisis. It is possible to be cognitively informed and aware, and yet still be at a loss about what to do. I use AI extensively, but my concern that there will be a moment when "we wake up to find that our influence is a historical footnote" is growing. As per your suggestion, I will read the paper later today. I am halfway through the newly released book "Superagency" (Reid Hoffman, Greg Beato) and look forward to your post on it. I find myself both concurring and disagreeing with their analysis, but cognitive dissonance is my norm where AI is concerned.
Thank you. The material is all in the great paper; distilling such information is what I enjoy.
I'm currently reading Christopher Clark's The Sleepwalkers: How Europe Went to War in 1914. It is the same story over and over again. The signs were obvious in 1939, and they were in February 2022. Yet the vast majority of us fail to try to ward off the threat of major disruption. As you say, it is such a challenge. My deepest concern is that cognitive decline (and of course job losses) will set in long before action is taken.
It is right, in my view, to use AI extensively, yet also to keep building our cognitive muscles by reading, educating ourselves, and learning, as you show.
"SuperAgency" is a great title - we need to build our agency. I note that Sam Altman said we should try to do things not 10x better but 100x better. That is a bit extreme, but we must strive to do that bit more, especially, in my mind, more thinking.
AI probably has cognitive dissonance about us too.
Pretty sure the paper itself is great (I'll get around to reading it soon), but you also provide a lot of value by thoughtfully breaking it down for us.
And I think cognitive decline is already all around us - the response to these game-changing developments is always a step behind. Schools and teachers and parents all have to take time to orient themselves and review how they teach kids to live with technology that's already in their hands. There's a whole generation up and coming that has been using devices as a crutch... I guess we just have to keep calling them back to human thinking and human action, and trust that they'll come.
Sadly, I agree with you that "cognitive decline is already all around us". The reverse Flynn effect, which I have written about a few times, shows this. It is contentious because of how "IQ" is measured, but as you write in your essay, we settle for AI output. I have worked with CEOs who have presented data from AI and admitted to not checking the content thoroughly, relying on AI and "becoming lazy."
We have to act with urgency to reverse the cognitive decline and, as you say, keep calling people back to human thinking. Trust is a key word.
In an essay, I described AI in terms of the famous "emperor and the chessboard" parable. When the emperor asked the inventor what he wanted as a reward for inventing the game of chess, the inventor suggested a single grain of rice, doubled for each square on the chessboard.
This is a deceptively large ask: it essentially hands all the food the kingdom could ever produce to the inventor, in effect making the inventor the emperor. AI is a lot like this. The advancements in computing over the last 55 years were important, but these next few years make all the difference in terms of real outcomes.
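The arithmetic behind that claim is easy to check. Here is a minimal sketch in Python (the grains-per-kilogram figure is a rough ballpark assumption, not a precise constant):

```python
# Rice on a 64-square chessboard: one grain on the first square,
# doubling on every square after that.
# Total = 1 + 2 + 4 + ... + 2**63 = 2**64 - 1 grains.
total_grains = sum(2**square for square in range(64))
assert total_grains == 2**64 - 1  # geometric series identity

# Assuming roughly 25,000 grains of rice per kilogram (ballpark):
tonnes = total_grains / 25_000 / 1_000
print(f"{total_grains:,} grains, about {tonnes:,.0f} tonnes of rice")
# On the order of 700 billion tonnes; annual world rice production
# is on the order of half a billion tonnes.
```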
The question is, will we retain our position as the "master" or emperor, or will we inadvertently cede our power to the invention due to the rapidity of change?
That is a great analogy. I recall it from Ray Kurzweil in The Law of Accelerating Returns. Excellent use of it with AI.
You are absolutely right, JK: the next few years are so important, and that's why the more of us who step up, the broader the gains and progress.
We cede too much to others in many ways, and now it's time to have many more emperors enjoying and spreading the joys of the harvest.
Thanks for the reminder about the emperor and the chessboard - what is the link to your essay please?
https://www.lianeon.org/p/an-ai-empire
Another fine and thought-provoking article, thank you.
"... decline of human thinking and agency" seems to accompany various cases of technology being implemented in the work-place. To get people off the land and to go and work in the driven-by-the-clock factories of the Industrial Revolution was certainly a significant loss of agency. As was Fordism in the 1920s with production-line jobs; only made possible by Henry Ford doubling their wages - otherwise no-one was interested in such soul-destroying jobs, nor the loss of their agency/autonomy.
The growing use of credit/debit cards over the last 75 years is another example of the relentless 'war of attrition' on human agency ... all part of a generational switch of thinking in preparation for the acceptance of digital-only money. Well, maybe it's not a 'war of attrition' but merely what happens when technology meets human affairs. I agree it's a slow process, but it's relentless and seems to move in one direction. With AI the stakes are high.
My guide on these matters is often "The Riddle of Amish Culture" (Donald Kraybill, 1989). There you find a resilient, largely independent community who sign up to certain beliefs and principles, and who respect the elders who decide how they will engage with various technologies. I am intrigued to know how they will 'gate-keep' AI.
That's a good example, Joshua - the decline of human agency, with examples like the Industrial Revolution and Fordism, really highlights how this isn't a new phenomenon but rather a recurring pattern throughout history. I would add schools, which, with their 19th-century arrangements and teaching, seem to devour agency.
Your point about the shift towards digital currency is also well taken. It's easy to overlook how these seemingly small 'changes' can contribute to a larger erosion of autonomy.
The Amish example is fascinating and offers a unique perspective on how to thoughtfully engage with technology. I agree, and I am likewise curious about how they'll approach AI. It's a question worth exploring further. I have a friend who spent time over the Christmas period with an Amish family and community; I will ask him.
I am intrigued to know what your friend learnt from his time with the Amish.
I’ll respond later today when I have more time, as this is another topic I’ve spent considerable effort thinking about. Like yesterday, I’ll likely need a long walk to organize my thoughts. In the meantime, if you haven’t visited the following website, it’s one of the best starting points for understanding AI risk. I plan to use ChatGPT to analyze their document and include a few items in my response:
https://airisk.mit.edu/
My thought process, which I’ve mentioned in a previous comment on decision-making regarding risks, has always been to identify all possible worst-case scenarios and focus on either avoiding them or preparing for actions to take if they occur. I approach this issue in the same way. As I’ve said, everything that doesn’t violate physical laws will eventually happen. The timing may vary significantly, but it will happen, regardless of whether it’s the right thing to do. Even if it’s the worst thing for society, someone will attempt it sooner or later.
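As a toy illustration of that worst-case-first habit, here is a minimal sketch in Python (the scenario names and scores below are hypothetical placeholders, not drawn from any real risk register):

```python
# Hypothetical worst-case scenarios, scored by likelihood and impact.
# The aim is simply to rank the worst cases first, then decide for each
# whether to invest in avoiding it or in preparing a response.
scenarios = [
    ("scenario A (hypothetical)", 0.2, 9),   # (name, likelihood, impact)
    ("scenario B (hypothetical)", 0.6, 4),
    ("scenario C (hypothetical)", 0.1, 10),
]

# Sort by a simple likelihood-times-impact severity score, worst first.
for name, likelihood, impact in sorted(
    scenarios, key=lambda s: s[1] * s[2], reverse=True
):
    print(f"{name}: severity {likelihood * impact:.2f}")
```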
This is why Carlo Cipolla’s “Five Laws of Human Stupidity” are so vital for understanding how some of us operate (I am not trying to be harsh here, just stating our species’ tendency):
1. Always and inevitably, everyone underestimates the number of stupid individuals in circulation.
2. The probability that a person is stupid is independent of any other characteristic of that person.
3. A stupid person is someone who causes harm to another person or group while deriving no benefit themselves, and may even suffer losses.
4. Non-stupid people always underestimate the damaging power of stupid individuals.
5. A stupid person is the most dangerous type of person.
To this, I would add a sixth law, which I’ve described above:
Even if something is destructive for most of society, if not all of it except a small group, someone will still attempt it, simply because it’s physically possible and benefits them. This sixth principle applies to many things, and if we end up in a worst-case scenario because of the unbridled pursuit of growth at all costs, we will have proven this sixth law accurate.
Although I don’t consider myself a genius, for certain topics I generally operate with the mindset captured in the following quote, AI being one of them. I see tremendous potential benefits if we manage to control AI, but I’m also deeply concerned about the costs we could face if even one worst-case scenario becomes reality. As history shows, control is often elusive.
As F. Scott Fitzgerald wrote:
“The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind simultaneously and still retain the ability to function.”
More to come later today.
I know Peter Slattery, the lead behind this - https://airisk.mit.edu/ I was hoping he would teach it on my program but he got overwhelmed when they released version 1.
I use their Excel sheet of risks for the work on the code of practice for the EU AI Act.
That quote is fantastic... and the five laws too.
I just started re-reading Christopher Clark's The Sleepwalkers: How Europe Went to War in 1914. It too makes clear how easily many ignore the signs and are then led, like proverbial lemmings, off the cliff!
I will go for a walk now too :-)
If you have not read the following note (to which I also added a counterargument from another author in the reply), let me know your thoughts when you get a chance.
https://substack.com/@microexcellence/note/c-89908219
Walking is great. I shared the comment about why we do not learn from history with someone, and he asked how I wrote it. Here is my answer:
1. Walked for 45 minutes and thought about this topic with others.
2. Came back to the office and started writing; the draft was ready in about 30 minutes.
3. Used Grammarly to proofread and refine for 15 minutes.
4. And then posted.
😄
This was longer than a comment is allowed to be, so I am splitting it into two parts:
Part 1:
This is an interesting paper indeed!
"This required a much longer walk and more time to write than I initially anticipated, but I believe organizing my thoughts and putting them into words was worth the effort. I hope you'll find the same value in reading it as I did in writing it."
The Future of Humanity in an AI-Driven World
As I mentioned in my earlier thoughts, humanity has a track record of achieving anything that does not violate the laws of physics, even if it spans multiple generations. When we ask, "Can we build an intelligence greater than our own?" the answer lies not in "if" but in "when." As far as we know, no physical laws prevent us from doing so. Therefore, it is only a matter of time and effort.
The question then becomes: What will the world look like once we cross this threshold?
The Journey to Building Superior Intelligence
I firmly believe that most of our work today will eventually be automated. There's nothing inherent in most human tasks that a sufficiently advanced system with the right blend of intelligence, dexterity, and tacit knowledge cannot replicate. The current challenge lies in equipping machines with tacit knowledge — the unspoken, intuitive understanding humans acquire through experience. This includes solving edge cases, developing sensors akin to human senses like touch and sight, and building common sense and intuition into machine intelligence.
But history is on our side. Evolution has already achieved this, albeit over millions of years. Once humans recognize that something is possible, we tend to find a way. History is full of examples where perceived "impossibilities" were shattered. Take the famous case of the four-minute mile: before May 6, 1954, running a mile under four minutes was considered impossible. Yet when Roger Bannister broke that barrier, the floodgates opened. Within 46 days, John Landy surpassed Bannister's record, and just a year later, three runners broke the four-minute barrier in a single race. What changed? The belief that it could be done.
We'll likely see a similar progression in building intelligence. Progress will be incremental, but each breakthrough will reinforce the belief that achieving superhuman intelligence is possible. Eventually, we'll cross the threshold — not through a sudden leap but through steady, collective effort. Whether it happens with today's AI models, next-generation architectures, or entirely new paradigms will be a minor detail in the cosmic scale of things as long as humanity survives long enough to see it through.
Humanity's Natural Path: Building Tools to Surpass Ourselves
Landing at the top of the intelligence pyramid was not an evolutionary mistake. If humanity hadn't done it, another species likely would have, though it might have taken longer. From the dawn of our species, we've built tools to augment our abilities and overcome limitations — from stone tools to computers. AI represents the next phase in this journey.
Today, we're approaching the limits of what our biological brains can achieve. The challenges humanity faces in the future will require capabilities far beyond the processing, memory, and problem-solving power of all human brains combined. This makes it evident that next-generation tools — including AI, synthetic biology, quantum computing, and more — will be necessary to help us overcome these challenges, improve human life, and extend our capabilities.
Imagine a future where AI and synthetic biology enable us to run a mile in under three minutes, make us superhuman, or extend our lifespans far beyond current limits. These possibilities are not fantasies but logical extensions of humanity's drive to push boundaries. However, with significant progress comes great responsibility: we must also consider the societal, ethical, and existential implications of these advancements.
Human Purpose in the Post-Work Era
One of the most profound questions we'll face is how humanity will find meaning, purpose, and happiness in a world where most work is automated. For centuries, work has been central to our sense of purpose. Yet, hunter-gatherer societies provide an interesting counterpoint: they worked only a few hours a day to secure food and spent the rest of their time in leisure. If AI and robots take over most of our work, humanity may return to a similar lifestyle, where a few hours of effort suffice, leaving the rest of the day for leisure, creativity, and self-fulfillment.
However, the transition will not be without challenges. Below, I outline potential scenarios for humanity's future in an AI-dominated world:
1. Optimistic Scenarios
These scenarios envision a future where humankind thrives alongside AI:
a) Universal Basic Income (UBI) and a Post-Work Society: Governments implement UBI to ensure everyone's basic needs are met, regardless of automation.
Lifestyle Impact:
- People focus on creative, intellectual, and personal development activities.
- Society values non-economic contributions like art, science, caregiving, and community building.
- Work becomes optional, and individuals pursue passions without financial pressure.
Key Challenges: Requires wealth redistribution, political will, and global cooperation.
b) Flourishing Through Creativity and Leisure: With AI handling mundane tasks, humanity can focus on arts, philosophy, and innovation.
Lifestyle Impact:
- Growth in fields like art, philosophy, and entrepreneurship.
- People spend more time with family, traveling, or engaging in hobbies.
- Increased emphasis on mental and physical well-being.
Key Challenges: Ensuring equitable access to resources and opportunities for self-fulfillment.
c) Collaborative Human-AI Societies: AI augments human abilities rather than replacing them entirely.
Lifestyle Impact:
- Humans focus on strategic, emotional, and interpersonal tasks, while AI handles technical or repetitive work.
- New hybrid professions emerge where humans and AI collaborate.
- Education systems adapt to teach people how to work effectively with AI.
Key Challenges: Need for continuous upskilling and addressing disparities in access to AI technologies.
2. Neutral Scenarios
These scenarios involve a mix of progress and inequality:
a) Stratified Society: A small elite controls AI systems and their wealth, while the majority live modestly on subsidies like UBI.
Lifestyle Impact:
- A small elite captures most of the wealth generated by AI, while the majority live modestly on subsidies.
- Wealth and influence become concentrated in the hands of the few who own AI systems.
- Economic mobility stalls, and the divide between the two groups risks hardening over time.
Key Challenges: Preventing societal collapse and addressing systemic inequality.
b) Shift to Volunteerism: With financial needs met, people engage in unpaid work like community service or environmental conservation.
Lifestyle Impact:
- Increased participation in community service, environmental conservation, and other altruistic efforts.
- Social bonds strengthen as people engage in collective activities.
- Work as a concept redefines itself as a means of social contribution rather than economic survival.
Key Challenges: Ensuring people find purpose and fulfillment without traditional jobs.
3. Pessimistic Scenarios
These scenarios reflect potential dystopian outcomes:
a) Widespread Unemployment and Poverty: Mass unemployment leads to societal unrest and mental health crises without proper safety nets.
Lifestyle Impact:
- A significant portion of the population lives in poverty or precarity.
- Mass unemployment leads to social unrest and mental health crises.
- Wealth becomes concentrated in the hands of a few who own AI systems.
Key Challenges: Preventing societal collapse and addressing systemic inequality.
b) Loss of Purpose and Identity: In a work-free world, humans struggle to find meaning, leading to escapism, depression, or social fragmentation.
Lifestyle Impact:
- Many people feel disconnected, purposeless, or depressed.
- Societal values shift away from productivity, but this transition is difficult for individuals accustomed to work-centric lives.
- Increased reliance on escapism (e.g., virtual reality, entertainment) to cope with existential crises.
Key Challenges: Redefining purpose and fostering mental well-being.
c) Authoritarian Control: AI systems are used for surveillance, propaganda, and control, suppressing individuality and creativity.
Lifestyle Impact:
- Surveillance and AI-driven social credit systems restrict freedoms.
- Citizens rely heavily on state or corporate assistance, with little autonomy.
- Creativity and individuality may be suppressed in favor of conformity.
Key Challenges: Preventing totalitarian regimes and ensuring human rights.
4. Mixed Scenarios
The future is unlikely to be uniform, with regions or groups experiencing different outcomes:
Hybrid Societies: Some nations adopt optimistic policies, while others fall into dystopian patterns.
Global Inequality: Wealthy nations thrive with AI, while poorer nations struggle to adapt.
Cultural Shifts: Societies may come to prioritize relationships, spirituality, and sustainability over material wealth.
Part 2:
The Call to Action
Regardless of the scenario, one thing is clear: As you may have heard in the Star Trek franchise, resistance is futile. Fighting against AI's development is not only impractical but counterproductive. Instead, we must focus on governance, awareness, and cooperation to shape a society that can adapt to massive change.
Unfortunately, decisions about AI are being made by a small group with little external oversight, potentially risking humanity's future. There is no global forum where billions of people, or even a meaningful subset of them, can have a say in the decisions that will impact everyone. This lack of governance is dangerous, not just for worst-case scenarios like an uncontrollable AGI but also for the concentration of power in the hands of a few.
If humanity is to thrive in the age of AI, we must unite to build a more equitable society. This requires collaboration on a global scale, transparent decision-making, and a shared vision of the future. While we cannot predict which scenario will ultimately unfold, we can strive to ensure it maximizes the collective good.
As humans, we've shown time and time again that we can achieve extraordinary things when we set our minds to it. Now is the time to focus on ensuring that AI becomes a tool for the betterment of all humanity.
The below will be an interesting read along with the above:
https://substack.com/home/post/p-155978999
Thank you - I agree, insightful read - I recommend this one too: "The future belongs to people whose work cannot be easily reduced to a dataset, and who can use AI to become even better at what they do." https://pradyuprasad.com/writings/how-to-have-a-career-even-when-o3-drops/
I will carefully digest your notes above. I have read through them but will re-read. I need to spend a day going through all the comments; they are so good and important!
https://substack.com/@microexcellence/note/c-90226444
Well said -- "The equitable distribution of AI-driven benefits will be crucial to avoiding social and political upheaval, as technological shifts have historically caused significant disruptions to labor markets and societal cohesion." This is the big issue. Wherever you are on the barometer - optimist, pessimist or dystopian - change will happen and it needs an equitable distribution.
Incidentally, Tyler Cowen is very optimistic, but his number is a 1% increase in US GDP, per his conversation with Dwarkesh Patel.
By the way, take a look at the new YC list of what they are looking for. One theme: a focus on full automation, not assistance.
https://www.ycombinator.com/rfs
And here you go: another day, another model claiming to beat everyone else on a specific task:
https://substack.com/profile/87662407-marginal-gains/note/c-90010335
And here is an example of the last-mile problems (edge cases) that need to be solved before full automation:
https://substack.com/profile/87662407-marginal-gains/note/c-90013697
Yes, we need those, but first I am still waiting for a GenAI application that I can deploy to users to improve a business workflow, even if it only assists them.