AI is a labor force of polymaths at your fingertips, willing to work for pennies, able to operate at scale, and (as of now) incapable of boredom or complaint. They do not sleep. They do not unionize. They do not ask for meaning. They exist to serve. And that, paradoxically, dangerously, and thrillingly, is precisely what makes them revolutionary.
Since I have commented extensively on this topic in the last several months, I do not see much that I believe is new. However, your expansion of the three scenarios is great. I like that it is more scenario-based and neutral, so we are not dealing with two extremes. Does it provide a probability for each scenario?
I think I will explore a few angles later today that may be covered in the full report. Since you were summarizing key areas, they may not have received more attention, but then again, we all focus on areas based on what is important to us.
First, a couple of questions: Your post states, “This scenario feels like the most probable extension of current trends, at least in the short to medium term.“
Is the short term about the next decade?
Is the mid-term 2035 and beyond, covering the next few decades?
The growing presence of AI in the workforce raises several crucial questions about jobs, purpose, and societal stability. While much of the discussion centers on job loss, automation, and the potential implementation of Universal Basic Income (UBI), the issue is far more complex. To fully understand the implications, we should examine AI’s impact on work from three key angles: society, family, and the individual.
1. What Does Society Want from Work?
At the societal level, work is more than just a means of earning money—it is the backbone of stability and progress. Society depends on work to ensure that:
a) Individuals Are Paid Reasonably: A stable society requires that people earn enough to afford a decent standard of living. This reduces poverty, inequality, and social unrest.
b) People Stay Occupied and Productive: Work provides a sense of purpose and contribution, which keeps individuals engaged and prevents societal stagnation.
c) Generational Progress Continues: Society aims to educate the next generation and equip them with the tools to find meaningful work, fostering progress across generations.
2. What Does a Family Want from Work?
Work is a cornerstone of stability, security, and opportunity for families. Families depend on work to:
a) Meet Basic Needs: Work provides the income necessary for food, shelter, healthcare, and security.
b) Educate and Prepare Children: Parents strive to give their children the tools and opportunities to succeed.
c) Save for the Future: Families aim to build financial safety nets for emergencies, retirement, and long-term peace of mind.
d) Foster Happiness and Fulfillment: Beyond financial stability, families want their children to find meaningful work that aligns with their passions and values.
3. What Does an Individual Adult Want from Work?
For individuals, work is not just a source of income—it’s a source of identity, growth, and fulfillment. At the highest level, individuals look to work to:
a) Earn a Living: People need enough income to care for themselves and their loved ones.
b) Find Purpose and Meaning: Many seek work that contributes to something valuable or impactful.
c) Build Expertise and Recognition: Key motivators include developing skills and being respected for contributions.
d) Grow and Learn: Work provides personal and professional development opportunities.
However, this is not universal. People’s aspirations vary:
- For Some, Work Is Survival: They work to meet basic needs, with little concern for fulfillment or growth.
- For Others, Work Is a Career: They see work as a path to advancement, achievement, and long-term goals.
- For a Few, Work Is a Calling: They pursue work that aligns deeply with their values and passions, finding profound meaning in their efforts.
Unlike the media and government, which are primarily focused on the number of jobs, I have always thought of the job market as a three-legged stool, supported by:
1. The Number of Jobs: Are there enough jobs for everyone?
2. The Quality of Jobs: Do these jobs provide meaning, purpose, and satisfaction?
3. The Pay of Jobs: Do these jobs pay well enough to support individuals and families?
If we apply AI to all of the above, we start to see the significant challenges that society, families, and individuals will face. AI threatens to destabilize all three legs of this stool:
- Quantity of Jobs: AI will create new jobs, but likely not at the scale required to replace the ones it eliminates.
- Quality of Jobs: Many AI-driven roles may lack the creativity and fulfillment that people value, reducing overall job satisfaction.
- Pay of Jobs: Automation could drive wages down in many sectors, exacerbating income inequality and financial insecurity.
I am not saying a large share of the population won't be happy to receive UBI so they can get away from the drudgery of their work. However, there will still be a significant population that will despise AI for taking a lot away from them, unless something equally or more meaningful replaces it. The challenge is that we do not know what that is, and uncertainty in any human endeavor creates fear and resistance to change. That is what large-scale AI deployment will face when it starts taking humans out of the driver's seat at work.
Unfortunately, the above is not the whole story. AI’s impact on work raises deep societal and philosophical questions (I have written about the points below under your posts and other people’s posts):
1. The Purpose of Education:
- If jobs are scarce, what is the purpose of education?
- Should education focus on fostering creativity, adaptability, and emotional intelligence rather than workforce preparation?
2. Wealth Redistribution:
- As AI increases productivity, how do we ensure the gains are distributed equitably?
- Developing countries, which often rely on labor-intensive industries, may face unique challenges as they compete with AI-driven economies.
3. Finding Purpose Beyond Work: How do we redefine purpose and identity if work is no longer a central source of them?
- UBI may address financial needs but does not provide the fulfillment that meaningful work offers.
4. Global Inequality: Developing countries may struggle to implement UBI or compete in a world dominated by AI. How can we ensure they are not left behind?
I will write more later, as I have not talked much about the role of the government and the tech industry. I will end with a quote from Franklin D. Roosevelt:
"The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough for those who have too little."
You raise some interesting and rather complicated questions here. We would do well to prioritize the issues you raise and address the most pressing one first: income/wealth distribution.
I'm a big fan of a universal basic income (UBI). Once that's in place, the other issues will be much easier to contend with.
The vast majority of people don't find meaning in work. Indeed, just the opposite: most jobs vacuum the meaning out of life. While it's true that those fortunate enough to be in a chosen profession find meaning in their work, they represent a minority.
With a UBI in place, people are free to decide what would give them meaning. Certainly education could - and should - be transformed into something more meaningful than mere preparation for a higher level job. This could be helpful for people to find meaning and purpose for themselves.
I always like to say the true measure of civilization isn't how powerful the powerful become but how effectively we empower the disempowered.
That great quote from FDR you end with reminds me of this.
I will reply in detail during the day - in the meantime, your quote from FDR beautifully encapsulates the ethical imperative. The progress AI brings should be measured by its ability to uplift all segments of society, particularly the most vulnerable. This requires proactive, thoughtful, and globally coordinated efforts from governments, industry, academia, and civil society.
Your concern that AI threatens all three legs of the job-market stool (quantity, quality, and pay) is a central anxiety that the report grapples with.
The report notes uncertainty about the net effect on jobs but highlights historical precedents where technology has both displaced and created work. Your point that new jobs may not emerge at the required scale is a valid concern I share.
The "erosion of expertise" theme from the report and my thoughts, speaks directly to your concern about job quality and fulfillment. If AI automates engaging tasks or deskills roles, job satisfaction is indeed at risk. The report suggests intentional design choices can influence whether AI augments or diminishes job quality (Finding 7).
The report (Finding 8) acknowledges that "even if AI yields significantly higher worker productivity, the productivity gains might fall unevenly across the workforce and might not be reflected in broad-based wage growth," aligning with your fears about wage depression and exacerbated inequality.
Your point about UBI addressing financial needs but not necessarily the human need for purpose and meaning is critical. The resistance AI might face if it's perceived as merely "taking humans away from the driving seat at work" without offering equally or more meaningful alternatives is a significant hurdle.
I found this to be an interesting and engaging article summarising the state of play with AI, with an emphasis on where the rubber hits the road, i.e. the world of work, and the knock-on effects of majorly changing that world of work.
As I've said in earlier comments, technology is a form of power (like the emerging steam engine in its time). The current power structures are seriously asymmetrical in relation to the populace they are supposed to serve and govern, and that imbalance of power is worsening. So the likely outcome will be 'more of the same', with current power-brokers doing their best to appropriate the power of AI for their own agendas.
However, I believe there are a sufficient number of savvy people amongst the masses who will find ways to use AI in a life-enhancing way and maybe, just maybe, create localised decentralised economies where the old-order top-down structures simply become irrelevant.
Thank you Joshua. Technology, particularly one as potent as AI, is indeed a form of power. Your concern that existing power structures might seek to co-opt AI to further their agendas, potentially exacerbating current asymmetries, is a very real one and reflects some of the underlying anxieties discussed in the National Academies' report. The report itself underscores that "The outcome for worker and societal welfare necessarily depends on the legal, regulatory, and bargaining regimes in place," and without well-functioning institutions, "market outcomes" might not be "socially desirable ones." This speaks directly to your point about the current imbalance of power.
However, I share your cautious optimism regarding the potential for grassroots adoption. The idea of "savvy people amongst the masses" finding life-enhancing ways to use AI, potentially fostering more localized and decentralized economies, is compelling. This aligns with the more hopeful "Scenario 3" which I mentioned, where AI could lead to a "reinstatement of the value of mass expertise in new domains" through augmentation. The report’s emphasis on "Opportunities to Influence" and the call for broad societal engagement, from policymakers to individual workers, suggests that the path of AI is not entirely predetermined by top-down forces.
Your vision of AI contributing to making "old-order top-down structures simply become irrelevant" is powerful, even as we remain vigilant about the risks. Achieving that positive outcome will likely depend on many of the factors the report highlights, including access, education, and conscious policy choices that aim to empower individuals rather than just centralize control.
That's assuming the old-order top-down structures don't succeed in making us simply become irrelevant - if not extinct.
We have patchwork, the network state, and the Dark Enlightenment now. I only heard of the Dark Enlightenment last month, and I have downloaded the book, but have not read it yet. In case you do not know:
The Dark Enlightenment, also called the neo-reactionary movement or neoreactionarism, is an anti-democratic, anti-egalitarian, and reactionary philosophical and political movement. A reaction against Enlightenment values, it favors a return to traditional societal constructs and forms of government such as absolute monarchism and cameralism.
I also commented on an idea where nomads who work remotely around the world would form a community similar to one of the above, spread across different countries, by buying land and other rights. I got a partial response from the author. They are relying too much on the idea that money will buy them security, over the fact that there is always more than money, which explains why people behave in a certain way. I will share it here once I find it.
"They are relying too much on the idea that money will buy them security, over the fact that there is always more than money, which explains why people behave in a certain way. "
Yes, the best security I believe is to have a patch of land, your own water supply, and grow your own food. Oh, and a debt-free roof over your own head, and someone to love and share it all with. Beyond that, building 'social capital' in local community help-each-other activities.
Yes, I've heard of the Dark Enlightenment and neoreactionarism, AKA NRx. There's quite a lot of discussion about it on Substack - and about its originator "Mencius Moldbug", AKA Curtis Yarvin. He's the pseudo-intellectual mentor of Silicon Valley's elite - including MuskRat himself. The most disgusting excuses for human beings ever to walk the Earth.
Here is the book from Nick Land about the Dark Enlightenment: https://keithanyan.github.io/TheDarkEnlightenment.epub/TheDarkEnlightenment.pdf
Found it:
Post: https://www.elysian.press/p/digital-nomad-network-states?utm_campaign=posts-open-in-app&triedRedirect=true
My comments and the author’s response: https://www.elysian.press/p/digital-nomad-network-states/comment/109831528
Are you talking about something similar to the network state defined by Balaji in the following book, or something different?
https://tinyurl.com/2xnxfeey
That's certainly one of their ideas. Let's all hope it falls flat on its face.
Scenario 3 could be aligned with Balaji - but I have not bought into crypto yet
We are not ready for any of these other kinds of city-state structures. These ideas require a much more stable and mature world.
I'd not heard of Balaji and "The Network State". I was actually thinking of Buckminster Fuller and a comment he apparently made: rather than 'fight the system', it is better to be inventive (as he very much was) and invent/develop new ideas, ways, and structures that make the old structures and ways of thinking irrelevant.
In this way, small-scale inventors/inventions are the way to go - and now, with the amplification possible via the internet and AI, 'small scale' can more easily be 'scaled up'. I think Fritz Schumacher's "Small is Beautiful" relates to these ideas from a different angle, approaching them via small-scale localised technologies that empower local communities. This hit home to me recently with the power outage across the whole of Spain and Portugal 10 days ago.
Menace or paradise? Neither one nor the other. In the very long run there will be another big thing after AI, perhaps in 25, 50, or 100 years.
The discussion reminds me a little of the early years of TV. Edward Murrow said it is our task to make TV clever, or just leave it as it is: a dumb box with lights and cables. AI likewise offers chances. I’d like to put forward two theses: 1. There is no useful collective prediction or development. It all depends - locally, and regarding different groups, educational levels, and fields of expertise. 2. Education could and should change dramatically - with or without AI, and especially with AI. We should have better, decentralized education, with far greater variety, specialization, and competitive approaches. That includes different personnel and different educational careers.
Finally, analytics as a prerequisite for AI will become even more important.
I like the thought process, Michael, thank you. That is an excellent analogy to the early days of television; Edward Murrow's challenge to make TV "clever" rather than leave it a "dumb box" aligns well, though I think narrowly, with the choices society faces with AI now. A central theme from the National Academies' report is that the future of AI is not predetermined but will be shaped by the decisions and efforts we make.
Responding to your theses:
1. I largely agree with this. The National Academies' report itself emphasizes the "great deal of uncertainty" and "wide error bands" in forecasting AI's precise trajectory. I think that AI's adoption and impact will be heterogeneous, varying significantly across different economic sectors, firms, and geographical regions, and certainly impacting different demographic groups and skill levels in distinct ways. The report also highlights that outcomes aren't just about the technology but are deeply intertwined with "demographic, social, institutional, and political forces," which are inherently local and varied. So, a one-size-fits-all prediction is unlikely to be accurate or useful; understanding these localized dynamics will be key.
2. This is a powerful point and one that strongly aligns with the findings of the report and my thinking. There's a clear consensus that education systems require significant transformation. AI will not only drive new demands for skills, necessitating lifelong learning and shifts in what is taught, but can also be a tool to deliver education in more personalized, adaptive, and engaging ways. Your call for "better, decentralized education, with far greater variety, specialization and competitive approaches" aligns well with the potential for AI to help tailor learning pathways to individual needs and emerging fields, moving away from standardized models. This could open up new types of educational careers and methodologies focused on cultivating critical thinking, adaptability, and human-AI collaboration. We must do better with education.
Finally, your observation that "analytics as a prerequisite for AI will become even more important" is spot on. AI systems are fundamentally data-driven, and the ability to gather, interpret, and act on data analytics is foundational to developing, deploying, and even understanding the impact of AI. The report's own emphasis on the need for better measurement and data infrastructure to track AI's effects reinforces the centrality of strong analytical capabilities across the board. I linked to Anthropic's example in the post.
Thanks again for adding these thoughtful points to the discussion.
Thank you very much indeed for your kind and insightful remarks. Let me state one axiom of prognosis: it is impossible to predict what is going to happen further in the future than two years. Pattern predictions can go further and beyond, of course.
The way I see the current status of analysis, simplified: we lack data literacy as much as we lack true analytics done in a systemic way, causal and dynamic. I suppose the latter is a task too difficult for AI for the foreseeable future, at least at a very advanced level. And there we are again, Mr. Kahneman and Mr. Gigerenzer 😀🤟
I would argue that while the loom did not kill the artisan weaver, it did not make the weaver an operator either. They were still artisans. What changed was the advent of the automated, production-line looms, which didn't require an artisan weaver at all, but anyone who could learn to operate the machinery from computer-generated artwork - which now does not even require an artist to create the artwork. Everything is computer-generated, heading toward total dependence on AGI.
You are absolutely right to point out the distinct stages of technological impact. The initial mechanical looms, while increasing productivity, still required significant artisan skill. It was indeed the later, more automated production-line looms, and now computer-generated design and control, that led to a more profound shift in the required human input, moving away from deep craft towards operation, and in some cases, towards near-full automation of the creative process.
This detailed perspective is highly relevant to our current discussions about AI. Your point highlights a crucial pattern:
- Early versions of a technology might augment existing artisans or experts.
- More advanced versions automate more of the process, shifting the human role towards operation or management of the technology.
- Eventually, as you suggest with AI, the technology could become capable of handling nearly all aspects, from creation to execution, leading to a much deeper displacement of traditional human skills and potentially a near-total dependence on advanced AI or AGI.
This progression from tool-assisted artisan to operator of automated systems, and potentially to a system where even the design input is AI-generated, mirrors the fears many have about the trajectory of AI in knowledge work. It’s not just about individual tasks being automated, but potentially entire chains of expertise and creative endeavor.
The National Academies' report touches on this when it discusses the "erosion of expertise" and the possibility that AI might "substitute for human expertise, eroding the value of such expertise." Your example with the loom illustrates that this erosion can be a gradual process with distinct phases, each having different implications for the workforce.
The concern now is that AI, particularly with the speed of development in generative models and the push towards AGI, could compress these phases or take them to an unprecedented level, as you've described.
It underscores the importance of understanding not just what AI can do today, but the potential direction of its development and the choices society makes about how deeply integrated and autonomous these systems become in various fields.
The real danger starts in your own profession, Colin. AI is already being used by students to write papers, etc. Taken to its logical conclusion, there will eventually be no need to learn anything or to use critical thinking, because AI will do it for you, and robotics will step in to do the heavy lifting. The Tech Bros talk up all this leisure time we will have, but who is going to pay for that? Not them, surely. They are the biggest tax avoiders as it is, so it's unlikely they will suddenly develop a conscience and take pity on the growing welfare state; and with no worker bees paying taxes from their bygone jobs, who will pay for UBI? Of course, Musk's answer is to ship us all off to Mars and develop planet Earth into a luxury resort for the 1% - he hasn't specifically talked about the latter part, but I suspect that is the goal.
I agree, and I am concerned about AI's use in education and the potential downstream effects on critical thinking and learning. The list of jobs facing disruption is massive, I have no doubt. I've always struggled with that point on UBI: how will governments be able to pay for it, and who will be able to afford anything? There are many good papers and articles on this.
... I am not going to Mars :-) But of course that could be his plan - although Larry Page, co-founder of Google, told Musk he is too much of a species lover.
The threat of disruption and mass unemployment is real, as is the loss of personal agency and thinking. So we must become high-agency people ... they are the ones who will thrive.
While many can indeed become high-agency people, we need to take care of those who can't.
When Larry Page told Musk he was too much of a species lover, did he specify to which species he was referring?
Humans. Page was critical of Musk caring too much for people.
Absolutely, we must take care of everyone. This is why this report has value: it urges collective action and responsibility.
I was being sarcastic about the species! Although, I am not sure his recent actions match Page's observations.
An excellent overview of the situation and potential paths forward. Thanks for sharing this summary, Colin.
Personally, I’m quite confident that the third path is the most likely outcome. It aligns with what we’ve done historically, using technology to amplify human capability rather than replace it entirely.
That said, I expect the transition to be uneven. Some individuals will be left behind, while others will find ways to harness these tools to enhance their lives and careers. Demographics will likely play a significant role in determining who advances and who doesn’t. While it's not a simple generational divide, older individuals may struggle to adapt, whereas younger ones will integrate AI more naturally, just as they did with smartphones and the internet. Education can help shape engagement, but ultimately, I think our environment will have a greater influence. Children born today will grow up in a world where AI simply is part of the fabric of daily life.
Perhaps, in many ways, our concerns reflect a crisis of worldview more than one of technology. It’s the older generations who are forced to let go of established frameworks and adopt entirely new ways of thinking. That may be why we’re talking about it so passionately ourselves, while many 20-somethings are already busy building businesses with AI at their core.
I spent a few hours this weekend listening to the Y Combinator podcast and was struck by how many young founders, especially from countries with younger populations like India and parts of Africa, are using AI to build the future. Perhaps the West feels more anxiety about AI because of our aging demographics.
The most exciting part? While the big tech players are dominating obvious use cases like customer service bots, search enhancement, and productivity tools, there are still countless opportunities to develop niche, vertical solutions. Small teams can now empower small and medium-sized businesses in ways that were never possible before.
In the end, I don’t think humans will be sidelined by AI. The greater risks may lie elsewhere - in shifting demographics, the unraveling of long-standing institutions like marriage and family, and deeper questions of rights, responsibilities, and social cohesion. Perhaps AI isn’t the crisis, but the distraction. While we fixate on its potential threats, we may be overlooking the more pressing issues that are quietly reshaping our world. Ironically, if we shift our focus, we might discover that AI has arrived just in time - not as the problem, but as a tool to help us address the challenges we’ve long ignored.
Thank you Susan for a refreshingly optimistic take on AI's potential trajectory and our societal response to it. I appreciate you laying out your confidence in the 'third path' and your nuanced view on the transition.
Your point about historical precedent, where technology has often augmented rather than entirely replaced human effort, is a valuable one. It aligns with the more hopeful Scenario 3, which envisions a "reinstatement of the value of mass expertise in new domains" through human-AI collaboration. The report itself, while cautious, does highlight the potential for AI to "enable workers to use their expertise more effectively."
The idea that younger generations, growing up with AI as a native part of their environment, will integrate it more naturally is quite plausible, similar to previous technological adoptions. This also connects to the report's emphasis on the need for education to adapt and foster lifelong learning to help all demographics navigate these shifts. Your observation that our current concerns might reflect a "crisis of worldview" for those accustomed to established frameworks, more than just a technological crisis, is a very insightful framing. The dynamism you see in younger founders, especially in regions with younger populations, certainly suggests a different lens through which AI is being approached.
The potential for small teams to leverage AI for niche, vertical solutions, empowering small and medium-sized businesses, is indeed an exciting prospect. This aligns with the democratizing potential of technology, which could counter some of the centralizing forces often associated with powerful new tools.
Your concluding thoughts are particularly provocative: the idea that AI might be a 'distraction' from more pressing, underlying societal issues, but also, paradoxically, a tool that could help us address those very challenges if we shift our focus. This turns the common narrative on its head. The National Academies' report does urge us to shape AI "in ways that are consistent with societal values and goals," and perhaps identifying and tackling those deeper societal issues you mention is a key part of defining those goals. If AI can help us manage complexities in demographics, institutional reform, or even social cohesion, then its arrival could indeed be timely in unforeseen ways.
It’s a perspective that encourages looking beyond the immediate anxieties towards a more integrated and potentially constructive role for AI in broader societal evolution.
Thank you for that line of thinking.
I have asked myself this same question over and over. If AI can write a (better) article than I can on human progress, why do I spend many hours doing so myself?
My answer is that I fear expertise erosion, which is the crux of this article. I return to the old quote from George Orwell: “If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.”
We mustn’t allow others to think for us, not even AI. Tools should always augment and expand our skills, not be allowed to diminish or erode them.
Thank you JK. That question, 'If AI can write a (better) article than I can... why do I spend many hours doing so myself?' is one that I suspect many creators, thinkers, and professionals are grappling with in very real terms.
Your answer, pointing to the fear of 'expertise erosion,' precisely identifies the core concern that the National Academies' report, and my post, attempt to convey. The Orwell quote is incredibly apt and serves as a stark warning: the act of thinking, articulating, and creating is intrinsically linked to our cognitive abilities and our agency. To abdicate that process is indeed to risk having our thinking done for us. My biggest fear for society!
I wholeheartedly agree with your concluding sentiment. This is a critical distinction. The National Academies' report suggests that society has a choice in how AI is developed and deployed, whether it's primarily for automation and substitution, or for augmentation and human-AI collaboration.
The challenge, as your comment so clearly articulates, is ensuring we make choices that foster the latter. It’s about cultivating a relationship with these powerful new tools where they serve as catalysts for deeper human thinking and enhanced capabilities, rather than as replacements that lead to atrophy. Continuing to engage in the hard work of writing, thinking, and creating, even when AI offers a seemingly easier path, might be one of the most important ways we preserve and strengthen those uniquely human capacities.
Underlying all of the speculation about where AI will take us are the power structures that exist in society. We're ruled (mostly) by psychopaths whose primary interest is total domination of resources. While they're concentrated in Silicon Valley, there are also the more established fossil fuel robber barons - the Koch brothers, now just Charles, come to mind - as well as billionaire moguls in other industries, such as big pharma, big agra, and of course, Wall St.
They influence political leaders through various PACs making massive campaign donations, and getting away with it thanks to the Supreme Court ruling in the infamous "Citizens United" case. They possess depraved indifference to human life. Hence they don't concern themselves with "trivial" issues such as the definition of work and human purpose. To them, those of us who aren't "members of the club" are mere insects to be exterminated as soon as possible.
Ironically, they're writing their own epitaphs right along with ours, as the problems with burgeoning AGI begin to emerge and converge with all of the other existential threats these oligarchs are causing - from global warming to antibiotic-resistant bacteria, to the next viral pandemic, etc. All exacerbated by a growing authoritarian streak around the world, but especially here in the U.S.
Our "president" is being influenced by the likes of MuskRat, Zuck, Bozo Bezos, Altman, Andreessen, Theil, who in turn are influenced by the likes of pseudo-intellectual Mencius Moldbug, AKA, Curtis Yarvin.
My apologies for sounding pessimistic, but this is what I see happening before my eyes.
Yes, that is a critical and troubling dimension: the undeniable influence of existing power structures and the potential for AI to be shaped by narrow interests rather than broader societal well-being.
The report, while more measured in its language, certainly doesn't shy away from the idea that the future of AI and work is not solely a technological question. It explicitly states that outcomes will depend heavily on "demographic, social, institutional, and political forces" and that "Absent well-functioning institutions, the committee does not presume that market outcomes will be socially desirable ones." This aligns with your core point that the agendas of those who wield significant economic and political power will play a major role.
The report's call for robust "legal, regulatory, and bargaining regimes" can be seen as an acknowledgment of the need for checks and balances against the kind of unchecked influence you describe. Issues like the erosion of expertise, worker surveillance, and deskilling, which I wrote about, become even more acute if the development and deployment of AI are driven primarily by motives that deprioritize human well-being and equitable outcomes.
While the report attempts to outline "Opportunities to Influence" and pathways towards a more human-centric AI future, your comment serves as a stark reminder of the immense pressures and systemic challenges that stand in the way of achieving such a vision. The convergence of AI with other existential threats, as you point out, only heightens the stakes and the urgency for broad societal vigilance and action.
It's understandable why you feel pessimistic given the current landscape. The challenges are indeed formidable, and ensuring that AI serves humanity broadly, rather than consolidating power or exacerbating existing problems, will require a level of collective will and effective governance that is, as you imply, currently under significant strain. We must be vocal if we want a better future.
Indeed, we have a long road ahead of us. Where that road will ultimately lead - to a new paradigm for civilization or off a cliff - will largely be determined by randomness, unfortunately.
In the spirit of the closing sentence in your reply to my comment:
It's a great reminder of the imperative that we continue to Rise! Resist! ✊✊✊
The next nationwide rally is June 14, yes >that< June 14, be there or be square!
Let's ruin Chump's B'day. We need 3.5% of the population, or around 12,000,000 people, to be present. So bring all your friends and families, bring your pets. Spread the word as far and wide as possible!
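(As a rough check on that math, assuming a U.S. population of roughly 340 million: 0.035 × 340,000,000 ≈ 11.9 million, so the 12,000,000 figure holds up.)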
https://www.nokings.org/
AI is a labor force of polymaths at your fingertips, willing to work for pennies, able to operate at scale, and (as of now) incapable of boredom or complaint. They do not sleep. They do not unionize. They do not ask for meaning. They exist to serve. And that, paradoxically, dangerously, and thrillingly, is precisely what makes them revolutionary.
Since I have commented extensively on this topic in the last several months, I do not see much here that I believe is new. However, your expansion of the three scenarios is great. I like that it is more scenario-based and neutral, so we are not dealing with two extremes. Does it provide a probability for each scenario?
I think I will explore a few angles later today that may be covered in the full report. Since you were summarizing key areas, they may not have received as much attention, but then again, we all focus on areas based on what is important to us.
First, a couple of questions: Your post states, “This scenario feels like the most probable extension of current trends, at least in the short to medium term.”
Does the "short term" above mean the next decade?
Does the "medium term" mean 2035 and beyond, over the next few decades?
The growing presence of AI in the workforce raises several crucial questions about jobs, purpose, and societal stability. While much of the discussion centers on job loss, automation, and the potential implementation of Universal Basic Income (UBI), the issue is far more complex. To fully understand the implications, we should examine AI’s impact on work from three key angles: society, family, and the individual.
1. What Does Society Want from Work?
At the societal level, work is more than just a means of earning money—it is the backbone of stability and progress. Society depends on work to ensure that:
a) Individuals Are Paid Reasonably: A stable society requires that people earn enough to afford a decent standard of living. This reduces poverty, inequality, and social unrest.
b) People Stay Occupied and Productive: Work provides a sense of purpose and contribution, which keeps individuals engaged and prevents societal stagnation.
c) Generational Progress Continues: Society aims to educate the next generation and equip them with the tools to find meaningful work, fostering progress across generations.
2. What Does a Family Want from Work?
Work is a cornerstone of stability, security, and opportunity for families. Families depend on work to:
a) Meet Basic Needs: Work provides the income necessary for food, shelter, healthcare, and security.
b) Educate and Prepare Children: Parents strive to give their children the tools and opportunities to succeed.
c) Save for the Future: Families aim to build financial safety nets for emergencies, retirement, and long-term peace of mind.
d) Foster Happiness and Fulfillment: Beyond financial stability, families want their children to find meaningful work that aligns with their passions and values.
3. What Does an Individual Adult Want from Work?
For individuals, work is not just a source of income—it’s a source of identity, growth, and fulfillment. At the highest level, individuals look to work to:
a) Earn a Living: People need enough income to care for themselves and their loved ones.
b) Find Purpose and Meaning: Many seek work contributing to something valuable or impactful.
c) Build Expertise and Recognition: Key motivators include developing skills and being respected for contributions.
d) Grow and Learn: Work provides personal and professional development opportunities.
However, this is not universal. People’s aspirations vary:
- For Some, Work Is Survival: They work to meet basic needs, with little concern for fulfillment or growth.
- For Others, Work Is a Career: They see work as a path to advancement, achievement, and long-term goals.
- For a Few, Work Is a Calling: They pursue work that aligns deeply with their values and passions, finding profound meaning in their efforts.
Unlike media and government, which focus primarily on the number of jobs, I always thought of the job market as a three-legged stool, supported by:
1. The Number of Jobs: Are there enough jobs for everyone?
2. The Quality of Jobs: Do these jobs provide meaning, purpose, and satisfaction?
3. The Pay of Jobs: Do these jobs pay well enough to support individuals and families?
And if we apply AI to all of the above, we start to see the significant challenges that society, families, and individuals will face. AI threatens to destabilize all three legs of this stool:
- Quantity of Jobs: AI will create new jobs, but likely not at the scale required to replace the ones it eliminates.
- Quality of Jobs: Many AI-driven roles may lack the creativity and fulfillment that people value, reducing overall job satisfaction.
- Pay of Jobs: Automation could drive wages down in many sectors, exacerbating income inequality and financial insecurity.
I am not saying a large population will not be happy to get UBI so they can escape the drudgery of their work. However, there will still be a significant population that will despise AI for taking away so much from them, unless something equally or more meaningful replaces it. The challenge is that we do not know what that replacement is. Uncertainty in any human endeavor creates fear and resistance to change, and that is what large-scale AI deployment will face when it starts taking humans out of the driver's seat at work.
Unfortunately, the above is not the whole story. AI's impact on work raises deep societal and philosophical questions (I have written about the points below under your posts and other people's posts):
1. The Purpose of Education:
- If jobs are scarce, what is the purpose of education?
- Should education focus on fostering creativity, adaptability, and emotional intelligence rather than workforce preparation?
2. Wealth Redistribution:
- As AI increases productivity, how do we ensure the gains are distributed equitably?
- Developing countries, which often rely on labor-intensive industries, may face unique challenges as they compete with AI-driven economies.
3. Finding Purpose Beyond Work: How do we redefine purpose and identity if work is no longer a central source of either?
- UBI may address financial needs but does not provide the fulfillment that meaningful work offers.
4. Global Inequality: Developing countries may struggle to implement UBI or compete in a world dominated by AI. How can we ensure they are not left behind?
I will write more later, as I have not talked much about the role of the government and the tech industry. I will end with a quote from Franklin D. Roosevelt:
"The test of our progress is not whether we add more to the abundance of those who have much; it is whether we provide enough for those who have too little."
You raise some interesting and rather complicated questions here. We would do well to prioritize the issues you raise and address the most pressing one first: income/wealth distribution.
I'm a big fan of a universal basic income (UBI). Once that's in place, the other issues will be much easier to contend with.
The vast majority of people don't find meaning in work. Indeed, just the opposite: most jobs vacuum the meaning out of life. While it's true that those fortunate enough to be in a chosen profession find meaning in their work, they represent a minority.
With a UBI in place, people are free to decide what would give them meaning. Certainly education could - and should - be transformed into something more meaningful than mere preparation for a higher-level job. This could help people find meaning and purpose for themselves.
I always like to say the true measure of civilization isn't how powerful the powerful become but how effectively we empower the disempowered.
That great quote from FDR you end with reminds me of this.
Beautiful and so true - "the true measure of civilization isn't how powerful the powerful become but how effectively we empower the disempowered."
I will reply in detail during the day - in the meantime, your quote from FDR beautifully encapsulates the ethical imperative. The progress AI brings should be measured by its ability to uplift all segments of society, particularly the most vulnerable. This requires proactive, thoughtful, and globally coordinated efforts from governments, industry, academia, and civil society.
Your concern that AI threatens all three legs of the job market stool - quantity, quality, and pay - is a central anxiety that the report grapples with.
The report notes uncertainty about the net effect on jobs but highlights historical precedents where technology has both displaced and created work. Your point that new jobs may not emerge at the required scale is a valid concern I share.
The "erosion of expertise" theme from the report and my thoughts, speaks directly to your concern about job quality and fulfillment. If AI automates engaging tasks or deskills roles, job satisfaction is indeed at risk. The report suggests intentional design choices can influence whether AI augments or diminishes job quality (Finding 7).
The report (Finding 8) acknowledges that "even if AI yields significantly higher worker productivity, the productivity gains might fall unevenly across the workforce and might not be reflected in broad-based wage growth," aligning with your fears about wage depression and exacerbated inequality.
Your point about UBI addressing financial needs but not necessarily the human need for purpose and meaning is critical. The resistance AI might face if it's perceived as merely "taking humans away from the driving seat at work" without offering equally or more meaningful alternatives is a significant hurdle.