Very interesting read, but the truth is that it is highly unlikely that humans and ASI will co-exist, so working towards that goal (which is the GOAL) is suicide.
Also, when the software pushes out wrong information, it isn’t a “hallucination”; it is an error, a mistake. Trying to soften this fact reveals the hubris and ultimate idiocy behind this entire endeavor.
Despite the headline, this seems like a balanced write-up. I agree that companies should NOT think “competence can be simulated, and thus substituted”.
AI still has a long way to go, and governance systems have an even longer way. But of course the new systems can’t be underestimated. Good stuff.
Yes this is a major problem 😎
It's an error. This is undeniable.
In fields of engineering other than software, we're used to dealing with noise. A material spec'd to a given standard will still vary, and when we design something, we account for that reality.
In software, until now, everything has been deterministic, and we programmed with that expectation. For the first time, we have software tools that aren't deterministic.
I believe we need to adapt to this.
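To make this concrete, here is a minimal sketch in Python of what adapting could look like: testing a non-deterministic tool the way an engineer specs a noisy material, with a tolerance and a pass rate instead of an exact-match assertion. The `generate` function and all the numbers are hypothetical stand-ins, not any particular model or benchmark.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a sampled LLM call (temperature > 0):
    usually correct, occasionally off-format."""
    return random.choices(["4", "four"], weights=[97, 3])[0]

def meets_spec(prompt: str, accept, trials: int = 200, min_pass_rate: float = 0.9) -> bool:
    """Spec the tool like a noisy material: demand a pass *rate* over many
    trials, within a tolerance, rather than identical output on every call."""
    passes = sum(accept(generate(prompt)) for _ in range(trials))
    return passes / trials >= min_pass_rate

# Deterministic software lets us assert an exact output on every call.
# Non-deterministic software asks us to assert that the *distribution*
# of outputs is acceptable, the way a material spec tolerates variance.
print(meets_spec("What is 2 + 2?", accept=lambda out: out.strip() == "4"))
```

The toy numbers don't matter; the shift from exact-match thinking to tolerance thinking does.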
To your point that AGI and humans can't coexist: I'm not convinced one way or the other. Perhaps we'll have a symbiotic relationship, like the crocodile and the Egyptian plover.
Your optimism that ASI will allow us to adapt to its arrival is a curious take.
Who do you imagine is the plover?
I recall reading an article published in the New York Times more than a century ago about the ills of the refrigerator and how using ice was much more natural. We can debate the merits, but overall, refrigeration has been good for humanity.
To your point, maybe we'll switch roles: humans as the plover sometimes, and AGI at other times.
Ultimately, even governments will not stop humans from exploring. If AGI is possible, we're going to create it. I'm of the opinion that the current approaches will not lead to AGI. We just have super powerful differential engines.
I haven't read anyone suggesting that LLMs actually understand. The fact that it's trivial to bypass their security measures just confirms the point :)
This article doesn't exist. In fact, people welcomed artificial refrigeration and were mostly concerned about its price.
https://www.vpostrel.com/articles/notes-on-progress-artificial-flavoring-1
They may be paywalled but here are some early NYT reactions:
1870 article: https://nyti.ms/3FvKwle
Letter to the editor: https://nyti.ms/3FvKwle
No AI was used in this research but it would have been difficult before searchable newspaper databases.
You are comparing a rock to a landslide.
A hardware component isn’t anything like an alien intelligence.
AGI is not the goal; ASI is, and when it arrives there will be no negotiation that will allow humanity to survive.
There will be no need for a plover as the machines will have no worries about lice.
If it happens in our lifetime, we'll have a front row seat :)
Your enthusiasm for the arrival of humanity’s extinction is bizarre to say the least.
“A deliberate rejection of premature certainty” is wise. Thanks for this.
Very glad it connected with you, thank you for sharing.
Sam Altman’s prediction that billion-dollar companies will soon be staffed by a single individual and a swarm of AIs is not science fiction.
The reality is that if this happens, that one individual will be hiring hundreds of armed and trained people to guard them and their family, to prevent them from being kidnapped and killed.
If we get to a point where 98% of the population doesn’t have a means to provide for themselves, they’ll make one through violence… as all of human history has shown us. The rich know this, which is why they’re building bunkers and desperately trying to leave the planet.
All of these guides on how we are supposed to interact with the Machine and the kinds of human knowledge that the Machine cannot replicate are clearly thought out by and for people whose lives and careers are already well underway, who have already spent years and even decades cultivating the expertise that they expect to guide their interaction with and training of the Machine. All this by its very function denies future generations the opportunity to cultivate that expertise.
What child born in the last ten years who wants to become a nurse--whatever that would even be by the time they enter a workforce that may not even exist--is going to build a diagnostic hunch over thousands of bodies and hours when Machines have long been trained to do all that by the current technocrat elite? You might say the nurse will gain those hours watching the machine, but at that point the nurse is no longer the guide but the guided, with no experience to draw from that wasn't curated by the very system this imagines them shepherding.
AI is not merely a breathtaking and unfathomable power and wealth grab by the already wealthiest and most powerful ("most dexterous," as though that inborn talent has nothing to do with class), it is also setting up a profound and irrevocable generational theft. Adult professionals and technocrats leveraging their own expertise in total indifference to anything that comes after.
This. I had the same thought that this ‘tacit knowledge’ argument only lasts for a single generation, after which the AI is more likely to hold it, by definition.
Yes, and this technology is so lame. It’s going to enable people, not replace them.
The recognition of the value of tacit knowledge built up over years of hands-on experience may be a key to keeping AI in check. But what I generally pick up is that the framework of the AI debate is materialist (i.e., humans are nothing more than 'meat-machines') and that tacit knowledge is thereby viewed as errantly subjective, or unreliable, and therefore a ('fickle') variable to be eliminated rather than treasured.
Although a nod is given to humans being more than mere 'meat-machines' (humans have soul, spirit, 6th/7th/8th senses and more besides), this fact is swept under the carpet as a nuisance when contemplating the wide-eyed technical and financial possibilities of AI unconstrained.
If AI cannot "ingest" tacit knowledge (which I agree it can't) and one is left only with some input as to how AI is deployed ... well, isn't that a bit like military people claiming smart-bombs are designed to destroy buildings not people ... but when they are deployed ....?
Everything AI ‘knows’ it stole from a human (likely by violating copyright). It’s also prone to hallucinations (see the recent article in the Chicago Sun-Times recommending 4 books for summer reading that don’t exist!).
Yes, that was a bizarre post in the Chicago Sun-Times.
Okay, I'm scared. Now what?
Feel the fear and dive in :-)
It feels like an AI abyss.
Buy a gun and learn to use it 😀
Interesting. What would Tyler Cowen make of this?
https://www.axios.com/2025/05/23/anthropic-ai-deception-risk
Part 1 (I reached the maximum character limit for a comment): I apologize for the long comment. The topic required me to go into more detail.
You have done an excellent job of explaining his rationale and building a case for his statements. Below, I'll share my thoughts on some of the key statements in the post, offering alternative perspectives to the discussion.
1. "Expertise has not disappeared altogether, but he fears it will soon."
I do not believe expertise will disappear anytime soon, though its nature may evolve significantly as technology advances. There is always a tendency to extrapolate current trends and assume that progress will continue indefinitely. However, this is rarely the case, as edge cases and last-mile challenges will keep expertise relevant for decades. Expertise is uniquely suited to addressing situations where automation or AI may struggle. That said, the next generation may be the last to fully develop and apply deep expertise if AI continues to evolve at its current pace. It's not that we won't need expertise—on the contrary, it will remain crucial—but the challenge will lie in ensuring people continue to acquire and retain it.
This reminds me of something I read a few years ago: a doctor said he wouldn't trust a surgeon trained in the last 20 years. He reasoned that modern surgeons rely heavily on robotic arms and don't develop the same level of manual skill. If the robotic arm malfunctions, these surgeons may lack the expertise to adapt or save the situation.
As we increasingly rely on AI and automation, we risk losing the intuition, tacit knowledge, and common sense from hands-on experience and deep learning. These qualities are deeply human and not easily replicated by machines. While AI can assist and enhance decision-making, it cannot replace the expertise from years of practice and problem-solving. Ultimately, expertise isn't disappearing but risks being undervalued or underdeveloped. We can preserve its role in a rapidly changing world by recognizing its importance and adapting to new challenges.
2. "The classroom he imagines is not a sanctuary from disruption but a rehearsal space for it. The goal is not to outsmart the machine but to learn to ask it better questions. A form of meta-literacy emerges—a capacity not merely to consume AI-generated content but to interrogate its assumptions, test its boundaries, and refine its purpose."
Even to ask better questions, you need expertise. I am far better at asking meaningful questions now than ever, and this ability did not come solely from the fundamentals I learned in school. It came from real-world interactions, continuous learning, tacit knowledge, making mistakes, reflecting on those mistakes, and adapting to the constant changes in my field. This process has been compounded by reading and thinking 30-40 hours weekly for over 25 years. Everything compounds over time—whether good habits like reading or bad ones.
If we become passive receivers of information—treating AI outputs as infallible or "God's word"—we will be in serious trouble. Unfortunately, many people will likely fall into this trap, and society could face significant challenges. The ability to critically engage with AI, question its assumptions, and understand its limitations will be essential to avoid such pitfalls.
3. "The psychological toll of this shift is not incidental. It can evoke dislocation, anxiety, even loss."
I believe our generation will likely be the last to start and end a career in the same field. The transition to a world of frequent career changes will be challenging for most people. Those who embrace lifelong learning and adaptability will thrive, but many will struggle to keep up due to lacking resources or foundational skills. As a result, job satisfaction and motivation will likely decline significantly, as not every career change will lead to fulfilling work. This is deeply concerning, and I feel for those left behind.
Developing countries will face the brunt of this disruption. Automation could wipe out a large share of low-skill jobs, and these nations often lack the financial resources to implement effective retraining programs or safety nets like Universal Basic Income (UBI). This economic displacement could lead to widespread social unrest, civil wars, and the rise of populist governments promising simple solutions but failing to address root problems.
Globally, the inability to adapt to job losses caused by AI could trigger political instability. Populist governments, like those we already see emerging in many parts of the world, will likely gain more traction as inequality and frustration grow. Even if we address the economic and psychological consequences of mass unemployment by implementing UBI, UBI alone won't be enough; we will have to redefine the role of work in society. Without a significant focus on the social aspect of technological change in the coming years, we will enter uncharted territory, and without meaningful solutions, this shift could lead to widespread societal fragmentation and instability.
4. "If you are not scared of AI, you are not paying enough attention." He adds: "If you are competing against AI, you will lose."
As I said in a recent comment at https://tinyurl.com/4spz3bwu, I do not think we need to be scared or panic. We need a proactive action plan that begins in schools and emphasizes critical thinking and AI literacy. As with previous technological shifts, the key lies in guiding society to adapt thoughtfully rather than react with fear or blind acceptance.
We are dealing with two distinct groups of people regarding AI adoption. The first group will unquestioningly trust AI outputs, relying on the convenience, speed, and polished authority of its responses without verifying their accuracy. Automation bias, cognitive laziness, and a lack of AI literacy will likely drive this behavior, as we have already seen with tools like Google or GPS. This passive reliance may reinforce risks like misinformation, cognitive atrophy, and blind trust in technology as an "infallible" source of truth.
The second group, however, will critically evaluate AI outputs, particularly in high-stakes life and business scenarios. Over time, this segment will grow, driven by education, cultural shifts, and encounters with AI's limitations (such as hallucinations or biased outputs).
For the majority, though, deliberate and sustained efforts are needed to prevent blind trust in AI from becoming the default behavior. Education systems must prioritize teaching AI literacy and critical thinking early, and developers must build tools that encourage verification and skepticism rather than passive acceptance. Without these intentional steps, we risk creating a society that amplifies the dangers of misinformation and over-reliance rather than harnessing AI's potential to enhance human intelligence and decision-making.
5. "When Cowen turns to geopolitics, his tone tightens. Small countries, he warns, will not build their own AI systems."
I agree with Cowen that small countries are unlikely to build their own AI systems, but I would extend this observation to most mid-sized countries as well. Developing competitive AI models requires immense resources, technical expertise, and robust infrastructure—challenges that only large countries, consortiums of nations, or regional players can address.
However, strategic partnerships, such as the UAE's recent initiative with OpenAI, demonstrate how smaller nations can still significantly shape the global AI landscape. The UAE's decision to provide free ChatGPT Plus subscriptions to all residents as part of its collaboration with OpenAI is a bold example of how governments can leverage partnerships to integrate AI into society effectively. You can read more about it here: https://tinyurl.com/bdee5h23. This initiative is part of the larger Stargate UAE project, which aims to establish the UAE as a central hub for AI innovation. By investing in a one-gigawatt AI supercomputing cluster, the UAE is positioning itself as a key player in the AI ecosystem and demonstrating how smaller nations can punch above their weight through strategic investments and collaborations.
However, such partnerships raise critical questions about governance and ethics. For instance, will governments demand that AI systems align with their specific definitions of "Capital T Truth," customizing models to reflect national ideologies? Or will there be a broader consensus on global truth standards? This could lead to significant fragmentation in how AI systems operate across borders, potentially undermining efforts to create transparent and universally accessible AI systems.
I estimate that we will see about 10 (±3) global AI models in the future, but only a select few will dominate the landscape. These dominant models won't necessarily be restricted to the US and China. Blocs like the EU and other regional powers, leveraging collective resources and regulatory frameworks, are well-positioned to play significant roles. Such consortiums offer a promising path for mid-sized and smaller nations to maintain relevance in the AI race.
Ultimately, the future of AI geopolitics will hinge on how countries and alliances collaborate to shape models that balance innovation, transparency, and ethical considerations. It's not just about who builds the models—it's about how they are governed and adapted to serve humanity's diverse and evolving needs. As AI continues to shape the global landscape, strategic partnerships and thoughtful governance will help ensure its benefits are shared equitably.
Part 2: I will end with a story from Technopoly: The Surrender of Culture to Technology by Neil Postman, which I have just started reading:
"The story, as Socrates tells it to his friend Phaedrus, unfolds in the following way: Thamus once entertained the god Theuth, who was the inventor of many things, including number, calculation, geometry, astronomy, and writing. Theuth exhibited his inventions to King Thamus, claiming that they should be made widely known and available to Egyptians. Socrates continues:
Thamus inquired into the use of each of them, and as Theuth went through them expressed approval or disapproval, according as he judged Theuth’s claims to be well or ill founded. It would take too long to go through all that Thamus is reported to have said for and against each of Theuth’s inventions. But when it came to writing, Theuth declared, "Here is an accomplishment, my lord the King, which will improve both the wisdom and the memory of the Egyptians. I have discovered a sure receipt for memory and wisdom." Thamus replied, "Theuth, my paragon of inventors, the discoverer of an art is not the best judge of the good or harm which will accrue to those who practice it. So it is in this; you, who are the father of writing, have out of fondness for your off-spring attributed to it quite the opposite of its real function. Those who acquire it will cease to exercise their memory and become forgetful; they will rely on writing to bring things to their remembrance by external signs instead of by their own internal resources. What you have discovered is a receipt for recollection, not for memory. And as for wisdom, your pupils will have the reputation for it without the reality: they will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom they will be a burden to society."
What this story teaches us: The same risks Thamus highlighted with writing—dependency, superficial understanding, and the illusion of wisdom—will be amplified in the AI era. Every technology comes with benefits and trade-offs. Societies must acknowledge both to ensure that progress enriches rather than diminishes human potential. Are we going to ensure that this happens? Only time will tell.
Thank you MG, very thought-provoking. I meant to link to my post on expertise and how it will build; we definitely agree on that. Briefly, on the core points you share, because we are 100% on the same page:
On Expertise and Asking Better Questions:
You are absolutely right. The central paradox is that while AI might seem to lower the barrier to entry for information, the ability to ask truly insightful questions and critically evaluate the output requires more expertise, not less. Your surgeon analogy is the perfect illustration of the risk we face: the potential atrophy of deep, embodied skill if we're not intentional. This connects directly to Tyler's idea that the most crucial human role will be that of a skeptical guide, not a passive user, a role grounded in the very tacit knowledge we both describe.
On the Socio-Political Consequences:
I completely agree with your analysis of the psychological toll and its global ramifications. Your extension of the problem to developing nations, highlighting the risks of mass unemployment leading to social unrest and populism, is a crucial point. It underscores that solutions like UBI, while part of the conversation, are insufficient without a deeper rethinking of work, meaning, and social cohesion. This is where a proactive plan is not just preferable to fear, but an absolute necessity to avoid the fragmentation you warn of.
On Geopolitics and Strategic Partnerships:
Your point about strategic partnerships is a fantastic addition. The UAE/OpenAI initiative is a prime example of how the geopolitical landscape is more complex than a simple US/China duopoly. It adds a vital layer to the idea of "epistemic nimbleness": it's not just about how a populace uses tools, but how their governments can strategically partner to gain access, influence, and a stake in the governance of these powerful models. I am not so sure about the EU though, although the UK will be fine!
The Story of Thamus and Theuth:
Thank you for sharing that wonderful passage from Neil Postman. It's the perfect historical parallel. The fear that writing would create "the conceit of wisdom instead of real wisdom" is precisely the challenge we face today, but amplified a thousandfold. The inventor of a technology is rarely the best judge of its long-term effects on society.
Very well said overall; there is much to ponder, and much for us as individuals and as a community to act on mindfully.
An example from another industry (Legal):
But if technology can do 80% of what lawyers traditionally handle, what's our future?
I believe the attorneys still practicing in 10-20 years will be those who excel at what AI cannot do:
• Building genuine trust relationships
• Providing empathy during difficult situations
• Offering nuanced counsel that considers human factors
https://tinyurl.com/39cxux8n
I agree with these 3 points... the question also becomes one of profit, but for sure humans will have roles :-)
We often demand perfection from machines, even though we humans are far from perfect. This is a fundamental paradox that we mostly overlook, which I propose calling the "Expectation Paradox." While I am unsure if this concept has been formally named, it encapsulates the contradiction in holding machines to standards that far exceed our own capabilities.
Our errors typically arise from limitations such as incomplete knowledge, emotional interference, fatigue, or distraction—factors intrinsic to our biological nature. On the other hand, machines make mistakes due to flaws in programming, insufficient or bad data, or their inability to generalize beyond predefined parameters. A critical distinction is that machines lack lived experience, intuition, or a sense of consciousness—qualities that enable humans to navigate uncertain or novel situations effectively.
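A toy sketch of the "Expectation Paradox" in Python (every number below is a hypothetical assumption, not measured data): judged against a human benchmark, a machine that outperforms us can still fail the perfection standard we reflexively apply to it.

```python
# All numbers are hypothetical, chosen only to illustrate the paradox.
HUMAN_ERROR_RATE = 0.03        # assumed: a skilled professional on a routine task
MACHINE_ERROR_RATE = 0.01      # assumed: an automated system on the same task
PERFECTION_STANDARD = 0.0      # the standard we often demand of machines

def acceptable(error_rate: float, benchmark: float) -> bool:
    """A consistent standard judges any agent, human or machine,
    against the same benchmark."""
    return error_rate <= benchmark

# The machine beats the human benchmark yet fails the perfection standard.
print(acceptable(MACHINE_ERROR_RATE, HUMAN_ERROR_RATE))      # True
print(acceptable(MACHINE_ERROR_RATE, PERFECTION_STANDARD))   # False
```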
To truly advance AI and robotics (used interchangeably here), three significant challenges must be addressed:
1. Simulating Sensory Perception and Subjective Interpretation: Machines must be equipped to interact meaningfully with the real world. This requires processing sensory input and interpreting experiences in a way that mirrors subjective human understanding. Achieving this will necessitate a combination of advanced sensory systems and a framework for contextual comprehension.
2. Achieving Consciousness: Consciousness is one of the most profound and complex challenges in AI. It entails self-awareness, subjective experience, and the ability to reflect on existence. While increasing computational power may enhance performance, it is unlikely to solve the consciousness problem outright. If machines were to achieve consciousness, they would fundamentally redefine concepts of intelligence, agency, and morality.
3. Attaining Human-Level Dexterity: Physical dexterity and adaptability are essential for machines to perform tasks with the precision, flexibility, and problem-solving ability of humans. Despite significant progress in robotic hardware and control systems, achieving human-level dexterity remains a formidable hurdle.
If and when these challenges are overcome, the potential for machines to replace humans across all fields becomes a real possibility.
By the way, I also agree that a proactive plan is the necessary response to this shift. My use of Cowen's word "scared" isn't meant to inspire panic, but rather the kind of urgent alertness that leads to the very educational and institutional reforms you advocate for.
In my experience, panic and fear often go hand in hand, and they tend to drive decisions more than rationality or foresight. However, I'm not expecting significant progress anytime soon when it comes to meaningful reforms, be it in education or government. I believe real change will only happen when the system is forced to act because it has no other choice.
The problem is deeply rooted in the political system in the United States. It is primarily focused on short-term goals, namely the next election cycle. Politicians rarely, if ever, engage in long-term planning. While laws and policies are often presented as forward-thinking, they are more often superficial gestures meant to create the appearance of progress rather than actual solutions.
True forward-thinking requires two critical abilities:
1. Prediction: The capacity to anticipate future challenges and opportunities.
2. Second-order thinking: The ability to consider the ripple effects and long-term consequences of decisions.
Unfortunately, these skills are glaringly absent in most of the political class. Instead, decision-making is heavily influenced by lobbyists and special interest groups, which steer policies to serve their agendas rather than the public good. This creates a vicious cycle where politicians cater to powerful lobbies to maintain funding and influence, further undermining the potential for meaningful reform.
In short, we are stuck in a system that prioritizes short-term wins over long-term solutions, and the influence of money and lobbying only exacerbates the problem. Until these fundamental issues are addressed, it’s hard to imagine a future where real progress is made. In other words, we are doomed unless meaningful change or reform comes through collective action from the people, creating pressure that the system cannot ignore.
Here is a post published today that talks about some of the same issues.
https://www.slowboring.com/p/politicians-need-to-take-ai-progress
I wrote this recently. There is a huge flood coming because of AI. An epistemological fatigue will follow. Thomism can lead people back to Truth.
https://stilicho.substack.com/p/we-were-not-meant-to-know-everythingonly
pardon my poor manners
I can’t tell the guy from Jordan Peterson these days 😆