The erosion of expertise
Last week I ran an AI training program for a major financial institution. Near the end of the program I was asked about my biggest fears around AI. I was able to answer immediately: my biggest fear is the dumbing down of society.
What happens to a society when intelligence itself becomes a commodity? That is the question posed throughout the National Academy of Sciences 2025 report, Artificial Intelligence and the Future of Work. The work is not prophecy, nor should it be mistaken for one of Silicon Valley's breathless manifestos. It is, rather, a sober, meticulous reckoning with the ambiguous, disquieting, and often paradoxical forces unleashed by the rise of AI. Strategic, unvarnished, and disturbingly persuasive.
The authors are not alarmists, but their findings demand our attention. The committee, featuring renowned researchers such as Erik Brynjolfsson, David Autor, Tom Mitchell, and others, reminds us that AI, as a general-purpose technology, joins the ranks of electricity and the steam engine, tools that did not merely make us faster but rewrote the coordinates of productivity.
The report notes, “AI is advancing exceptionally rapidly,” reflecting key breakthroughs in transformer models and the rise of LLMs. This is not just another wave. It is the undertow.
Prompted by the 2021 National Defense Authorization Act and building on the 2017 report Information Technology and the U.S. Workforce, this study arrives at what the committee terms an “inflection point.” LLMs like GPT-4 have passed AP exams, the Uniform Bar Exam, and now generate text, write code, and offer strategic advice across domains. The committee identifies this as a “major new surge of progress” in AI capabilities.
The trajectories that AI-enabled futures might take can lead to outcomes of profound benefit or significant disruption. The goal of this report is thus twofold: to responsibly inform about the current state and capabilities of AI as they relate to the workforce and to offer insights that prepare us for the challenges ahead and opportunities that will arise. It also considers how AI is likely to augment human labor, reshape job markets, and influence workforce dynamics.
The report's first significant contention is as clinical as it is disturbing: AI will upend work, but not in the ways most people expect. Forget the apocalyptic imagery of robotic pink slips. The more insidious threat is subtler: the erosion of expertise. Not elimination, but attrition. As the report states (echoing a concern I have been raising for some time),
“AI is likely to substitute for human expertise, eroding the value of such expertise”
…in areas such as summarizing legal documents, preparing tax returns, programming, writing reports, business process automation, and many other domains.
Historical parallels abound. When the factory displaced the artisan, it did not destroy labor outright; it fragmented it, subordinated it. The loom did not kill the weaver. It made him or her an operator. Likewise, as AI infiltrates knowledge work, it does not replace the paralegal, the programmer, or the analyst. It makes them assistants to their algorithmic tools. As the report recalls, the shift from artisanal to industrial to digital labor stranded valuable skills. More than 60% of current employment is in occupations that did not exist in 1940. That transformation was not benign.
The report recognizes a profound irony: AI's greatest promise is also its greatest peril. It could augment human capabilities, “enabling workers to use their expertise more effectively,” as Finding 7 puts it, or it could be used to surveil, deskill, and disempower. Whether AI is used to replace or to enhance labor is, as the committee states, a choice. “Society has a choice in whether and where AI is used to augment human expertise versus substitute for it.” I think this is rather optimistic: profit is the motivating factor for corporations, and efficiency for governments, and AI promises both.
Education and Work
This is perhaps the central theme of the report: the future of work is not written in code, but in policy. Technology sets the stage, but institutions write the script. The text devotes particular attention to how education must change to keep up.
“AI will have significant implications for education at all levels, from primary education, through college, through continuing education of the workforce. It will drive the demand for education in response to shifting job requirements, and the supply of education as AI provides opportunities to deliver education in new ways.”
It is not simply a matter of reskilling, but of reimagining education as a lifelong project, with AI both transforming what we teach and how we teach it. There is cautious optimism here, about personalized learning, adaptive systems, and intelligent tutoring like Khanmigo. But there is realism, too: without access, without equity, the AI revolution in education risks becoming just another chapter in the long history of opportunity hoarded.
Measuring the Impact
The report's insistence on better measurement is both technical and moral. If we cannot track AI's impacts in real time, how can we possibly respond to them? The authors advocate for new public-private partnerships, for metrics that capture not just GDP but “task substitution, expertise depreciation, and wage polarization.” This is not mere technocratic concern; it is, as the authors argue, an epistemological imperative. While Anthropic has started to provide some insight into AI usage, such self-reporting is not neutral. You cannot govern what you cannot see.
Where the report departs from the usual genre of future-of-work white papers is in its intellectual clarity. It makes no promises of utopia, nor does it indulge in dystopia. It acknowledges deep uncertainty, “wide error bands and a range of contingencies,” while refusing to pretend that markets alone will yield just outcomes. As one passage bluntly puts it,
“Absent well-functioning institutions, the committee does not presume that market outcomes will be socially desirable ones.”
And yet, the report is not without hope. In Chapter 7, it enumerates dozens of concrete actions under the heading “Opportunities to Influence.” These include investing in explainable AI research, funding AI for high-social-value sectors like education and health, creating public-private data infrastructures, and supporting worker mobility through short-term retraining. It even dares to imagine a “career roadmap,” a continuously updated guide for navigating job transitions in the AI era.
Expertise
The stakes, as the report reminds us, are enormous. The disruption is already underway. And yet the institutions that should be leading this transition (governments, universities, labor unions, civic organizations) remain hobbled by inertia or ignorance. The danger is not just technological displacement. It is societal paralysis. But paralysis is not inevitable.
The report strongly reminds us:
The future impact of AI on the demand for expertise is uncertain, but three plausible scenarios emerge.
First, AI could accelerate occupational polarization, automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.
Second, AI might advance to outcompete humans across nearly all domains, greatly reducing the value of human labor and creating significant income distribution challenges.
Third, a more speculative scenario envisions a future in which the demand for expertise borrows attributes from both elite and mass expertise, leading to a reinstatement of the value of mass expertise in new domains.
Let me address these:
Scenario 1: AI could accelerate occupational polarization, automating more nonroutine tasks and increasing the demand for elite expertise while displacing middle-skill workers.
This scenario feels like the most probable extension of current trends, at least in the short to medium term. We have already witnessed decades of technology-driven polarization where routine middle-skill jobs have been hollowed out, while demand for high-skill, often “elite,” expertise and low-skill service jobs has grown. AI, particularly with its advancing capabilities in handling tasks previously considered nonroutine (like aspects of legal work, coding, or financial analysis), seems poised to accelerate this. The “displacement” of middle-skill workers could become more acute as AI encroaches further into cognitive tasks that were once the domain of a broader professional class. The demand for “elite expertise,” those who can leverage AI at the highest levels, innovate with it, or manage uniquely complex, human-centric problems that remain beyond AI's grasp, would likely intensify.
This path risks further entrenching income inequality and could shrink the “middle” of the labor market, making upward mobility more challenging for many. It underscores the report's emphasis on the need for policies that support worker transitions and potentially rethink how “value” is distributed when productivity gains are driven by AI benefiting a smaller segment of the workforce. Education and retraining would need to adapt rapidly to equip people for either the highly elite roles or the new types of work that might emerge around AI.
Scenario 2: AI might advance to outcompete humans across nearly all domains, greatly reducing the value of human labor and creating significant income distribution challenges.
While a technologically fascinating and existentially concerning prospect, I think this scenario seems less likely in the immediate future. Current AI, for all its advances, still struggles with true general intelligence, common-sense reasoning in novel situations, physical dexterity in uncontrolled environments, and deeply nuanced human interaction and emotional intelligence. However, “nearly all domains” is a strong claim. Even if AI outcompetes humans in a majority of current economic tasks, the societal upheaval would be immense. The reduction in the economic value of human labor would fundamentally challenge our current socioeconomic systems, which are largely built around labor as the primary means of income distribution.
If this scenario were to begin materializing, the income distribution challenges would be paramount. Discussions around universal basic income, redefined social safety nets, and new frameworks for societal contribution and reward would move from academic debate to urgent necessity. The very definition of “work” and human purpose in an economic sense would need to be re-evaluated. This scenario highlights the most profound, long-term philosophical and governance questions AI poses.
Scenario 3: A more speculative scenario envisions a future in which the demand for expertise borrows attributes from both elite and mass expertise, leading to a reinstatement of the value of mass expertise in new domains.
This is the most hopeful, and perhaps the most actively “shapeable,” of the three scenarios. It suggests a pathway where AI doesn't just bifurcate the labor market but potentially enriches it by creating new roles that blend human skills with AI augmentation. The idea of “translational expertise,” as discussed in the report, where individuals use AI to perform tasks that previously required deeper, more specialized (elite) knowledge, is compelling. This could mean a broader range of workers are empowered by AI tools to take on more complex, more valuable work, without necessarily needing the years of traditional training currently associated with elite professions. Think of a paramedic equipped with AI diagnostic tools performing tasks closer to that of an ER doctor in specific situations, or a small business owner using AI for sophisticated market analysis previously only accessible to large corporations.
This scenario offers a path to more inclusive growth and could potentially mitigate the extreme polarization of the first scenario. However, it heavily relies on intentional choices: designing AI for human augmentation rather than pure replacement, significant investment in new forms of education and lifelong learning focused on these new hybrid skills (critical thinking, AI collaboration, judgment in the loop), and workplace reorganization that embraces these new human-AI teams. It speaks to the report's core message that the future isn't just something that happens to us; it's something we can actively build through policy, investment, and a focus on human-centric AI development.
In essence, while the first scenario appears to be the default trajectory based on current trends, the third offers a more optimistic vision that requires conscious effort and strategic intervention to realize. The second scenario, though less probable in the near term, serves as a crucial reminder of the transformative power we are dealing with and the need for long-range foresight.
Questions
In the end, the report offers no panaceas. What it does offer is a map of the terrain and a set of questions we cannot afford to ignore.
What forms of expertise are we willing to preserve?
What kinds of work do we value?
Who gets to decide whether AI is used to empower or to exploit?
There are no easy answers. But there is, in these pages, a call, not to arms, but to vigilance. As the report concludes:
“Policymakers, business leaders, AI researchers, employers, and workers all have an opportunity to shape the future...in ways that are consistent with societal values and goals.”
The report echoes many of the discussions from my recent Substack posts. In a world where machines can mimic the mind, what will distinguish the human? That is the question we must now ask. And answer.
Stay curious
Colin
I found this to be an interesting and engaging article summarising the state of play with AI, with an emphasis on where the rubber hits the road, i.e. the world of work, and the knock-on effects of majorly changing that world of work.
As I've said in earlier comments, technology is a form of power (like the emerging steam engine in its time). The current power structures are seriously asymmetrical in relation to the populace they are supposed to serve and govern, and that imbalance of power is worsening. So the likely outcome will be 'more of the same', with current power-brokers doing their best to appropriate the power of AI for their own agendas.
However, I believe there are a sufficient number of savvy people amongst the masses who will find ways to use AI in a life-enhancing way to maybe, just maybe, create localised, decentralised economies where the old-order top-down structures simply become irrelevant.
Menace or paradise? Neither nor. In the very long run there will be another big thing after AI, perhaps in 25, 50 or 100 years.
The discussion reminds me a little of the early years of TV. Edward R. Murrow said it was our task to use television to teach and illuminate; otherwise it is merely wires and lights in a box. This moment of change offers similar chances. I'd like to put forward two theses: 1. There is no useful collective prediction of AI's development. It all depends on local conditions, different groups, educational levels, and fields of expertise. 2. Education could and should change dramatically, with or without AI, but especially with AI. We should have better, decentralized education, with far greater variety, specialization, and competitive approaches. That includes different personnel and different educational careers.
Finally, analytics as a prerequisite for AI will become even more important.