“If you are not scared of AI, you are not paying enough attention.”
Tyler Cowen begins his recent talk on the economics of AI with a confession:
“I've spent the last three years feeling like an idiot.”
Cowen is neither a technophobe, nor a Luddite, nor a hype merchant, but a deeply engaged user and observer of AI. “Feeling like an idiot” is the unguarded remark of a man whose confidence in expertise has been quietly and steadily unsettled. For Cowen, a lifelong economist, prolific best-selling author, close confidant of senior AI lab executives, and influential public intellectual, the incursion of AI into domains once protected by human judgment does not provoke indignation; it elicits awe.
Cowen makes his case through lived fragments, extensive personal interactions with a machine that has begun to speak fluently in domains he once thought human. A chatbot correcting his misconceptions about the Mayan language. A dinner menu translated mid-bite from a language he cannot name. His bloodwork assessed with clinical precision by a non-human diagnostician. These moments, he suggests, are not novelties but signals. They mark a transformation in how authority is experienced, not as embodied knowledge but as frictionless access. Expertise has not disappeared altogether, but he fears it soon will.
“Smarter than Us”
AI, in Cowen’s view, is not yet omniscient, but it has become omnipresent. It is better than us, not always, but often enough, and in more domains than we are comfortable admitting. Its judgments are sometimes flawed, or maybe the way we ask it is. AI listens, advises, and learns, at speeds and scales that are alien to the rhythms of human cognition. The shift, he implies, is not from good to bad expertise, but from embodied to externalized cognition, from slow apprenticeship to instant competence.
This is where Cowen’s self-identification as an “idiot” acquires weight. He does not mean to glorify ignorance. He means to valorize receptivity. To be the idiot in the age of AI is to stand at the edge of one’s own knowledge and resist the impulse to retreat. It is not a capitulation; it is a discipline.
Career Agility
Yet this stance, what he calls the “idiot advantage,” requires more than intellectual flexibility. It demands moral stamina: the quiet courage to accept that knowledge is no longer a fixed asset but a moving target. In an economy no longer structured around predictable careers, to admit that one must reinvent oneself repeatedly is not a rhetorical flourish. It is an existential directive. The world Cowen describes is one where professions once thought immune to disruption, medicine, law, economics, must now justify their relevance not against other humans, but against machines. To meet that challenge is not simply a matter of skill acquisition; it is an ethic, a commitment to lifelong curiosity and learning.
For Cowen, the implications are deeply personal. He recalls choosing his path at age 13, and staying the course for fifty years. That linearity, he tells his audience, is vanishing. The new career arc is a Möbius strip, continuous yet disorienting, a loop that requires constant reorientation. The psychological toll of this shift is not incidental. It can evoke dislocation, anxiety, even loss. But it also offers, for some, a peculiar liberation, the chance to loosen the grip of professional identity and treat one’s life as a continuous draft rather than a finished script. The idiot advantage here includes not only cognitive agility but emotional resilience: a kind of awkward bravery, the capacity to remain supple under pressure, unfinished by design.
This ethic carries over into Cowen’s analogy of AI as a creature to be trained. Not commanded, trained. Like a dog. Or perhaps more precisely, like a horse: intelligent, responsive, but fundamentally other. The human role is not to dominate but to guide, to shape through repetition, observation, and subtle correction. This, Cowen implies, is the new form of expertise, not the stockpiling of knowledge, but the choreography of feedback.
This metaphor contains an underappreciated dignity. It suggests that working with AI is not clerical but relational, which is the core of Cowen’s advice. The skill lies in knowing when the machine is hallucinating, when it is merely plausible rather than precise, and when its fluency conceals something ultimately inert, uncomprehending, unfeeling, and structurally indifferent. To know this is not to be smarter than the machine. It is to be attuned to a different order of intelligence, less encyclopedic, more skeptical.
“If you are not scared of AI, you are not paying enough attention.” He adds: “If you are competing against AI you will lose.”
Cowen is certainly no romantic. He does not pretend that this transition will be equitable. The power to train a model, to understand its limits, to shape its output, these are privileges. Sam Altman’s prediction that billion-dollar companies will soon be staffed by a single individual and a swarm of AIs is not science fiction. It is already sketching the outlines of a new political economy. Scale is collapsing. Competence is becoming asymmetric. He believes that the most dexterous will soar; the rest will serve or recede.
Cowen acknowledges this without sanctimony. The idiot advantage may offer hope, but it cannot erase structural asymmetries. It may teach us how to learn again, but it does not guarantee access to the tools of learning. The ability to pivot is itself a form of capital. The moral question becomes not only how to navigate this shift individually, but how to ensure that the capacity for adaptation does not become a luxury good. In a world where leverage is unevenly distributed, the idiot must also become an institutional designer, an architect of systems that prioritize responsiveness over rigidity, open learning loops over hierarchical gatekeeping, and communal resilience over winner-take-all competition. Or as the Microsoft software engineer Brian Krabach writes, shift your mindset to “What can I do now that I couldn’t before?”
Silicon Mentor?
Still, Cowen sees in education a place to begin. He no longer treats AI as a threat to pedagogy. He requires his students to use it. The classroom he imagines is not a sanctuary from disruption, but a rehearsal space for it. The goal is not to outsmart the machine, but to learn how to ask it better questions. What emerges is a form of meta-literacy, a capacity not merely to consume AI-generated content, but to interrogate its assumptions, test its boundaries, and refine its purpose.
Yet even here, shadows persist. If the interlocutor is silicon, what becomes of mentorship? If judgment is a distributed function, how do we teach discernment? These are not rhetorical flourishes. They are open questions. But perhaps the future of education will depend less on answers and more on attunement, on cultivating in students the sensibility to recognize when something doesn’t feel right, when plausibility masks incoherence. That kind of discernment may require returning to what AI cannot yet replicate: embodied intuition, tacit knowledge, moral hesitation, the quiet intelligence of lived experience. Perhaps attunement will come not just from sharper minds but from more attentive bodies.
Geopolitics
When Cowen turns to geopolitics, his tone tightens. Small countries, he warns, will not build their own AI systems. They will choose: American or Chinese. The implications are not just technological but cultural, epistemic, and moral. The new colonialism will be quiet. It will arrive through licensing agreements and user interfaces. It will distribute knowledge while consolidating influence.
I am not so sure; perhaps the idiot advantage holds here as well. Not as capitulation, but as agility. For nations too small to shape the tools, there remains the freedom to choose how to use them, improvisationally, selectively, subversively. Sovereignty, in this framing, becomes a function of epistemic nimbleness: the cultivation of a populace capable of switching stacks, of reading the politics of code, of mastering tools that were not built for them but can be repurposed nonetheless. Epistemic resilience becomes a national strategy, one built not on monumental infrastructure, but on distributed literacy, local experimentation, and rapid adaptation.
Cowen ends with a kind of deadpan optimism. Group chats, he notes, are less ridiculous. Nonsense is easier to spot. In a world saturated by information, the fear of being called out has imposed a kind of ambient rationality. This may be a modest gain, but it is certainly not trivial.
Overall, he is under no illusion. The same tools that fact-check can fabricate. The same systems that clarify can distort. What matters, then, is not just what AI can do, but what we choose to do with it, and whether we retain the patience to remain idiotic long enough to learn what it truly can help with. That patience, I would add, is not mere tolerance of confusion. It is a deliberate rejection of premature certainty. It is, perhaps, the only remedy we have for epistemological disequilibrium.
This is where Cowen’s talk leaves its most important work undone. It is not a forecast. It is a provocation. He is not mapping the future. He is inviting us to stumble into it, unsteady but alert. We must bring institutions that adapt, values that flex without breaking, and vocabularies that stretch to describe realities we have not yet named, and that perhaps only the idiot, in all their awkward bravery, is prepared to face.
Those most at ease with their own fallibility may find themselves best equipped, not simply to survive, but to continuously shape the terms of artificial intelligence.
Tyler’s proposed timeline is questionable. While I have worked with several major banks that have barely scratched the surface of AI implementation, many others, such as the Spanish bank BBVA, JP Morgan, Morgan Stanley, ING, Citi, and Bank of America, have thousands of AI processes operational. My belief is not in a “coming wave” within two years, but in a steady, purposeful increase in AI process operations over the next decade. However, let’s face it, in ten years there will be a major shift in employment roles as AI becomes entrenched in organizations.
The length of tasks (measured by how long they take human professionals) that generalist autonomous frontier model agents can complete with 50% reliability has been doubling approximately every 7 months for the last 6 years. The trend predicts that in under a decade we will see AI agents that can independently complete a large fraction of tasks in hours that currently take humans days or weeks.
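The arithmetic behind that projection is worth making concrete. Below is a minimal back-of-the-envelope sketch in Python, assuming the 7-month doubling time cited above and an illustrative starting horizon of roughly one hour of human work. The starting point is my assumption, not a figure from the talk, so read the output as the shape of the curve, not a forecast.

```python
# Back-of-the-envelope extrapolation of the task-horizon doubling trend.
# The 7-month doubling time comes from the claim above; the ~1-hour
# starting horizon is an illustrative assumption, not a measured figure.

DOUBLING_MONTHS = 7
START_HORIZON_HOURS = 1.0  # assumed 50%-reliability horizon today

for years_ahead in (1, 2, 5, 8):
    months = 12 * years_ahead
    horizon = START_HORIZON_HOURS * 2 ** (months / DOUBLING_MONTHS)
    weeks = horizon / 40  # expressed in 40-hour human work weeks
    print(f"+{years_ahead} years: ~{horizon:,.0f} hours (~{weeks:,.1f} work weeks)")
```

On these assumptions, tasks that take a human a day or two come within the horizon in two to three years, and multi-week tasks within about five, which is the intuition behind the “under a decade” claim.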
Tacit Knowledge
I would have liked Cowen to speak more of tacit knowledge. As large language models proliferate across offices and industries, there is a quiet assumption that competence can be simulated, and thus substituted. But AI, for all its linguistic virtuosity, operates without a body, without memory in the human sense, without judgment shaped by apprenticeship. It can answer questions, but it cannot yet experience difficulty. It can generate, but it does not struggle as we humans do.
What AI lacks is not data but disposition. It does not inhabit the world; it calculates and prescribes from a distance. And yet, as workplaces and institutions increasingly integrate these tools not as assistants but as surrogates, pattern recognition will come to substitute for human decision-making. The lesson, I believe, is not that we should resist AI (Tyler rightly notes that competing against it is a losing game), but that we should anchor its use in the irreducible tacit knowledge of human practitioners. The craftsman with decades of attention behind their choices, the nurse whose diagnostic hunch has been tempered by thousands of bodies and hours, these are not forms of knowledge that AI can ingest. But they can shape the way AI is deployed.
Tacit knowledge can serve as a buffer against the worst tendencies of automation: its speed without reflection, its abstraction without context. In the hands of someone trained not only to use but to interpret, AI can become an extension of skill rather than a replacement for it. The question is not whether machines will know, but whether we will remember what knowing demands.
Skill is passed not in torrents of information but in conversations, gestures, contexts, and shared attention.
Stay curious
Colin
Tyler’s video talk is here - The Economics of Artificial Intelligence | Tyler Cowen