“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” ~ Joseph Weizenbaum on his early AI system
A cautionary tale about the human brain and its need for connection with the artificial.
How we think, we think
Metaphors quietly shape our thinking, slipping into our minds without permission and influencing how we define intelligence and everything else. They promise clarity but often lead us into comfortable illusions. Engaging with metaphors means grappling with a strange irony: our pursuit of understanding is guided by the very abstractions that can mislead us.
In his seminal essay “Politics and the English Language,” George Orwell warns:
“But if thought corrupts language, language can also corrupt thought.”
Orwell’s caution signals a deeper reality: metaphors wield power precisely because they define conceptual landscapes, subtly determining the boundaries of possible thought.
In my own wrestling with understanding intelligence, that ever-elusive concept, I have come to recognize how deeply metaphors infiltrate even the most rigorous scientific dialogues. Early in my research, enamored with the metaphor of the brain as a ‘computer,’ I painstakingly tried to model human cognition through algorithms and data streams, believing, following Turing’s suggestion, that I was capturing the essence of thought.
But soon enough, the metaphor began to strain, its limitations glaringly evident whenever my models failed to replicate even basic human intuition or creativity. It is an irony that intelligence so often succumbs to metaphorical description: we speak of minds as ‘computers,’ brains as ‘hard drives,’ memory as ‘cache,’ consciousness as ‘software.’ Such language, alluring in its clarity, risks imprisoning thought. Our metaphors are convenient shortcuts, yet they obscure as much as they reveal.
Imitation Game
The mid-twentieth century provides an instructive moment. Alan Turing’s ‘Turing Test,’ elegantly simple in premise yet profoundly influential, encapsulates the seductive danger of metaphor. The idea that intelligence can be ‘tested’ through imitation, by indistinguishably mimicking human behavior, set generations of researchers on quests fueled by metaphorical promises. One might argue that such metaphors, despite their limitations, remain essential tools for making complex concepts accessible. Yet, this was intellectual alchemy: the dream of transmuting imitation into authentic cognition, spurred on by the powerful image of a machine seamlessly masquerading as a human.
Professor Joseph Weizenbaum
The history of AI tells cautionary tales. Joseph Weizenbaum’s ELIZA, a deceptively simple psychotherapy bot from the 1960s, startled even its creator with its conversational prowess. Users confided in ELIZA intimately and were emotionally moved by its responses, despite its lack of genuine understanding. Weizenbaum later lamented his creation, recognizing metaphor's seductive power in conflating apparent responsiveness with genuine comprehension. His words carry an enduring caution:
“What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
Weizenbaum, unsettled by the ease with which people attributed human-like intelligence to a rudimentary program, warned that
“no other organism, and certainly no computer, can be made to confront genuine human problems in human terms.”
He saw in ELIZA a troubling reflection of our susceptibility to the illusion of understanding, where metaphor creates an intellectual mirage. His deeper concern was not with ELIZA, but with the willingness of people, including scientists, to anthropomorphize machines, a tendency he feared could lead to a dangerous erosion of what it means to be human.
“The myth that computer understanding is the same as human understanding,” he cautioned, “is one of the most dangerous illusions of our time.”
The story goes that Weizenbaum’s own secretary, after only a few exchanges with ELIZA, asked him to leave the room so she could continue her conversation in private. “Some subjects have been very hard to convince that ELIZA is not human,” Weizenbaum observed. “I believe this anecdote testifies to the success with which the program maintains the illusion of understanding.”
In 1976, he published Computer Power and Human Reason: From Judgment to Calculation, a long meditation on why people are willing to believe that a simple machine might understand their complex human emotions. He believed that the simulation of intelligence, rather than intelligence itself, was enough to fool people.
People, Weizenbaum reasoned, had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.
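It is worth seeing just how little machinery is needed to produce that effect. What follows is a minimal sketch in Python of ELIZA-style keyword matching and pronoun “reflection”; it is not Weizenbaum’s original 1966 MAD-SLIP program, and the patterns and templates here are illustrative inventions of mine, but the mechanism is the same in spirit:

```python
import random
import re

# Illustrative ELIZA-style responder: keyword patterns plus pronoun
# "reflection". A sketch only, not Weizenbaum's original program.

REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "I see. Can you elaborate?"]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo sounds attentive."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching template, filled with the reflected input."""
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.lower().strip())
        if match:
            return random.choice(templates).format(
                *(reflect(g) for g in match.groups()))
    return "Please tell me your problem."

print(respond("I feel lonely since my dog died"))
# e.g. "Why do you feel lonely since your dog died?"
```

There is no model of the speaker here, no memory, no comprehension, only string surgery. That is precisely what makes the secretary anecdote so unsettling.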
Intelligence is more than computation
Metaphors have not only defined intelligence but also distorted our visions of its possibilities and perils. Today, as Artificial Intelligence grows breathtakingly complex, advancing in ways Turing himself could scarcely have imagined, we find ourselves perilously dependent on the same seductive metaphors. The ‘neural network,’ for instance, conjures images of interconnected neurons, effortlessly implying biological parallels. Yet beneath the comforting linguistic cloak lies a mathematical construct, devoid of genuine biological mechanisms such as synaptic plasticity, adaptive neuronal connectivity, or the organic learning processes rooted in evolutionary biology.
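To make the point concrete, here is a toy “layer” of a neural network in Python with NumPy. The sizes and values are arbitrary and assumed for illustration; this is not any particular framework’s implementation, just the bare arithmetic the biological-sounding name dresses up:

```python
import numpy as np

# One "layer" of a so-called neural network: despite the name, it is
# only a matrix multiplication followed by a fixed nonlinearity.
# Sizes and values are arbitrary, chosen purely for illustration.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # the "synapses": a 4x3 array of floats
b = np.zeros(3)               # the "resting potentials": a bias vector

def layer(x: np.ndarray) -> np.ndarray:
    """y = max(0, xW + b): no spikes, no plasticity, no chemistry."""
    return np.maximum(0.0, x @ W + b)

x = rng.normal(size=4)        # an "input stimulus": four numbers
print(layer(x))               # three numbers out; that is the whole mechanism
```

Stacking thousands of such layers changes the scale, not the nature, of the operation: the metaphor of neurons sits on top of nothing more than weighted sums.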
This tension between metaphorical promise and its inherent constraints becomes even clearer through the story of Marvin Minsky.
Minsky, a revered pioneer of Artificial Intelligence, famously dismissed emotion and subjective experience as inconvenient ‘bugs’ in the computational machine of the mind. His metaphor was powerful yet reductive: it enabled incredible leaps in AI research but also dangerously narrowed our vision of human cognition. By treating intelligence purely as computation, Minsky risked reducing our understanding of humanity.
Thus, metaphors are double-edged. They inspire leaps of genius but also enforce conceptual boundaries. They liberate yet constrain; clarify yet distort.
Brain Rules
Richard Rhodes, chronicler of humanity’s nuclear ambitions, reminds us poignantly of the cost of metaphor run amok. When Robert Oppenheimer invoked Vishnu’s terrifying line from the Bhagavad Gita at the Trinity Test,
“Now I am become Death, the destroyer of worlds,”
… the metaphor crystallized human ingenuity’s awful duality: creation entwined irrevocably with destruction. Intelligence, like atomic energy, is forever shadowed by the metaphors it chooses to live by.
Douglas Hofstadter, in his Pulitzer Prize-winning masterpiece Gödel, Escher, Bach, suggests that intelligence may be little more than:
“an emergent phenomenon rooted deeply in metaphor.”
Similarly, George Lakoff’s groundbreaking work highlights how deeply cognition is shaped by metaphorical constructs. In Metaphors We Live By, the classic book he co-wrote with philosopher Mark Johnson, he writes:
“Because we reason in terms of metaphor, the metaphors we use determine a great deal about how we live our lives.”
Lakoff argues that metaphors such as ‘argument as war’ (“He attacked every weak point in my argument”) or ‘time as money’ (“I spent an hour of my time”) profoundly influence not just language, but the very structures of our thinking and reasoning processes. Both Hofstadter and Lakoff emphasize that the brain thrives on analogies and metaphors, persistently relating and translating experience, thus constructing intricate layers of meaning that define our perception of reality.
Cognitive Offloading
I am currently engaged in a project on cognitive offloading to AI, and the early research has shown me that our conceptual tools are powerful yet perilous, poetic yet potentially misleading. We must question relentlessly, probing beneath metaphorical surfaces and resisting intellectual complacency. Only then, perhaps, can we hope to grasp the elusive reality of intelligence, and what we give up to the machines.
The cautionary tale of ELIZA and Weizenbaum’s warnings serve as reminders that metaphors can obscure as much as they reveal. If we uncritically accept them, we risk mistaking imitation for understanding, projection for truth, and analogy for reality.
“Please tell me your problem” was ELIZA’s opening prompt. It could not only receive input in natural language; it gave the “illusion of understanding,” just as our AIs do today. The metaphors we construct and the machines we engage with shape the way we live - choose wisely.
Stay curious
Colin
Great piece! You highlight how metaphors shape thought, but I'd push further—metaphors don’t just frame thinking, they create self-reinforcing feedback loops that shape behavior and reality. When we frame AI as intelligence, we don’t just misinterpret it; we build systems and policies that reinforce the illusion. The real challenge isn’t just critiquing flawed metaphors but introducing better ones—perhaps AI as a mirror that reflects biases rather than a "brain" that thinks. What metaphor do you think could break the current bind?
I'm a huge Orwell fan (my handle says it all) and I did read that awesome essay. Orwell knew what he was talking about, and everything he wrote was prescient about the times we're living in, some seventy-odd years later.
With that being said, guilty as charged. I too have succumbed to comparing the brain to a computer. I think part of the reason is that computers were designed from the start to offload the more repetitive tasks requiring "brain power", AKA calculations, which they excel at. It was a simple emulation. Once this was accomplished, then came the inevitable escalation: if we can get machines to perform the repetitive tasks, maybe we can coax them to perform some analysis too. And on and on.
Comparing the brain to a computer might be called reverse anthropomorphizing. We have a tendency to anthropomorphize everything. Bugs Bunny predates ENIAC by a decade. Have you ever seen a rabbit walk on its hind legs and talk? Me neither.
So, we resort to metaphors.
It's easy to compare a brain to a computer because computers were designed from the start to emulate what the brain does. However, we kid ourselves if we think we can come even close to the real complexity of this amazing machine we call a brain. Even modern brain science, armed with fMRIs and PET scans, still doesn't have a complete grasp of how the brain/mind works. Its function is dependent on physical structures at every level - microscopic to macroscopic. It's dependent on a delicate balance of neurotransmitters. Most of all, it's dependent on electricity - something that makes the metaphorical temptation that much greater.
Transistors as neurons? Well, not quite.
It's just as easy to compare our senses to input devices. This is one area where AI falters. We can connect a camera so it can "see". We can connect a microphone so it can "hear". We can even attach tactile sensors so it can "feel" and chemical detectors so it can "smell" and "taste". None of these devices match what our senses do instinctively - what they evolved to do over millions of years.
Computers can't have an "instinct to survive", although we can emulate one and get a machine to act as if it really had it. But it's just an emulation.
In closing - before I start babbling like a ChatGPT hallucination - as I've often stated, the gravest danger of AI is people treating it as real, when it's far from it.