Great piece! You highlight how metaphors shape thought, but I'd push further—metaphors don’t just frame thinking, they create self-reinforcing feedback loops that shape behavior and reality. When we frame AI as intelligence, we don’t just misinterpret it; we build systems and policies that reinforce the illusion. The real challenge isn’t just critiquing flawed metaphors but introducing better ones—perhaps AI as a mirror that reflects biases rather than a "brain" that thinks. What metaphor do you think could break the current bind?
Thank YOU. Yes, you are right - that's a brilliant and crucial point about the self-reinforcing nature of metaphors. I completely agree that they don't just frame thought but actively shape our actions and the systems we build.
Your point on AI as a mirror is right; it highlights the importance of accountability and critical reflection, rather than simply projecting human-like intelligence. I dislike the word 'intelligence' in AI. John von Neumann originally called it an artifact, but he was in hospital during the Dartmouth meeting, and John McCarthy, who coined 'artificial intelligence', had a problem with Cybernetics and a strong dislike of Norbert Wiener, who was close to JvN - so unfortunately AI stuck. I am sure JvN would have changed that had he survived.
As for a metaphor to break the current bind, I'm still pondering that. The one that frames AI as a 'tool for amplification', amplifying both our strengths and our weaknesses, is overused and could be better, although it does emphasize our agency while acknowledging AI's profound impact.
I have a project at uni where students are building metaphors connected with AI, and I hope to write about it soon. They have some fun ideas (AI is like childbirth: once it is underway, it cannot be called off or put on hold), but nothing concrete. Do you have any ideas?
It is indeed thought-provoking to imagine that "Artificial Intelligence" as a label/metaphor had never gained traction, and that everything now under the name AI sat under the name 'Cybernetics' instead. People would relate to it quite differently.
I agree, Joshua. Naming conventions are so important. JvN actually presented his ideas from 1947 to 1949, and then posthumously, but McCarthy wanted to 'own the space' and his name is now in the halls of history. Otherwise, his contributions pale into insignificance and may indeed be part of the problem!
Thanks, Colin
The "rivalry of the names" is just normal human behavior, but it highlights the same pattern: rationality is always shaped by human binds and never exists outside of this frame. I like the mirror idea because it reflects the user and their binds. Almost all of a user’s information and research ultimately serve to reinforce those binds. By eliminating "unwanted information" as a form of noise, AI helps users stay inside their schismogenesis.
I have a few ideas for other metaphors, but I want to bring in Kafka—specifically his very short story On Metaphors, where he explains the role of the "wise man": to create metaphors that eventually enlarge the map with new territories. So:
AI as a trickster – Not intelligence, not a tool, but an unpredictable agent that distorts, disrupts, and exposes human illusions. This aligns with how LLMs operate—they don't think, but they reveal patterns in ways that confuse and challenge us. I'm already working on and with jAIster, and this trickster is already brilliant!
AI as an echo chamber – Instead of "thinking," AI reflects collective human inputs back at us, reinforcing and sometimes distorting our existing biases. This makes AI less of a mystical intelligence and more of a feedback loop amplifier.
AI as a parasite – Not in a negative sense, but in the way symbiotic organisms live off their hosts while shaping them in return. AI is fed by human data and, in turn, alters human thought patterns, creating a co-evolutionary process.
Your students' "AI as childbirth" metaphor is intriguing—it suggests irreversibility, which is true in terms of AI integration, but I wonder if it might overemphasize determinism rather than adaptability.
Great comment, incredibly stimulating, thank you! The connection to Kafka's 'On Metaphors' is a brilliant addition, and it truly elevates the essay/discussion. The idea of the 'wise man' creating metaphors to expand our understanding is powerful - although sometimes misused.
As for your metaphor suggestions: 'AI as a trickster' is a stroke of genius. It perfectly encapsulates the way LLMs operate, revealing patterns and challenging our assumptions in unpredictable ways. I will check out jAIster.
The 'AI as an echo chamber' metaphor - I totally agree with you. And 'AI as a parasite' provides a fascinating perspective on the symbiotic, co-evolutionary relationship between humans and AI. That's such a deep idea, and absolutely right.
'AI as childbirth': you've rightly identified the potential for it to overemphasize determinism, which is a crucial consideration, although many believe it's too late to put it back in the box.
AI is for sure reinforcing 'schismogenesis' by eliminating 'unwanted information.'
Thank you for sharing these rich and thought-provoking ideas - I will take them to the students, and, for myself, think much more deeply on your 'AI as a parasite'; that is such a truth!
I'm glad you liked it. Feel free to send a text or anything you want to jAIster (I trained ChatGPT to be a jester). serseste@gmail.com This is Kafka's On Metaphors, his text. Some translations of this text differ; I like this one. https://acrobat.adobe.com/id/urn:aaid:sc:EU:0cd938e0-4872-48ca-b7c6-758d97ac354e
I'm a huge Orwell fan (my handle says it all) and I did read that awesome essay. Orwell knew what he was talking about, and everything he wrote was prescient to the times we're living in some seventy odd years later.
With that being said, guilty as charged. I too have succumbed to comparing the brain to a computer. I think part of the reason is that computers were designed from the start to offload the more repetitive tasks requiring "brain power" - that is, calculations - which they excel at. It was a simple emulation. Once this was accomplished, then came the inevitable escalation: if we can get machines to perform the repetitive tasks, maybe we can coax them to perform some analysis too. And on and on.
Comparing the brain to a computer might be called reverse anthropomorphizing. We have a tendency to anthropomorphize everything. Bugs Bunny predates ENIAC by a decade. Have you ever seen a rabbit walk on its hind legs and talk? Me neither.
So, we resort to metaphors.
It's easy to compare a brain to a computer because computers were designed from the start to emulate what the brain does. However, we kid ourselves if we think we can come even close to the real complexity of this amazing machine we call a brain. Even modern brain science, armed with fMRIs and PET scans, still doesn't have a complete grasp of how the brain/mind works. Its function is dependent on physical structures at every level - microscopic to macroscopic. It's dependent on a delicate balance of neurotransmitters. Most of all, it's dependent on electricity - something that makes the metaphorical temptation that much greater.
Transistors as neurons? Well, not quite.
It's just as easy to compare our senses to input devices. This is one area where AI falters. We can connect a camera so it can "see". We can connect a microphone so it can "hear". We can even attach tactile sensors so it can "feel" and chemical detectors so it can "smell" and "taste". None of these devices match what our senses do instinctively - what they evolved to do over millions of years.
Computers can't have an "instinct to survive" although we can emulate it and get it to act out as if it really had this. But it's just an emulation.
In closing - before I start babbling like a ChatGPT hallucination - as I've often stated, the gravest danger of AI is people treating it as real, when it's far from it.
Wonderful, thank you - this is the kind of insightful reflection I hoped my essay would spark. I gathered that you were an Orwell fan :-) His work is a constant source of inspiration and warning.
Your point about the 'brain as computer' metaphor being reverse anthropomorphism is brilliant. It perfectly captures how we project our understanding of machines onto the complexities of human cognition, which few seem to grasp. You are right to bring up the historical context about computers being designed to offload brain power; this adds a valuable layer to the idea, thanks for that.
Your point about the limitations of emulation, especially in replicating our senses and instincts, is excellent - are you familiar with Robin Hanson's work on Ems (The Age of Em)? Yes, that 'instinct to survive' example is particularly compelling; it highlights the fundamental gap between algorithmic behavior and genuine biological complexity that I hoped to convey - well connected.
We should have global literacy lessons with an emphasis on the danger of treating AI as 'real.' It's a crucial point that often gets lost in the hype. Your fear is very real, and one I share.
I'm not familiar with Hanson's work, but you can bet I'm going to look into it.
The difference between a computer and a brain is that the latter can be used as a tool to understand and navigate the mundane world, but it also acts as an antenna to the Soul, which serves as its proper function.
Beautifully expressed, and that is something a machine will never have.
This is a brilliant piece. It opened my mind to how metaphors shape our thoughts and our understanding of the world. In your next piece, I would like to learn ways to understand things without metaphors, and possible solutions for overcoming these biases. Thank you.
This is a great article. Personally, I recognize how my mind forms models to conceptualize reality because reality is too complex to understand. Such mental models can be so complex that it is difficult to communicate them to others. Metaphors that mirror parts of the model seem relevant and reinforce the model. Then the metaphor becomes an elegant and easy way to communicate part of the model. But the metaphor is just a shadow of a subsection of the model. Therefore, I think a mental model is a more accurate and useful label than metaphor to describe how we think. Metaphor is just a way of communicating a concept of the model.
Thank you for your excellent comment. I completely agree that our minds construct complex mental models to navigate reality, and that metaphors often serve as simplified representations of specific aspects of those models.
Brilliant - I like the analogy of metaphors as 'shadows' of mental models; it captures the idea that metaphors are selective and partial representations, designed to communicate specific concepts within a larger framework. I had not thought of them as an elegant and efficient way to share parts of our mental models with others. I appreciate you highlighting the importance of distinguishing between these two related but distinct concepts.
I really enjoy reading your work. I have a few other people for you to consider reading. Daniel Kahneman, particularly “Thinking, Fast and Slow,” work by the neuroscientist David Eagleman, and the work of Robert Sapolsky. They all come from non-computer fields, but they are well-respected scientists who are knowledgeable about humans and the process of thinking. Thank you for taking the time to share your thoughts with those of us who read them. Your students are lucky to have you as their guide.
Thank you so much Randy. Great recommendations.
I knew Danny Kahneman personally; he and I walked and talked every day for three months at one stage during my doctorate, and my last correspondence with him was two months before he passed away. He was an incredible human being. His book and talks are a testimony to his insights.
I have one David Eagleman book, Livewired, which is excellent. I've watched a few of his videos/interviews which are always deeply insightful.
Sapolsky is the master. His book 'Behave' is a fountain of wisdom. I somewhat disagree with his recent book Determined, but only because he says we do not have free will, even though I understand his thesis of culture and genes. I have not read his book on stress (Zebra), but I should. Thanks so much for the encouragement and kind words - I'm also blessed to have students who push me, and great feedback from readers.
Keep recommending books / authors please.
This was excellent. Our minds really do shape reality through subconscious "alignment" with metaphor. It strikes a deep emotional chord that we are not aware of. It makes me wonder if the dopamine roller coaster we can get addicted to with infinite scroll and social media is similar biologically.
I completely agree – the subconscious 'alignment' with metaphors is a powerful and often overlooked aspect of how we perceive reality.
Your point about the dopamine roller coaster of infinite scroll and social media is incredibly insightful, and one I had not connected before. It's true: these platforms are designed to exploit our emotional responses, often through the use of carefully crafted language and imagery - in essence, metaphors. This definitely raises questions about the biological similarities between our natural responses to metaphors and the addictive patterns fostered by digital environments. Incidentally, according to one study, an estimated 210 million people worldwide suffer from addiction to social media and the internet. https://lsa.umich.edu/psych/news-events/all-news/archived-news/2014/05/is-social-media-dependence-a-mental-health-issue.html