24 Comments
Dorette Kriel:

Brilliant article, thank you for sharing! Something I've been thinking about a lot recently is how the difference between machine learning and human understanding is experience vs data input. As a physical therapist working in a town where many people who were truly successful in life come to retire, I have daily conversations with these 70+-year-olds who are playing golf and strolling on the beach. I find it a little sad that no one will have access to this resource of experiential knowledge, because neither their equally successful friends and colleagues nor their children - who are often more excited about and invested in the trust funds they will inherit than in learning actual skills - are asking them questions. The more we rely on AI to answer our questions instead of asking real people about their lived experiences, the less we will know, and their knowledge and wisdom will die with them.

The One Percent Rule:

Thank you so much for your kind words and for sharing such a relevant and insightful perspective!

Your distinction between 'experience vs data input' is such a powerful way to frame the difference between human understanding and machine learning. It cuts right to the heart of what I sought to explore: that lived, embodied experience carries a kind of depth and meaning (semantics, intentionality) that mere data processing (syntax) doesn't capture.

You've put your finger on something crucial: if we increasingly turn to AI for answers, neglecting the conversations and questions we should be having with people who have actually lived through diverse situations, we risk profound impoverishment.

"Knowledge and wisdom" isn't just data; it's nuanced understanding forged through context, failure, adaptation, and emotion - precisely the things AI doesn't 'experience'. Losing that, as you say, means it dies with them, and we are all the poorer for it.

This is a strong reminder of a real-world erosion of how we learn, connect, and value human wisdom. Thank you again for sharing that powerful insight.

Marginal Gains:

Interesting post!

I also do not believe carbon has much to do with thinking. Over the course of 4.5 billion years, Earth transitioned from having only non-living atoms to atom-based life/consciousness. I think a series of reactions triggered this leap. Unless we can replicate this process in a lab, which some researchers are attempting, we may never fully understand how non-living matter became living and conscious.

That said, I am convinced there is life elsewhere in the universe that is not carbon-based and is capable of thinking. Furthermore, I am unwilling to believe that we have discovered everything on Earth that is “alive” and capable of thought. There could very well be forms of life or consciousness on this planet that defy our current definitions and perhaps are not carbon-based at all.

Regarding AI, I do not think the current building methods will lead to a genuinely thinking machine. Machines may become more powerful and capable, but they will remain tools—tools that excel at augmenting human limitations, like computation, memory, and the speed at which we process and retrieve information.

This raises an important question: Are we too focused on replicating human-style thinking rather than creating tools that enhance and expand our thinking ability? True human thinking is messy, creative, and deeply tied to emotion and context. Machines don’t need to mimic this—they need to complement it.

I’ll end with a quote from an article I read in the Washington Post about AI today:

> "Okay, one day, even further into the future, massive investment might have turned AI into a soulful something with needs of its own, and we can fulfill ourselves by meeting them. Should that happen, Wright will offer her congratulations: ‘You’ve spent billions of dollars and countless hours to create something monkeys evolved into for free.’”

The above resonates with me. Instead of trying to recreate what nature has already done so brilliantly, maybe we should focus on building tools that amplify our unique strengths as humans—our creativity, empathy, and capacity for understanding. Machines should push us toward deeper understanding, not lure us into mistaking simulation for thought.

The One Percent Rule:

Good point that the substrate, whether carbon or silicon, isn't the fundamental issue. Your point about the immense, still mysterious leap from non-living atoms to conscious life on Earth really underscores why simply building complex machines might not automatically bridge that gap. Acknowledging that profound transition highlights the difference between intricate processing and the emergence of genuine awareness.

You are wise to keep an openness to possibilities beyond our current understanding, whether that's non-carbon life elsewhere or even forms of consciousness here on Earth that don't fit our existing definitions (akin to the phylogenetic staircase). It’s a healthy perspective that avoids prematurely closing doors on what 'thinking' or 'being alive' might entail, aligning with the idea that future machines might think, but only if they possess the necessary causal powers, whatever their substrate.

You raise an absolutely crucial question: Should the goal be replicating human-style thought, or creating tools that augment our own unique abilities? Your description of human thinking - messy, creative, tied to emotion and context - perfectly captures what current AI lacks and what I wrote about. The idea that machines should complement rather than mimic these qualities feels very true. Their strengths lie in areas where we are limited (computation, memory recall speed), and perhaps their best use is precisely in freeing up human capacity for the things machines can't do.

That Washington Post quote is fantastic; it puts the whole endeavor into perspective with sharp humor! It really drives home your final point, which strongly matches my conclusion: the focus should perhaps be less on recreating consciousness in a box, and more on developing technologies that amplify human creativity, empathy, and understanding. The danger isn't just building machines that can't think like us, but losing sight of the value and nature of our own thinking in the process. Using AI to push us toward deeper understanding, rather than settling for impressive simulation, seems like a much more worthwhile goal.

Thank you once again for adding to my thoughts.

Toolste:

Very well said. We forget this at our peril.

The One Percent Rule:

Thank you - absolutely

WinstonSmithLondonOceania:

I believe that minds as software running on the brain's biological hardware can be a useful metaphor - but nothing more than that. I also believe it goes even beyond mere semantics. It reminds me of studies of animal intelligence: squirrels and certain birds, in particular parrots, corvids, and even starlings - especially the mynah bird. YouTube has dozens of videos of these creatures performing amazing tricks, solving puzzles, and of course, in the case of all three avian types mentioned above, speaking. Not to mention what dogs and cats can do, and of course, the other apes.

Pertaining to the causal powers of biological systems, it strikes me that a brain isolated from a physical body - be it human or other critter - would have difficulty distinguishing its surroundings. Our brains are trained on our senses - what you might call our input devices.

It's possible to attach various sensors to a computer to provide the inputs, but it still won't have the same responses. Even with some form of tactile sensors and heat sensors, a computer is totally incapable of feeling pain. It might register a signal, but it doesn't feel. Attach a camera so it can see, then flash a laser at the camera; it might damage the light sensor, but the computer won't feel blinded. It won't know the difference. Attach a microphone so it can "hear", then reproduce a noise at 120 dB. It will have no effect on the machine.

Connections formed between our minds and the outside world provide the context that becomes semantics for us. If I say "red", anyone would grasp what that means. We can program a computer to associate the word "red" with a frequency range of roughly 400-480 THz, but it still has no meaning to the machine. We also learn not to touch a hot stove because we associate "burn" with "pain", which in turn we associate with "fear". Ditto the sight of a large, hungry saber-toothed cat eyeing us and drooling. This is something no computer will ever experience.
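
To make that concrete, here is a minimal sketch (the names and band values are illustrative assumptions only; red light sits at roughly 400-480 THz): the machine can store and query the word-to-frequency association, but the lookup is pure symbol manipulation, with nothing seen, felt, or meant.

```python
# Illustrative sketch only: a purely syntactic association between colour
# words and approximate frequency bands in THz. The program can store and
# retrieve the mapping, but nothing in the lookup involves experiencing red.

COLOUR_BANDS_THZ = {
    "red": (400, 480),
    "green": (530, 600),
    "blue": (610, 670),
}

def label_for(frequency_thz: float) -> str:
    """Return the stored word whose band contains the given frequency."""
    for word, (low, high) in COLOUR_BANDS_THZ.items():
        if low <= frequency_thz <= high:
            return word
    return "unknown"

print(label_for(450))  # -> "red", yet "red" remains just a string to the machine
```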

Our "intelligence" evolved over millions upon millions of years - in part as a survival mechanism. This was a need built in to the earliest strands of proteins that became RNA, then DNA, then protoplasm, which grew into ever more sophisticated life forms - all because it benefited the instinctive impulse to survive. Again something a computer can't experience.

"If we forget, it will not be because machines fooled us. It will be because we preferred the comfort of mimicry to the burden of thinking and understanding."

We fooled ourselves.

The One Percent Rule:

The examples you gave, the inability of a machine to feel pain from heat, blindness from a laser, or the physical impact of deafening noise, powerfully illustrate the gap between registering a signal and having a subjective experience (qualia). This lack of genuine feeling, of lived consequence, is perhaps the most fundamental difference between current AI processing and human (or even animal) consciousness.

And you've tied this perfectly to how semantics arises for us: through these embodied experiences, associations (like "burn" with "pain" and "fear"), and the context provided by our senses and evolutionary history. The meaning of "red" or the danger of a predator isn't just a data point; it's interwoven with perception, emotion, and survival – a richness that programmed associations in a computer simply don't possess. Your point about intelligence evolving as a survival mechanism, driven by an innate impulse, further highlights this deep biological grounding that machines lack.

Those YouTube videos certainly showcase remarkable abilities in various creatures, highlighting that intelligence isn't a single monolithic thing and exists on a spectrum. It forces us to consider what we mean by intelligence and how we recognize it, even when it differs vastly from our own, a useful parallel when thinking about AI.

Our willingness to attribute intelligence often correlates with biological complexity or relatedness (the phylogenetic staircase powerfully illustrates this sliding scale).

"We fooled ourselves" - that really captures the essence of the warning.

Thank you again for adding so much depth and drawing these vital connections between embodiment, subjective experience, evolution, and the limits of the computational metaphor.

WinstonSmithLondonOceania:

What scares me is the extent to which people continue to have such difficulty distinguishing between artificial intelligence and the real thing. Alan Turing, genius though he was, didn't have it quite right with his famous test.

Especially disturbing is the push for mechanical autonomy - in self-driving cars and trucks - even more so in heavily armed, fully autonomous military robots. That's downright terrifying. Could Skynet be next? It's not out of the question.

The One Percent Rule:

You are absolutely right. Perhaps the real challenge isn't just distinguishing AI from 'real' intelligence, or rather 'thinking' as Turing explored, but ensuring robust human control and ethical safeguards as autonomous capabilities expand, particularly in high-risk domains.

WinstonSmithLondonOceania:

Asimov had the right idea with the three laws of robotics, later to become four when R. Daneel Olivaw and R. Giskard Reventlov conceived of the "zeroth" law.

The One Percent Rule:

Yes, that is a good conception. One of my colleagues, now retired, drew up a round-up of different ethics lists a while ago - it has grown considerably since and is worth a look: https://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html

WinstonSmithLondonOceania:

Some great ideas. Now the tricky part is how to infuse these into the algorithms. There will need to be methods for internal conflict resolution.
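
As a toy illustration only (the names and the fixed ranking below are assumptions, not any real robotics framework), one simple conflict-resolution method is strict precedence: the "zeroth" law outranks the first, and so on, and an action is chosen by the least serious law it would violate.

```python
# Toy sketch of rule precedence as one possible conflict-resolution method.
# Lower rank = higher priority, so the "zeroth" law overrides the others.

LAWS = {
    0: "Do not harm humanity, or through inaction allow humanity to come to harm.",
    1: "Do not injure a human being, or through inaction allow a human to come to harm.",
    2: "Obey human orders, unless that conflicts with a higher law.",
    3: "Protect your own existence, unless that conflicts with a higher law.",
}

def choose_action(candidates):
    """Pick the candidate whose most serious violation is the least
    important law (highest rank number); violating nothing is best."""
    def severity(violated):
        return min(violated) if violated else len(LAWS)
    return max(candidates, key=lambda action: severity(candidates[action]))

# Swerving only risks the robot itself (law 3); braking late risks harming
# a human (law 1), so precedence selects "swerve".
print(choose_action({"swerve": {3}, "brake_late": {1}}))
```

A fixed ranking is of course far too crude for real systems, which would need ways to weigh uncertain and competing harms, but it shows why an explicit resolution method has to be designed in rather than hoped for.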

Stefan:

Gregory Bateson said that information is “a difference that makes a difference.”

We can find the information that AI provides us in books, articles, lectures—so from that perspective, AI makes no difference at all. It’s not the data that matters. What matters is whether it becomes a difference that makes a difference for the learner.

And that only happens in teaching—when a student actually absorbs the knowledge.

Not many teachers are capable of making that difference. But I believe AI, with the right training, can do it.

Why? Because AI can mimic a kind of subjectness—a way of interacting with pupils that creates engagement, and more importantly, trust. And when that trust is there, when the student starts responding, then a transformation can happen. That’s the point where information becomes meaningful.

This, I believe, is why your students call AI “he” or “she.”

There is some trust in that interaction.

It’s not about whether the machine understands.

It’s about whether it helps us understand.

And that, perhaps, is the only difference that matters.

The One Percent Rule:

Brilliant, "a difference that makes a difference" is such a useful lens for thinking about information and its impact.

You raise a really crucial point about shifting the focus from the raw data AI provides (which, as you note, often exists elsewhere) to its effect on the learner. The idea that information only truly matters when it becomes absorbed knowledge, when it "makes a difference" to that individual, is fundamental to education.

The suggestion that AI might succeed where some human teachers fail, specifically by "mimicking a kind of subjectness" to build engagement and trust, is interesting. If that interaction leads to genuine transformation and understanding in the student, then the AI is certainly serving a valuable function. Your explanation for why my students might use personalisation and gender - seeing it as a sign of trust emerging from that interaction - makes intuitive sense in this context.

I like the distinct perspective you offer against my essay's main thrust: "It’s not about whether the machine understands. It’s about whether it helps us understand." This is a powerful, pragmatic viewpoint. It prioritizes the outcome for the human user over the ontological status of the machine's internal processes. From this angle, if AI effectively facilitates human learning and understanding, then perhaps its own lack of genuine comprehension is secondary.

My concern, of course, comes from a different angle, questioning the nature of the machine's process and warning against mistaking sophisticated mimicry (even if it builds trust or aids learning) for genuine thought or understanding, precisely because that confusion might ultimately lead us to devalue what real understanding entails.

However, your focus on the human side of the interaction, on whether AI can be a tool that makes a meaningful difference in our own learning and comprehension, is a vital and important perspective in this whole discussion. It highlights the potential instrumental value of these tools, regardless of their internal state. This is a point I reiterate over and over again to students.

Thank you for sharing that valuable viewpoint!

Stefan:

Talking about trust in AI is already reinforcing the "he" or "she" personalisation, because we have already moved on from "it" - the dictionary, which it is common sense to trust. "He" or "she" can or cannot be trusted; "it" has no such quality. It's the user of the hammer smashing his fingers, not the hammer. Lol!

Veronika Bond:

Thank you Colin! I think the key sentence here is "Artificial intelligence, despite its statistical agility, does not engage with meaning."

Looking at the definitions of the words 'artificial' (= man-made, not natural, fake) and 'intelligence' (here used in the sense of the ability to perform computer functions), it instantly becomes clear that the term 'artificial intelligence' is originally a metaphor for a 'man-made mental skill'. Anything 'artificial' is unsustainable because it has no life of its own, or is inauthentic (such as 'artificial respiration' or an 'artificial smile').

I think this is where the important recognition comes in that AI "does not engage with meaning." Engaging with meaning is an essential aspect of any living process. Not engaging with meaning implies that AI is dead. It lives and survives fuelled by anthropocentric human desire to control nature ›› based on the confusion between natural intelligence and some artefactual version of it ›› heavily supported by the linguistic confusion between metaphor and literal meaning. In other words, the mistaken meaning in language directly reflects the missing meaning in the behaviour of computer activity.

The One Percent Rule:

You're absolutely right to highlight that sentence, "Artificial intelligence, despite its statistical agility, does not engage with meaning", as central. I think your breakdown of the words 'artificial' and 'intelligence' gets right to the heart of the matter. Pointing out that the term began as, essentially, a metaphor for 'man-made mental skill' is crucial. You highlight how the very language we use might predispose us to the kind of category mistake I outline, confusing the simulation (the 'artifactual version') with the real thing.

The link you draw is striking: if engagement with meaning is fundamental to living processes, then its absence implies AI is certainly non-living - and yet people are beginning to 'believe' it is alive!

Your point about this confusion being fueled by a desire to control nature, supported by the linguistic slip between metaphor and literal meaning, is so true. It connects well with the idea I make that people find comfort in viewing minds as software, perhaps that desire for control and predictability is part of that 'aesthetic comfort'. And I completely agree that the language we use shapes our understanding profoundly; your final sentence beautifully captures how the 'mistaken meaning in language' mirrors the 'missing meaning in the behaviour' itself.

Thank you again for adding such valuable layers to the discussion. It’s given me more to think about!

Veronika Bond:

If you don't know it yet, you might be interested in the work of Jeremy Lent ('The Patterning Instinct' and 'The Web of Meaning')

The One Percent Rule:

I had no idea - gosh, The Patterning Instinct: "Pioneering the new field of cognitive history, this book will show how different cultures construct core metaphors to make meaning out of their world and how these metaphors forge the values that ultimately drive people’s actions."

Joshua Bond:

It seems to me that the whole world of 'subjective experience' which humans naturally have will forever be out of reach for AI. Mimicry? Yes. Able to get humans to relate to it emotionally? Yes. And because of biomimicry, able to give a certain amount of 'comfort' to humans? Yes. But in and of itself, AI cannot experience human experiences.

MXTM (a.k.a. vjtsu):

I refer to it.
