Thank you for this very important post. It captured much more eloquently many aspects of AI I have been worried about. At times I think about this in terms of species of personality/character; that is, given that we seem to need more Ed Dijkstra-types in the world today, is his type an endangered species? Would types like him be able to flourish in “ecosystems” beyond a small and irrelevant circle of friends? I worry that in a world infected by a “virus” of AI dependency he would either go unnoticed or seem positively crazy and threatening, like the philosopher in Plato’s Cave. Thanks again! I’m going to share this widely.
Thank you, Norman. I would certainly like to see more Ed Dijkstra-types raising awareness and working on solutions to improve cognition with AI, rather than the gung-ho mentality of building systems capable of performing all economic tasks, as OpenAI's mission states!
Very nicely put about Plato's Cave. This is a problem indeed: how to build a wider ecosystem when you are inclined to isolate... Geoff Hinton has put his head above the parapet, but he is focused on AI destroying humanity, whilst he would probably be better suited, given his large platform, to communicating about the downsides of AI for thinking.
I'm very grateful for your sharing. This is a hugely important matter.
This is the first article I've read of yours, and it's both well written and, well, informative and entertaining. I will certainly invest time reading your other articles. So, thank you!
I have, however, doubts about your concerns on AI. Whenever someone says to me: "You know, I was thinking--" I interrupt by saying, jokingly: "It hurts, doesn't it, thinking."
I'm sorry to say, most people are either incapable of critical thinking, or they are just fine letting others do their thinking for them. Case in point: a good third of Americans are now blindly following a man who is, by any critical measure, clearly deranged.
Which raises a question: as far as AI manipulating public opinion is concerned, I think the manipulation itself is far less worrisome than the direction in which that opinion is manipulated. If only a majority of people had been taught by an AI the current, lingering interpretation of democratic values, or the now rapidly disappearing understanding of democracy itself, we would not be in the trouble we're in now.
It is, in my opinion, not so much AI manipulating public opinion that should trouble us, but the pervasiveness of the aberrant median public thinking on which AIs are trained. Which is another way of saying: AIs are merely a reflection of human thought at any given time, before they reinforce that thought by regurgitating it.
To put it in computer terms: garbage in, garbage out.
It is, in my opinion, far more critical that we find ways to protect AI models from incorporating 'misinformation' than that we protect public opinion against being influenced by AI. I don't know how, but the companies building AIs should be forced to provide insight into the data they train their AIs with, as well as to train their AIs on a baseline of objectively 'true' information.
If we can somehow accomplish that, AI could actually function as a bulwark against the dangers of someone like Trump, the same way newspapers started protecting us by keeping those in power honest (on the condition, I must say, that journalists try to provide objective news coverage).
Finally, talking about the subset of humanity who can think critically - there too I don't share your concerns, simply because critical thinking is, for those who are capable of it and enjoy it, a goal unto itself. For those of us who like to think critically, it is a need, not a burden that we would rather outsource to an AI, any more than to other human beings.
Good lawyers will find new interpretations of the law to win cases, regardless of what an AI advises. Writers will write books, politicians will find original policy solutions, scientists will open up new worlds - because it is their nature to do so, regardless of AIs.
And for the same reason - AIs being a reflection of the current state rather than the creators of it - humans, in my opinion, will be better at original thought than AIs, at least as long as AIs need to be trained with human thought.
AIs will, at least for the foreseeable future, always be one step behind...
Thank you, I truly appreciate you taking the time to engage with my article in such a meaningful way. The decline in critical thinking (have we ever had a large % of people who truly engage critically?) should be a concern for society.
You have raised several crucial points, and I agree that the 'garbage in, garbage out' principle is absolutely fundamental to the AI debate. My concerns about AI manipulation stem less from the fact of influence and more from the content of that influence, as you rightly pointed out.
There are many good papers on how people 'think' with respect to politics, and they are quite disturbing; as I say above, few engage critically. I have several posts on stupidity - a derogatory term, but one that illustrates the point - drawing especially on the work of Dietrich Bonhoeffer and Carlo Cipolla. https://onepercentrule.substack.com/p/stupidity-our-biggest-threat?utm_source=publication-search
You are right that those who genuinely enjoy and excel at critical thought are unlikely to relinquish it to AI. It's a fundamental drive, not just a task. I agree that AI can be a tool, and will not replace the human drive for creativity and innovation.
I'm particularly interested in your point about AI being a reflection of current thought and always being a step behind (well said). That's a fascinating perspective, and it raises questions about the nature of true originality and the potential for AI to transcend its training data, as in the case of move 37 in AlphaGo's championship game. AlphaFold, however, found patterns in the data to predict protein structures - that is not 'original thought' per se. So we have some time to stay ahead.
You have prompted me to dust off the layers of an essay I have been cogitating on for a while about those who think critically. Thank you.
Thank you for your extensive reply! In answer to your question...
Go has strict rules, and the chemistry behind proteins and their workings is hard science. Even if an AI makes an 'original' Go move, or predicts the workings of a new protein, that move and prediction are one hundred percent within the bounds of both Go and chemistry, and so they are the result of the training set.
Human behavior, both of individuals and of groups, is, drilled down to its essence, driven by human morality, which is incredibly diverse and unbounded - to some extent the opposite of static (unfortunately?).
The morality of the Germans under Nazi rule was completely different than the morality of the Germans living in the Weimar Republic just a few years before. My morality differs from yours, however subtly, and from anyone else walking this Earth. The morality of Indians differs from that of Americans. My own morality now differs from when I was twenty. You get my drift.
There is something inherently chaotic about human morality (or religion, or ideology for that matter), and if not that, the range of what people consider 'good' or 'bad' is incredibly varied and changeable. The outer bounds are - frighteningly - far apart.
If an AI wants to predict things about humans and human societies - i.e. wants to come up with an original thought, or, more specifically to the question, before any of us humans do - it first has to learn human morality, and more specifically, be able to notice change in that morality (before we do).
I don't see how we can train an AI to accomplish that, at least not in the foreseeable future.
So, two children are playing, and one of them takes away the toy of the other. Then the first child takes it back, gently, and without touching the other child. Or the first child takes it back and gives the second a slap on the wrist. Or the first child shoves the other child aside while taking back the toy. Or the first child starts crying and behaving in a way so that the other child returns the toy.
Some of us find all reactions of both children acceptable. Others find one or more reactions unacceptable. Yet, by correcting or encouraging certain behavior, we, on the whole, raise children to adulthood - we make moral beings out of our children.
I don't see how we can train an AI other than by raising them to adulthood in more or less the same way, and I don't see how AIs can create an original thought about our society before we do without such schooling...
Thank you again for this incredibly insightful follow-up! You've articulated a crucial distinction between the 'hard' science domains like Go and chemistry, and the fluid, ever-shifting landscape of human morality. Thank you for that. You are right: whilst the developers of AlphaGo likened the move to intuition, it is more probable that the intuition in this case was hard rules and experience (which I suspect Herb Simon and Danny Kahneman would have agreed with).
Your point about the inherent randomness and variability of human morality is true. The example of the shifting morality in Germany between the Weimar Republic and Nazi rule powerfully illustrates this; it reminds me of the book Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland by Christopher Browning. The idea that AI would need to 'learn' and predict these changes, essentially becoming a student of human moral evolution, is a fascinating and challenging concept. I know the labs are working on this, but then you get Anthropic claiming they are "growing a brain" and do not understand how it is evolving - so the challenge will be how to control what you do not understand!
I completely agree that the nuances of human interaction, like the children's toy example, are incredibly difficult to codify. Our responses to such situations are shaped by a complex interplay of cultural norms, personal experiences, and individual temperament. The idea of 'raising' an AI to understand these nuances, as you put it, is a compelling analogy, and I know people like Gary Marcus and Yann LeCun promote this idea - maybe Ilya Sutskever will build this?
That is a fundamental limitation of AI: its reliance on past data in a domain where the rules are constantly changing. Then again, a lot of synthetic data is also being developed, and I wonder where that takes us. True, for now, whilst AI can excel at pattern recognition within established systems, it struggles with the unpredictable and subjective nature of human morality - and indeed sometimes even with context, as I wrote about my group's experiments yesterday.
This raises a profound question: can AI ever truly understand or predict human behavior in its entirety, or will it always be limited by its inability to grasp the complexities of our moral compass? This week I read the comment 'I'm more concerned about human morality than AI morality,' which is a fair reflection; nevertheless, we must do more to solidify AI morality.
Oh, and if anyone can close in on human morality, I think it's Ilya Sutskever, in his lifetime. Brilliant man...
You too - thank you for the exciting follow up! Fascinating stuff, isn't it?
In that context - I'm particularly interested in the progression of predictability into chaos.
We talked about chemistry and AI finding new proteins with certain properties from scratch. We have nailed down scientifically, in the laws of physics, the laws of chemistry, and advanced mathematics, how atoms and molecules behave under certain physical conditions, and how some molecules behave when combined into quaternary protein structures. We can feed such info to an AI, and then that AI can predict new proteins with certain properties. But that is solely because atoms and molecules will always behave that exact same way under the same physical conditions. They are bound by strict scientific rules. All biochemistry - life - depends on that predictability.
And then there is Brownian motion. We put water in a bowl and add a tiny grain particle to it. We're still talking molecules - simple ones, actually, water molecules - but the movement of that grain particle is chaotic, and cannot be predicted, not by us, nor by any current or future AI, not in a trillion years. It is mathematically impossible to predict the movement of that particle.
Somehow a new property has been added to the system, one that is inherently unpredictable.
We know fairly well how clouds form, and storms, and why it starts raining. But it is mathematically impossible to predict the weather with any kind of certainty beyond sixty percent a week from now - and we will never, not in a million years, be able to do that.
Chaos has been added to the system.
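To make that concrete, here is a minimal sketch of what "chaos added to the system" looks like in practice - the Lorenz equations, a standard toy model of weather-like dynamics. Everything in it (the starting values, the step size, the crude integration) is invented for illustration, not taken from this thread:

```python
# Purely illustrative sketch: the Lorenz system, the textbook toy model
# behind "weather is chaotic". Two runs starting one part in a million
# apart end up in completely different states.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz equations (enough to show divergence)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # a tiny perturbation of the starting point

for _ in range(5000):
    a, b = lorenz_step(a), lorenz_step(b)

print("separation after 5000 steps:", np.linalg.norm(a - b))
# The two trajectories end up far apart even though they began almost identical:
# any small error in the initial measurement swamps the forecast.
```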
You know, one of the most awesome experiences I've ever had was when one of my biology professors entered the classroom with a bleak, ugly-looking plant in one hand, and an exquisite, beautifully formed, colorful one in the other. Then he started talking about the exquisite one: that it was perfectly adapted to its environment, was supremely efficient at photosynthesis because it lived in a biotope with continual mist, and was pollinated by a series of specialized species of bees. The other plant was rather ordinary, wasn't particularly good at photosynthesis, could survive in dry and wet biotopes, and could be pollinated by a score of insects. Then he asked which of the two was still around. The latter, of course.
The better any species is adapted to its environment, the shorter its survival on this our planet, because it can take one million years, or twenty million, but environments change and eventually disappear. Always.
The Earth as a biological system is inherently chaotic.
I think that the human mind, and human culture, have an attribute of chaos to them, because it is chaos that is the enemy of specialization, and specialization leads to extinction.
This raises the question, not whether an AI will ever be better at predicting the outcome of human endeavors, but whether making predictions is at all possible beyond a certain time horizon.
I think it is not.
It is my strong conviction that no one could have foreseen that a market seller, frustrated by, and angry about, having to pay off corrupt police officers, would set himself on fire and thereby instigate the Arab Spring. At most, people might have felt that things were brewing, or about to explode sometime in the future. But that self-immolation was, in a way, an unforeseeable, freak event. Random. Chaotic in nature.
I think that AIs will fall victim to the same law of nature that rules us all - at a certain point of complexity, chaos is added to the system (human culture), meaning that predictions can no longer be made with any degree of certainty.
This notion is, especially for computer coders, difficult to live with, because computers are defined by the fact that they will always give the same outcome when running the same code. I mean, that is their very function.
An AI, trained on a dataset that is complex enough to have attained the attribute of chaos, might not give the same answer two days in a row.
I think the future of AI lies not in single, independent AIs, trained with ever more complete and complex datasets, on ever more powerful computers (quantum computers), but in combinations of AIs trained on the same datasets, running parallel and communicating amongst themselves, to be able to ignore the outliers when considering a certain problem, or, to neutralize the one AI that 'just isn't right in the head.'
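A minimal sketch of that ensemble idea, with made-up numbers purely for illustration: several models answer the same question in parallel, and a robust aggregate - here simply the median - ignores the one that "just isn't right in the head."

```python
# Purely illustrative sketch of the "parallel AIs" idea (hypothetical values).
import statistics

def ensemble_estimate(predictions):
    """Aggregate parallel model outputs; the median is insensitive to a single outlier."""
    return statistics.median(predictions)

# e.g. five hypothetical models estimating yearly traffic casualties
predictions = [1180, 1210, 1195, 1205, 4800]  # the last model is the outlier
print(ensemble_estimate(predictions))  # -> 1205; the outlier is simply ignored
```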
They might be able to predict with a fair amount of certainty the number of traffic casualties when car producers are forced to implement certain safety measures. But then they will probably be completely wrong, because humans, knowing the car is safer, will start driving twice as fast...
You've raised some truly fascinating points. Your examples of Brownian motion and weather patterns perfectly illustrate the inherent limitations of predictability in complex systems. The biology professor's example is a powerful analogy, and you are right, the Arab Spring example is very telling.
I have not thought enough about AI encountering the same limitations as humans due to the introduction of chaos in complex datasets - a 'coder's mind' on this - but it is a crucial consideration.
I wonder how the ensemble methods DeepMind uses to get probabilistic weather forecasts will play out long term; still, I sense you are right on the accuracy and consistency. Self-driving cars have to deal with unpredictable road conditions, other drivers, and weather. AI is crucial for robust perception and real-time adaptation, yet it is not tried and tested in many diverse weather areas. I know that DeepMind is working towards building AI that can handle uncertainty and adapt to change. Your point about "AIs trained on the same datasets, running parallel and communicating amongst themselves" is probably the most plausible.
Gleick's book on chaos is a favorite; I must dust it off.
Hey Colin, good post, per usual.
I've got a personal experience with AI and critical thinking that might interest you. I wanted to practice thinking about international relations using Bayesian reasoning. I began working with ChatGPT to do so.
At first ChatGPT gave me all the answers. Not very helpful, since I didn't already have the training to keep up with all it was saying, nor to question its output. But then I began to have it guide me through exercises one question at a time. This was much better. I had to think through the problems step by step, and on more than a few occasions I ended up in long back-and-forths about the qualities of evidence and so forth.
These back and forths were like the ones I have with my students -- spontaneous, driven by skepticism or curiosity about a claim. The only time I found the AI could not help but stumble over itself was when I got into a debate with it about the labour theory of value...
Anyway, I at least feel like I've gotten a lot out of it. Now I regularly think like a Bayesian when reading the news and so forth. I am also having fun applying it to historical counterfactuals, too.
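For anyone curious what that Bayesian habit looks like mechanically, here is a minimal sketch; the probabilities are invented for illustration and are not from my ChatGPT sessions. The idea: hold a prior belief in a claim, weigh new evidence by how much more likely it is if the claim is true, and update.

```python
# Purely illustrative Bayesian update (invented probabilities).
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# e.g. a 30% prior in a news claim, evidence twice as likely if the claim is true
posterior = bayes_update(prior=0.30, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(round(posterior, 2))  # -> 0.46: belief rises, but nowhere near certainty
```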
In short, I agree with your article -- thinking people will need to be wary of trusting the machine. That said, my experience suggests there are already some pretty simple ways to use it that can help. Asking it to slow down and to proceed with the aim of having me participate in thinking with it has made my use of the service much, much better.
PS: Good way to design a workout routine. Tell ChatGPT your goals, then have it interview you, one question at a time, about your current fitness and time constraints. I've been using the workout regimen it designed for me for over a month now, and it is great.
Cheers!
Hi Zane. Thank you.
Your example is excellent and exactly the way that we should be using LLMs. And absolutely, we should be teaching this method to students too - it prevents laziness and atrophy. I know some education LLMs are working toward this, and you can set up a custom GPT to work this way too.
What a terrific outcome to think like a Bayesian.
I should try your tip on fitness, I have triathlon goals for June and just track them via Garmin, but having an accountability partner would be very useful.
Getting to the heart of the matter: in the field of human (and humane) thinking, the challenge posed by AI has been very clearly articulated.
Thank you, Joshua. Unfortunately, there is no putting it back in the box - but we need to raise concerns about how the box is built and about its impact!