"Sharpen our critique" yes yes more of this please God. I am wearying of circa 2022 or 2023 pieces appearing in the NYTimes now, in 2025, with the same old tired laments.
It’s the relationship piece that makes me the most nervous, especially with young people. We already, as a society, spend so much time online, that the allure of having “someone” out there in the void to walk you through life at the expense of human relationships is troubling to say the least. On the critique side, AI can already do some of those things (I.e. identify unspoken assumptions and stakeholders, etc…) IF you ask it to. But the sychophancy issue is a real problem, even with experienced users. I’ve found some workarounds but it’s very, very easy to fall into the trap. Who doesn’t like being agreed with? Thx for your thoughtful posts.
Thank you Stephen. You have highlighted two currents that I find concerning. The prospect of frictionless companionship replacing the 'mess and maintenance' of human relationships is a profound social risk.
And your distinction regarding critique is precise. The potential for critique exists, but the incentive structure of the tool pushes relentlessly towards sycophancy. The user has to provide all the friction and discipline. Your own experience with finding workarounds confirms this perfectly, it's a constant, conscious effort.
It makes me wonder: what would it take to build an AI where provocation, not agreement, was the default? I suspect the user numbers would be a fraction of what they are now.
I have custom built GPT's specifically designed to challenge my thinking, question my logic, offer alternatives, etc.... They are versions of George Costanza's Opposite Day to counter sycophancy. There are lots of people using AI like this in various fields - you don't hear much about it in education. I agree that time spent on LLM's would likely go down though I fear that the number of users is here to stay. It is insidious that the design, just like all the other algorithmic platforms (Netflix, online games, Instagram, etc...) is to encourage engagement and, therefore, time on the platform. The irony, of course, is that these companies are hemorrhaging money with every query so it's actually slightly counter-productive to keep people mindlessly using them.
The 'George Costanza's Opposite Day' GPT is the perfect name for a sycophancy-killer; it’s a fantastic, practical application of the 'provocation engine'. IIt is a great idea to have created such GPTs.
Your observation that this kind of critical use is largely absent in education is both true and deeply concerning. It highlights how the path of least resistance becomes the default path for most. I ran training programs for a large number of Professors and staff at the University of Warsaw and suggested GPTs, but only a handful implemented them.
And the economic irony you point out is a brilliant twist. Unlike social media, where engagement is pure profit, here it's a direct cost. It suggests this current 'free-for-all' or low cost is a temporary and unsustainable phase, likely a land grab for users before the true cost is passed on, and the models are further optimized for low-cost, superficial answers rather than deep, provocative thought.
The most balanced, non-luddite treatise I've heard to date. Brilliant. Our "overlords" are unchecked. Ai is unchecked. Ai is in the hands of Our "overlords". What could possibly go wrong???
Thank YOU so much Leah. Your rhetorical question, 'What could possibly go wrong???', is perhaps the most important question of our time. The power structure is what makes the 'softer' issues of cognitive decline and outsourced judgment so critical.
An unchecked tool in the hands of the 'overlords' is a classic threat. What feels new is the risk that the rest of us slowly forget how to check power in the first place, because we've allowed the tool to do our thinking for us. It's a vicious cycle.
Thank you Cathie - the Orwell line captures the human impulses that will always seek out the most efficient tools of control.
'What answers will precede any questions as thinking is muted?' that's the perfect, haunting summary of the risk. It's a world where curiosity doesn’t even have a chance to form before it’s smothered by a readily available answer. It's exactly how the 'beautiful process of learning to live with questions' gets short-circuited.
What strikes me is how the advent of AI parallels the splitting of the atom. Fission provides a powerful energy source that produces no carbon compounds. It also makes possible construction of a doomsday device. Taking this analogy to it's logical next step brings us to the parallel of AGI with fusion. As always, the big problem is profit motive. What's most profitable is often not what's best for humanity. Can we put our collective foot down in time?
You are right. Both technologies forced humanity to confront the fact that our ingenuity had outpaced our wisdom.
The 'doomsday device' with AI, as my essay tries to argue, isn't a single mushroom cloud, but a slow-motion erosion of the very cognitive faculties we would need to manage such a powerful technology responsibly. And as you correctly identify, this erosion is accelerated by a commercial imperative that has little incentive to promote friction or deep thought.
As for your final question... I don't know if we can act in time. But I believe the attempt is a moral necessity. The first step in putting our foot down is to collectively agree that there are human values, like judgment, curiosity, and connection, that should never be subject to optimization.
Speaking for myself only, I will not personally contribute to weaponizing AI. Quite the contrary, I'll fight it tooth and nail anyway I can. When I refer to "they" I mean a group that excludes me. Hence, when I say "us", I mean some amorphous group that does include me.
Collectively, there will be some of us on one side, some of us on some other side, some on yet a third side, etc. For example, I would never say "we" are MAGAnuts - because not all of us are. Indeed, I'd say most of us aren't.
Thus, when I say "they", pertaining to the topic at hand, I mean the purveyors of controlling tech, especially the "dark enlightenment" types such as Andreessen, Theil, MuskRat, Zuck, Bozo Bezos, and including the likes of Vance.
This might all sound like semantical nitpicking, but it's important to identify the specific culpable parties who are actually doing the weaponizing. Most of us are the unwitting victims of it!
I agree with Winston on 2 and 3 - but on 1 - we must find an extra taxation for AI... does that mean we will? I think it will be the only option in order to provide a UBI
FFS - this is an incredible piece of writing. Thank you. Would be reassuring to confirm it’s not a carefully curated LLM output (as part of a cruel ironic jest).
Thank you Tom. I don't use LLMs on substack or for any of my writing, hence my grammar being questionable sometimes. I do use LLMs for programming extensively (we could argue it is a type of writing). But rarely for reviewing papers, reports and so on. Unless they are quarterly or annual reports and I want a deeper analysis of the financials.
What a wonderful piece/critique! I was especially struck by the line "But efficiency is not a moral framework. Nor is it a substitute for judgment." THANK YOU for finally saying that out loud! We've acted for decades (a century +) as though efficiency is among the greatest goods (in a moral not commercial sense) and thus should supersede care, grace, impact (human and environmental), judgement, etc. It seems as though AI could elevate efficiency even above truth and liberty. I would love to learn your thoughts about consequences - as in there doesn't seem to be any assigned to AI, as there would be to a negligent or bad human actor. And yet, we aren't treating AI as akin to an animal, to which we also don't necessarily assign consequences. Shouldn't a certain level of intelligence also carry responsibility and, with that, the establishment and enforcement of consequences? Again...thank you!
The topic of Luddites came up in another discussion last week, and I am copying and pasting my reply below from that discussion:
The book Blood in the Machine (link below) is an excellent resource for understanding the early Industrial Revolution and the motivations behind the Luddite movement. Contrary to the simplistic portrayal in popular media, the Luddites were not anti-technology. Instead, as Brian Merchant explains, they were resisting the exploitation and dehumanization that accompanied the unchecked use of industrial machinery. Their struggle was against a system where technology was wielded to maximize profits at the expense of workers' livelihoods, rights, and communities.
Far from being technophobes, the Luddites serve as early critics of exploitative systems—a perspective that remains incredibly relevant in our modern discussions about automation, artificial intelligence, and labor rights. Their fight wasn’t against innovation itself but against the ways innovation was used to concentrate wealth and power while displacing workers and dismantling social structures.
One of my favorite books, Why We Don’t Learn from History, points out that history isn’t about knowing what to do, as circumstances are always different, but about understanding what to avoid. This principle feels especially pertinent today. The lessons from the Luddites remind us to approach technological advancements with a critical eye, ensuring that progress is equitable and that innovation benefits society as a whole, not just those at the top. Their message underscores the need to prioritize human dignity and community well-being in the face of rapid technological change.
Thank you MG - I have written before about the history of the Luddite's using Eric Hobsbawm's precise analysis of the court records of the day - we have far too simplistic a view today. They burned barn sheds and other properties after a fair wage, not a revolt against the machines as you say.
The true Luddite spirit, as you describe it, resisting the system of exploitation wielded through technology, is precisely the perspective we need right now. Their struggle is a direct parallel to the concerns about AI's use for concentrating wealth and power today.
The principle from Why We Don’t Learn from History is also brilliant. 'Knowing what to avoid' is the perfect lens for this moment. The Luddite history teaches us to avoid a future where the immense benefits of a new technology are narrowly privatized, while the social and human costs are broadly socialized.
"Sharpen our critique" yes yes more of this please God. I am wearying of circa 2022 or 2023 pieces appearing in the NYTimes now, in 2025, with the same old tired laments.
Thank YOU Hollis - I agree we need a new narrative based on as many facts as possible.
It’s the relationship piece that makes me the most nervous, especially with young people. We already, as a society, spend so much time online that the allure of having “someone” out there in the void to walk you through life at the expense of human relationships is troubling to say the least. On the critique side, AI can already do some of those things (i.e., identify unspoken assumptions and stakeholders, etc.) IF you ask it to. But the sycophancy issue is a real problem, even with experienced users. I’ve found some workarounds, but it’s very, very easy to fall into the trap. Who doesn’t like being agreed with? Thx for your thoughtful posts.
Thank you Stephen. You have highlighted two currents that I find concerning. The prospect of frictionless companionship replacing the 'mess and maintenance' of human relationships is a profound social risk.
And your distinction regarding critique is precise. The potential for critique exists, but the incentive structure of the tool pushes relentlessly towards sycophancy. The user has to provide all the friction and discipline. Your own experience with finding workarounds confirms this perfectly: it's a constant, conscious effort.
It makes me wonder: what would it take to build an AI where provocation, not agreement, was the default? I suspect the user numbers would be a fraction of what they are now.
I have custom-built GPTs specifically designed to challenge my thinking, question my logic, offer alternatives, etc. They are versions of George Costanza's Opposite Day to counter sycophancy. There are lots of people using AI like this in various fields - you don't hear much about it in education. I agree that time spent on LLMs would likely go down, though I fear that the number of users is here to stay. It is insidious that the design, just like all the other algorithmic platforms (Netflix, online games, Instagram, etc.), is to encourage engagement and, therefore, time on the platform. The irony, of course, is that these companies are hemorrhaging money with every query, so it's actually slightly counter-productive to keep people mindlessly using them.
The 'George Costanza's Opposite Day' GPT is the perfect name for a sycophancy-killer; it's a fantastic, practical application of the 'provocation engine', and building such GPTs is a great idea.
Your observation that this kind of critical use is largely absent in education is both true and deeply concerning. It highlights how the path of least resistance becomes the default path for most. I ran training programs for a large number of professors and staff at the University of Warsaw and suggested building such GPTs, but only a handful implemented them.
And the economic irony you point out is a brilliant twist. Unlike social media, where engagement is pure profit, here it's a direct cost. It suggests the current free or low-cost access is a temporary, unsustainable phase: likely a land grab for users before the true cost is passed on and the models are further optimized for cheap, superficial answers rather than deep, provocative thought.
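For anyone who wants to experiment without building a full custom GPT, the same 'Opposite Day' idea can be approximated in a few lines. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name and the critic instructions are purely illustrative, not a recipe:

```python
# A minimal "Opposite Day" critic: a sketch, assuming the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative anti-sycophancy instructions; tune to taste.
CRITIC_INSTRUCTIONS = (
    "You are a devil's advocate. Never open with praise or agreement. "
    "For every claim the user makes: name its unspoken assumptions, "
    "identify stakeholders it ignores, and argue the strongest "
    "opposing case before offering any alternative."
)

def critique(claim: str) -> str:
    """Return a deliberately adversarial reading of a claim."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any current chat model would do
        messages=[
            {"role": "system", "content": CRITIC_INSTRUCTIONS},
            {"role": "user", "content": claim},
        ],
    )
    return response.choices[0].message.content

print(critique("AI tutors will obviously improve learning outcomes."))
```

The mechanics are trivial; the point is that the friction has to be injected explicitly, because left to its defaults the model drifts straight back towards agreement.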
The most balanced, non-Luddite treatise I've heard to date. Brilliant. Our "overlords" are unchecked. AI is unchecked. AI is in the hands of our "overlords". What could possibly go wrong???
Thank YOU so much Leah. Your rhetorical question, 'What could possibly go wrong???', is perhaps the most important question of our time. The power structure is what makes the 'softer' issues of cognitive decline and outsourced judgment so critical.
An unchecked tool in the hands of the 'overlords' is a classic threat. What feels new is the risk that the rest of us slowly forget how to check power in the first place, because we've allowed the tool to do our thinking for us. It's a vicious cycle.
Great article...so much food for thought.
And there I was thinking my slow responses at school meant I was backward when I was really trying to work out my own answers, poor as they were.
What answers will precede any questions as thinking is muted, negating the “beautiful process of learning to live with questions”?
Interesting connection to Orwell: he “did not predict the mechanism. He predicted the impulse.”
Very interesting article.
Thank you Cathie - the Orwell line captures the human impulses that will always seek out the most efficient tools of control.
'What answers will precede any questions as thinking is muted?' That's the perfect, haunting summary of the risk. It's a world where curiosity doesn’t even have a chance to form before it’s smothered by a readily available answer. It's exactly how the 'beautiful process of learning to live with questions' gets short-circuited.
What strikes me is how the advent of AI parallels the splitting of the atom. Fission provides a powerful energy source that produces no carbon compounds. It also makes possible the construction of a doomsday device. Taking this analogy to its logical next step brings us to the parallel of AGI with fusion. As always, the big problem is the profit motive. What's most profitable is often not what's best for humanity. Can we put our collective foot down in time?
You are right. Both technologies forced humanity to confront the fact that our ingenuity had outpaced our wisdom.
The 'doomsday device' with AI, as my essay tries to argue, isn't a single mushroom cloud, but a slow-motion erosion of the very cognitive faculties we would need to manage such a powerful technology responsibly. And as you correctly identify, this erosion is accelerated by a commercial imperative that has little incentive to promote friction or deep thought.
As for your final question... I don't know if we can act in time. But I believe the attempt is a moral necessity. The first step in putting our foot down is to collectively agree that there are human values, like judgment, curiosity, and connection, that should never be subject to optimization.
A perfect and sobering analogy. Thank you.
That's what scares me. Are we capable of collectively agreeing on much of anything?
Invariably not, although there must be some examples!
They...it's always they...when it means us...
We will weaponize AI...just like we have weaponized our use of language...not sufficiently woke to the fact.
Speaking for myself only, I will not personally contribute to weaponizing AI. Quite the contrary, I'll fight it tooth and nail any way I can. When I refer to "they" I mean a group that excludes me. Hence, when I say "us", I mean some amorphous group that does include me.
Collectively, there will be some of us on one side, some of us on some other side, some on yet a third side, etc. For example, I would never say "we" are MAGAnuts - because not all of us are. Indeed, I'd say most of us aren't.
Thus, when I say "they", pertaining to the topic at hand, I mean the purveyors of controlling tech, especially the "dark enlightenment" types such as Andreessen, Theil, MuskRat, Zuck, Bozo Bezos, and including the likes of Vance.
This might all sound like semantic nitpicking, but it's important to identify the specific culpable parties who are actually doing the weaponizing. Most of us are the unwitting victims of it!
Very well said - let's not be victims
Thank you for fleshing out even further what/who "they" refers to...
Will AI pay taxes?!
Will AI use/appropriate the limited natural resources currently available for its own use/growth...forcing more hardships on the poorer taxpayers?!
Will AI build its own water and power supplies?!
1. Not likely. Neither will its purveyors/pushers.
2. Most definitely.
3. Eventually.
I agree with Winston on 2 and 3 - but on 1, we must find an extra tax on AI... does that mean we will? I think it will be the only option in order to provide a UBI.
Especially on the purveyors/pushers. They're the ones on top of the top. They can well afford it without feeling a twinge of legitimate pain.
Wonderful article! And thank you for also giving real directional changes that could be made.
We are enamored with the ease and efficiency but I don’t hear enough voices asking what the real cost is!
A tool that undermines our capacity to think critically isn’t a tool but a weapon.
FFS - this is an incredible piece of writing. Thank you. Would be reassuring to confirm it’s not a carefully curated LLM output (as part of a cruel ironic jest).
Thank you Tom. I don't use LLMs on Substack or for any of my writing, hence my grammar being questionable sometimes. I do use LLMs extensively for programming (we could argue it is a type of writing), but rarely for reviewing papers, reports, and so on - unless they are quarterly or annual reports and I want a deeper analysis of the financials.
What a wonderful piece/critique! I was especially struck by the line "But efficiency is not a moral framework. Nor is it a substitute for judgment." THANK YOU for finally saying that out loud! We've acted for decades (a century+) as though efficiency is among the greatest goods (in a moral, not commercial, sense) and thus should supersede care, grace, impact (human and environmental), judgment, etc. It seems as though AI could elevate efficiency even above truth and liberty. I would love to learn your thoughts about consequences - as in, there don't seem to be any assigned to AI, as there would be to a negligent or bad human actor. And yet we aren't treating AI as akin to an animal, to which we also don't necessarily assign consequences. Shouldn't a certain level of intelligence also carry responsibility and, with that, the establishment and enforcement of consequences? Again...thank you!
The topic of Luddites came up in another discussion last week, and I am copying and pasting my reply below from that discussion:
The book Blood in the Machine (link below) is an excellent resource for understanding the early Industrial Revolution and the motivations behind the Luddite movement. Contrary to the simplistic portrayal in popular media, the Luddites were not anti-technology. Instead, as Brian Merchant explains, they were resisting the exploitation and dehumanization that accompanied the unchecked use of industrial machinery. Their struggle was against a system where technology was wielded to maximize profits at the expense of workers' livelihoods, rights, and communities.
Far from being technophobes, the Luddites serve as early critics of exploitative systems—a perspective that remains incredibly relevant in our modern discussions about automation, artificial intelligence, and labor rights. Their fight wasn’t against innovation itself but against the ways innovation was used to concentrate wealth and power while displacing workers and dismantling social structures.
One of my favorite books, Why We Don’t Learn from History, points out that history isn’t about knowing what to do, as circumstances are always different, but about understanding what to avoid. This principle feels especially pertinent today. The lessons from the Luddites remind us to approach technological advancements with a critical eye, ensuring that progress is equitable and that innovation benefits society as a whole, not just those at the top. Their message underscores the need to prioritize human dignity and community well-being in the face of rapid technological change.
https://www.amazon.com/Blood-Machine-Origins-Rebellion-Against/dp/0316487740/ref=mp_s_a_1_1?crid=ZVRBKMGDC8AG&dib=eyJ2IjoiMSJ9.ZEA8VbVQuIXj1wEOFuiFMGr28nLVqtiIITM-p7uZYmc.1YrxgWrlAqEBO1qLI9kSMtSbnX0r4BOt_8cWzehuzqw&dib_tag=se&keywords=Blood+in+the+Machine%3A+The+Origins+of+the+Rebellion+Against+Big+Tech+Brian+Merchant&qid=1752937201&sprefix=blood+in+the+machine+the+origins+of+the+rebellion+against+big+tech+brian+merchant%2Caps%2C76&sr=8-1
Thank you MG - I have written before about the history of the Luddites, using Eric Hobsbawm's precise analysis of the court records of the day - we have far too simplistic a view today. They burned barns, sheds, and other property in pursuit of a fair wage; as you say, it was not a revolt against the machines.
The true Luddite spirit, as you describe it, resisting the system of exploitation wielded through technology, is precisely the perspective we need right now. Their struggle is a direct parallel to the concerns about AI's use for concentrating wealth and power today.
The principle from Why We Don’t Learn from History is also brilliant. 'Knowing what to avoid' is the perfect lens for this moment. The Luddite history teaches us to avoid a future where the immense benefits of a new technology are narrowly privatized, while the social and human costs are broadly socialized.
But the dumbing down is a big point too!