We Need to Strengthen, Not Erode, Thinking
Edsger W. Dijkstra - AI Will Reshape Critical Thinking
The vast majority of the human race uses some form of computer (indeed, you are reading this on a screen), yet very few know of the great founders of computer science (CS) or their warnings. I have mentioned a few in these Substack posts and believe it behoves us to get to know others, albeit briefly. This is an introduction to one such CS pioneer and to how his warnings about the influence of computer tools, especially AI, on our critical thinking are coming true.
“The tools we use have a profound and devious influence on our thinking habits, and therefore on our thinking abilities.”
When I was setting out on my journey to learn to program, and about AI in general, the Dutch seemed to have a remarkable influence; it was, after all, a Dutchman who invented Python. So I decided to relocate to The Hague, Holland, settling there just after the turn of the Millennium, driven in part by curiosity about what it was that made Holland produce such great pioneers of computer science, and by the wish to build my own programming and especially AI skills.
Clarity of Thought
During my programming education I had read the works of another Dutch computer scientist who was singularly influential and, as I was to discover, insistent on intellectual rigor: Edsger W. Dijkstra (Ed). Donald Knuth stated, “It is impossible to read the recent book Structured Programming without having it change your life.” In my view, and something I have taught students ever since, he was a prophet of precision, a master craftsman of the theoretical edifice upon which modern computing rests. To consider his legacy is not merely to list his contributions, although they are many (Dijkstra’s Algorithm, semaphores, structured programming), but to wrestle with the spirit of his work: a relentless pursuit of clarity in thought and expression, exactly how programming, indeed any language, should be.
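Dijkstra’s Algorithm itself is a small jewel of the clarity he preached: repeatedly settle the closest unsettled node, and shortest paths fall out. Here is a minimal Python sketch of the standard priority-queue formulation; the graph and its representation are illustrative, not drawn from any Dijkstra text.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source in a graph with non-negative edge weights.

    graph: dict mapping node -> list of (neighbour, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]  # (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale entry: a shorter path to this node was already found
        for neighbour, weight in graph[node]:
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d
                heapq.heappush(heap, (new_d, neighbour))
    return dist

# A tiny illustrative graph
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

The greedy step is sound only because edge weights are non-negative, which is exactly the kind of precondition Dijkstra insisted be stated rather than assumed.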
Early Life
Dijkstra’s journey began in Rotterdam in 1930, a city then teetering on the precipice of war. It is tempting to draw some connection between the devastation of his youth and his later insistence on structure, order, and discipline in computation, though he himself would have dismissed such psychoanalytic speculation as unscientific drivel.
He entered the University of Leiden with the original intention of becoming a physicist but found himself drawn into the nascent world of computing at the Mathematisch Centrum in Amsterdam. At a time when programming was as much an art as a science, something he once described as ‘akin to medieval alchemy’, Dijkstra sought to transmute the field into something systematic, rigorous, and elegant.
Storms
His influence is most clearly visible in his disdain for unnecessary complexity. His seminal 1968 letter, Go To Statement Considered Harmful, set off a firestorm, challenging an entire generation of programmers to abandon the chaotic, tangled webs of unstructured code in favor of something resembling mathematical proof. The opposition was swift and fierce. Another pioneer, Donald Knuth, never one to shy away from an intellectual brawl, wrote an essay in response: Structured Programming with Go To Statements. But in the end, time was on Dijkstra’s side. Today, structured programming is the norm, and the dreaded ‘goto’ statement has been largely relegated to the dustbin of history.
Dijkstra was not simply an inventor of tools, he was a philosophical thinker who saw programming as an intellectual discipline. In A Discipline of Programming, he argued that programming should not be about trial and error, nor about intuition, but about systematic, formalized reasoning. This was not just a call for better software; it was a call for a new way of thinking.
“The art of programming is the art of organizing complexity, of mastering multitude and avoiding its bastard chaos as effectively as possible,” he once wrote, emphasizing the mathematical precision needed for programming to be elevated beyond mere hacking.
For his monumental contributions to the field, Dijkstra was awarded the 1972 ACM Turing Award, the most prestigious honor in computer science. The citation for the award reads:
“For fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness.”
His work paved the way for what would later be known as software correctness and formal methods, an approach that placed logic and mathematics at the heart of programming.
His axioms and papers would become central to the formal methods movement, urging software engineers to prove correctness rather than merely test for failure. He held that
“simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better,” a wry commentary on the state of modern software.
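The invariant-based reasoning at the heart of formal methods can be glimpsed in miniature. The sketch below (my illustration, not an example from Dijkstra’s writings) computes integer division by repeated subtraction while asserting the loop invariant at every step, so correctness is argued throughout rather than merely tested at the end.

```python
def divide(a, b):
    """Quotient and remainder of a divided by b (a >= 0, b > 0), computed by
    repeated subtraction, with the invariant a == b*q + r checked at each step
    in the spirit of invariant-based reasoning."""
    assert a >= 0 and b > 0, "precondition"
    q, r = 0, a
    while r >= b:
        assert a == b * q + r and r >= 0     # loop invariant
        q, r = q + 1, r - b
    assert a == b * q + r and 0 <= r < b     # postcondition
    return q, r

print(divide(17, 5))  # (3, 2)
```

A test can only witness the postcondition for particular inputs; the invariant plus the loop structure is what lets one argue it holds for all of them.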
Blunt to a Point
He was, at times, austere, his writing meticulously crafted, his opinions razor-sharp and blunt (as many Dutch people are known to be), his expectations of intellectual rigor unyielding. One of his more famous diatribes, On the Cruelty of Really Teaching Computer Science, lambasted the state of programming education, arguing that the true challenge of teaching was not in imparting knowledge but in undoing bad habits. Teaching, he believed, required a ruthless and uncompromising commitment to intellectual discipline, something few educators had the stomach for.
Beyond his technical achievements, Dijkstra’s influence was also personal, idiosyncratic. He did not use a computer to write, he preferred a fountain pen and an old-fashioned method of circulating his research notes, the now-famous EWD reports, which found their way through an informal network of colleagues and acolytes. In an era when digital communication was beginning its inexorable rise, here was a man who believed in the tactile power of the written word, in the slow and deliberate act of putting pen to paper.
Dijkstra’s colorful personality extended beyond the academic sphere. He was known for his unconventional fashion, often wearing sandals and carrying a shoulder bag. He drove a Volkswagen camper van, which he named the “Turing Machine,” using it for both leisure and scientific trips. He could be stubborn and unyielding, particularly in his views on academic rigor, but he was also deeply passionate about music, playing the piano regularly, and even considered a career as a concert pianist before his path led him to computing.
Rude
His personality was as legendary as his intellect. In the realm of academic politics, he was both feared and revered. His wit was sharp, his criticisms often brutal. When a fellow scientist at MIT began to lose their voice during a lecture, Dijkstra very rudely quipped, “Thank God.” Such cutting remarks did little to endear him to those outside his inner circle, but they also underscored a deep frustration with what he saw as mediocrity masquerading as scholarship. About Ed’s abruptness, Alan Kay said “You probably know that arrogance, in computer science, is measured in nanodijkstras.”
Yet despite his arrogance and formidable intellect, or maybe because of it, Dijkstra remained, in many ways, a solitary figure. He had only four PhD students, a number he seemed to wear as a badge of honor. When once asked why, he replied with characteristic self-assurance: “Einstein had none.” He viewed scientific work as a sacred pursuit, untainted by bureaucratic concerns, unencumbered by the frantic need for funding and prestige that plagues modern academia. He never sought grants, never played the administrative game, and avoided conferences. His concerns lay elsewhere: in the purity of thought, in the elegance of proof, in the beauty of an algorithm well designed.
Dijkstra’s philosophy extended to his views on software engineering. He lamented that:
“The tools we use have a profound and devious influence on our thinking habits, and therefore on our thinking abilities.”
Which brings me to my deep concern…
AI and The Erosion of Critical Thinking
I have regularly expressed two concerns about AI throughout my work and these Substack posts: AI as job displacer and AI as critical-thinking destroyer. Of course, I also see the tremendous potential for scientific breakthroughs and productivity gains. But we must seek to manage the downside sooner rather than later.
Ed would be horrified at the way AI is starting to impact our critical thinking skills. Human thought is the fragile yet powerful force that shapes history, guides innovation, and ultimately determines our fate. From the ancient Greeks, who dared to ask what constituted a good life, to the Enlightenment thinkers who sought to dismantle superstition with the sharp tools of reason, the journey of human thought has been one of both triumphs and missteps. We have built civilizations, unraveled the secrets of nature, and created art that captures the sublime, yet we have also fallen prey to the same errors of judgment, the same seductive illusions and biases, that have plagued us for millennia. The rise of artificial intelligence adds a new layer to this cycle.
A recent, very important study, The Impact of Generative AI on Critical Thinking (conducted by researchers at Carnegie Mellon University and Microsoft Research), unearths a contradiction. While these tools promise to enhance efficiency, they simultaneously risk atrophying the very faculties that make knowledge work valuable: skepticism, discernment, and the creative friction of intellectual struggle. The study, conducted across a diverse pool of 319 professionals, including educators, software developers, marketers, and analysts, maps a startling, yet not surprising, phenomenon:
...the more confident users are in AI’s capabilities, the less they engage in critical analysis. The higher their faith in AI-generated responses, the lower their inclination to challenge, verify, and refine.
“Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved.”
This raises urgent questions about a delicate trade-off, the efficiency gained from delegating our mental chores versus the maintenance, or even growth, of our critical thinking skills.
Echoes of a Historical Pattern
The alarm over technological dependency is hardly new. Socrates, in his distaste for the written word, feared that reliance on texts would erode human memory. Trithemius, railing against the printing press, lamented that books rendered monks intellectually indolent. Their anxieties were rooted in a fundamental concern: that externalizing knowledge to a tool, be it ink on parchment or AI-generated text, diminishes the internal faculties required for intellectual rigor. With generative AI, however, the stakes are unprecedentedly high. Unlike the printing press, which democratized access to static knowledge, AI produces dynamically generated content, content that adapts, persuades, and conceals its biases beneath a veneer of fluency.
The study positions AI as a cognitive intermediary, shifting knowledge work from execution to oversight. In doing so, it challenges the traditional model of skill acquisition. Critical thinking, once honed through repetition and correction, is now circumvented by instantaneously polished output. The risk, then, is not merely that AI influences how we think, it may subtly dictate whether we think at all.
The Survey’s Stark Findings
Confidence, Complacency, and Cognitive Offloading
A solid majority of participants felt that GenAI reduced their cognitive burden for tasks like recalling facts, translating or summarizing information, and synthesizing content. It effectively “automates the routine opportunities to practice judgment,” creating what the authors call:
“…a trade-off that may risk long-term reliance and diminished independent problem-solving.”
Crucially, the study identifies two competing forces: self-confidence versus confidence in AI. Those who trust their own abilities remain more likely to scrutinize AI-generated output. Those who place greater faith in AI exhibit a diminished tendency to verify, challenge, and refine. This cognitive trade-off is subtle yet profound, hinting at a possible long-term recalibration of professional judgment. When professionals stop testing the reliability of AI suggestions, the very concept of expertise is imperilled.
The Death of Diversity in Thought
An overlooked consequence of AI reliance is the phenomenon of “mechanized convergence.” The study notes that knowledge workers using GenAI produce less diverse solutions for the same problem compared to those who work unaided. This suggests an implicit narrowing of intellectual horizons, where AI, rather than serving as a multiplicity-generator, often leads users toward a homogenized, median response.
For example, AI-powered search engines increasingly prioritize widely accepted viewpoints, reinforcing confirmation biases rather than challenging users with alternative perspectives. Similarly, AI-generated legal or financial advice may settle on standardized responses that fail to consider subtleties of individual cases. The tragedy of mechanized convergence is that it corrodes the creative outliers that fuel innovation. When the boundaries of thought are defined by algorithmic suggestion, we risk a future where originality is algorithmically sanded down into palatable predictability.
Political and Social Implications
Beyond the individual cognitive shift, the paper hints at broader institutional ramifications. If entire sectors normalize AI-mediated thinking, decision-making itself risks becoming a passive function. The historian Richard Hofstadter once wrote:
“A certain whiff of unpredictability, even the intangible burr of error, may be precisely what fosters leaps of insight.”
AI, in its relentless efficiency, threatens to erase this burr. If businesses, legal systems, and governments increasingly defer to AI-generated analysis, we risk codifying bureaucratic inertia, where accountability is obscured behind a veil of algorithmic legitimacy.
More troubling still is AI’s potential to manipulate public opinion. Already, we have seen AI-driven content shape political discourse, amplifying biases, and tailoring messages to micro-targeted audiences. If AI-generated narratives become indistinguishable from human opinion, the very foundations of democratic discourse, debate, dissent, and deliberation, may erode under a flood of AI-synthesized consensus. Automation’s greatest gift, efficiency, might also be knowledge work’s Trojan horse.
Designing AI to Challenge, Not Replace, Thinking
Despite its foreboding conclusions, the study offers a blueprint for a more constructive AI future. If AI tools are designed not as cognitive crutches but as cognitive provocateurs, they could enhance rather than erode critical thinking. This means embedding mechanisms that encourage users to justify, refine, and interrogate AI output, prompting reflection rather than mere acceptance. We must now adapt AI with intentional designs that safeguard and even bolster our capacity for thoughtful inspection. The authors suggest a path forward by:
“…developing GenAI tools that foster user awareness, encourage motivation, and enable the user’s ability to think critically.”
One promising avenue lies in metacognitive AI, systems that do not simply generate answers but also generate questions. An AI that asks, “How do you know this is correct?” or “What are the assumptions behind this response?” might serve as a more valuable intellectual partner than one that delivers frictionless, pre-digested conclusions. AI-powered educational tools, for example, could train students to critically engage with information rather than merely consume it. A well-designed AI tutor might guide learners through Socratic questioning, strengthening their reasoning rather than merely supplying answers.
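As a toy illustration of that design idea, the sketch below wraps an answer generator so that every response arrives paired with Socratic follow-up questions. The function names and record format are my own illustrative assumptions, not any real AI product’s API; the generator is abstracted as a callable.

```python
# Reflection prompts that push the user to verify rather than accept.
REFLECTION_PROMPTS = [
    "How do you know this is correct?",
    "What are the assumptions behind this response?",
    "What source could confirm or refute it?",
]

def metacognitive_answer(question, generate):
    """Return a generated answer together with Socratic follow-up questions.

    question: the user's query.
    generate: any callable that maps a question string to an answer string.
    """
    answer = generate(question)
    return {"question": question, "answer": answer, "reflect": REFLECTION_PROMPTS}

# Usage with a stand-in generator:
result = metacognitive_answer("Why is the sky blue?", lambda q: "Rayleigh scattering.")
print(result["answer"])      # Rayleigh scattering.
print(result["reflect"][0])  # How do you know this is correct?
```

The point of the design is that the friction is built into the interface: the questions ship with the answer, so reflection is the default rather than an afterthought.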
It is imperative that everyone who uses generative AI learns to use it correctly and checks its responses. In the workplace, AI responses should be randomly audited.
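Random auditing needs little more than the standard library. A minimal sketch, where the audit rate and record format are illustrative assumptions:

```python
import random

def sample_for_audit(responses, rate=0.1, seed=None):
    """Select a random subset of AI responses for human review.

    responses: a non-empty list of response records (any type).
    rate: fraction to audit, e.g. 0.1 reviews roughly 10%.
    seed: optional, for a reproducible audit selection.
    """
    rng = random.Random(seed)
    k = max(1, round(len(responses) * rate))  # always audit at least one
    return rng.sample(responses, k)

# Usage: audit 10% of a day's 50 responses
day = [f"response-{i}" for i in range(50)]
to_review = sample_for_audit(day, rate=0.1, seed=42)
print(len(to_review))  # 5
```

Fixing the seed makes the selection reproducible for record-keeping, while leaving it unset keeps the audit unpredictable to those being audited.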
The High-Stakes Gamble
The study leaves us with an inescapable question: Are we outsourcing not just our work, but our thought?
“While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving.”
If the history of automation is any guide, this shift will be neither straightforward nor entirely predictable. Some domains will flourish, leveraging AI as an augmentation of human ingenuity. Others may suffer from an intellectual ossification, where once-vibrant disciplines settle into AI-automated complacency.
In the words of John von Neumann, the architect of the modern computer:
“The human species has been subjected to similar tests before and seems to have a congenital ability to come through, after varying amounts of trouble.”
The challenge before us is to ensure that AI does not become a mechanism for passive deference but a tool for active deliberation. As knowledge workers increasingly place their trust in generative AI, we must remember that the most dangerous forms of cognitive decline do not announce themselves with fanfare. They arrive in the form of ease, convenience, and the slow fading of intellectual resistance. It is our curiosity, not our complacency, that we need to nurture.
The antidote is deliberate intellectual engagement. This means resisting algorithmic determinism, challenging our own biases, and fostering spaces where independent thought is cultivated, not suppressed. If we do not act, we risk becoming passive consumers of machine-generated insights, prisoners of our own complacency.
Transparent AI
“In their capacity as a tool, computers will be but a ripple on the surface of our culture. In their capacity as intellectual challenge, they are without precedent in the cultural history of mankind.”
Dijkstra's philosophy could serve as a valuable counterpoint to the current hype and concern surrounding AI. His emphasis on clarity, simplicity, and rigorous thinking offers a much-needed corrective to the prevailing focus on complexity, speed, and short-term gains. He would likely argue that many modern AI systems are overly complex, making them difficult to understand, debug, and ultimately trust.
Instead of chasing the latest trends in deep learning, Dijkstra would advocate for a return to simpler, more elegant solutions, emphasizing the importance of clear, well-defined goals and transparent algorithms. He would likely criticize the current reliance on machine learning techniques that often involve training models on massive datasets without a deep understanding of the underlying principles.
He would argue for a more principled approach, where AI systems are developed based on sound theoretical foundations and rigorous testing. This would involve moving beyond black-box models and developing AI systems whose behavior can be understood and explained in a mathematically rigorous way.
Ed Dijkstra was one of computing’s true giants, a thinker whose legacy is not merely in the tools he built but in the way he reshaped the intellectual landscape of his field. J. Strother Moore, in his eulogy, put it best: “He was like a man with a light in the darkness. He illuminated virtually every issue he discussed.”
I am sure Ed would agree that the real test is not whether AI can think for us, it is whether we will still choose to think for ourselves.
Stay curious
Colin
Thank you for this very important post. It captured much more eloquently many aspects of AI I have been worried about. At times I think about this in terms of species of personality/character, that is, given that we seem to need more Ed Dijkstra-types in the world today, is his type an endangered species? Would types like him be able to flourish in “ecosystems” beyond a small and irrelevant circle of friends. I worry that in a world infected by a “virus” of AI dependency he would either go unnoticed or seem positively crazy and threatening, like the philosopher in Plato’s Cave. Thanks again! I’m going to share this widely.
This is the first article I've read of yours, and it's both well written and, well, informative and entertaining. I will certainly invest time reading your other articles. So, thank you!
I have, however, doubts about your concerns on AI. Whenever someone says to me: "You know, I was thinking--" I interrupt by saying, jokingly: "It hurts, doesn't it, thinking."
I'm sorry to say - most people are either incapable of critical thinking, or they are just fine letting others do their thinking for them. Case in point - a good third of Americans is now blindly following a man who is, by any critical measure, clearly deranged.
Which raises the question - for as far as AI manipulating public opinion is concerned, I think the manipulating is far less worrisome than the direction in which that opinion is manipulated. If only a majority of people were taught by an AI the current, lingering interpretation of democratic values, what, the now rapidly disappearing understanding of Democracy itself, we'd not be in the trouble we're in now.
It is, in my opinion, not so much AI manipulating public opinion that should trouble us, but the pervasiveness of the aberrant median public thinking with which AIs are trained, which is another way of saying: AIs are merely a reflection of human thought at any given time before they reinforce that thought by regurgitating it.
To say it in computer terms - garbage in, garbage out.
It is, in my opinion, far more critical we find ways to protect AI models from incorporating 'misinformation' than protect public opinion against being influenced by AI. I don't know how, but the companies building AIs should be forced to provide insight into the data they train their AIs with, as well as train their AIs with a baseline of objectively 'true' information.
If we can somehow accomplish that, AI could actually function as a bulwark against the dangers of someone like Trump, the same way newspapers (under the condition, I must say, the same condition, that journalists try to provide objective news coverage) started protecting us by keeping those in power honest.
Finally, talking about the subset of humanity who can think critically - there too I don't share your concerns, simply because critical thinking is, for those who are capable of it, and enjoy it, a goal all onto itself. For those of us who like to think critically, it is a need to do so, not a burden that we would rather outsource to an AI, any more so than to other human beings.
Good lawyers will find new interpretations of the law to win cases, regardless of what an AI advises. Writers will write books, politicians will find original policy solutions, scientists will open up new worlds - because it is their nature to do so, regardless of AIs.
And for the same reason - AIs being a reflection of the current state rather than the creators of it - humans, in my opinion, will be better at original thought than AIs, at least as long as AIs need to be trained with human thought.
AIs will, at least for the foreseeable future, always be one step behind...