My earlier post on the brilliance of Stanisław Lem is here.
There are books that predict, books that warn, and books that philosophize. Then there is Stanisław Lem’s Summa Technologiae (Compendium of Technology), a work that does all three with the mischievousness Lem was renowned for, dissecting the human technological trajectory with his sharp pen of wit and dread. Written in 1964, its title nods to Thomas Aquinas’s Summa Theologica, an attempt to categorize and make sense of divine knowledge. But Lem, a skeptic, constructed instead a secular, and at times blasphemous, Summa for the age of machines.
His critique of human cognition is deeply tied to his skepticism about the ability of humans to govern the technological systems they create.
He writes that the rapid acceleration of scientific and technological advancements results in an overwhelming flood of information that exceeds human cognitive capacities. Instead of fostering deeper understanding, this flood leads to a form of intellectual paralysis, where decisions are either outsourced to machines or made based on incomplete comprehension. Lem warns that this could result in a “dumbing down” of human thinking as individuals rely increasingly on automated systems for decision-making.
Lem’s perspective on the decline of human thinking is not purely dystopian; rather, it is a logical extrapolation of current trends in technological and cognitive evolution.
Reading Summa Technologiae must feel, at times, like sitting through one of my fevered lectures, where I oscillate between fascination with AI and dismay at its potential downside impact on society and work.
Acceleration of Machines
Lem, famously contemptuous of conventional science fiction, dismissed its tendency to place rockets and ray guns at the center of speculative thought. Instead, he charted the philosophical terrain of technological evolution, relying on detailed scientific research, confronting its blind spots and, as he put it, its ‘terrifying inevitabilities.’
Human intellectual engagement may decline as a result of advances in machine learning, with individuals becoming passive consumers rather than active thinkers.
His treatise anticipated, with eerie precision, the arrival of artificial intelligence, search engines, the singularity, virtual reality (which he called “phantomatics”), nanotechnology, cognitive enhancement and the existential dilemmas of machine-driven progress. The book is both a requiem for human exceptionalism and a guidebook for the post-human condition, a condition Lem suspected we were hurtling toward without a seatbelt.
Technological Acceleration
But what does this mean for us today? Lem’s warnings are not mere intellectual exercises; they force us to confront our own complicity in technological acceleration. As he observed,
“The problem is not that we do not know the future, but that we do not even know what questions to ask about it.”
This is the true weight of Summa: it does not simply predict, it challenges us to interrogate our own role in the transformation of intelligence, identity, and agency. It is one thing to fear the power of machines, but another to recognize that we might be programming our own obsolescence. Incidentally, the same point was hinted at years later, and more famously, by the brilliant Douglas Adams in The Hitchhiker’s Guide to the Galaxy: perhaps we are asking the wrong question.
At the heart of Summa Technologiae is a terrifying, or liberating, proposition: the possibility that humanity might be an evolutionary midpoint, rather than an apex. Lem was keenly aware that biological and technological evolution were entwined, two strands of DNA spiraling toward an unknown destination. He saw, in our relentless pursuit of innovation, the seeds of our own obsolescence. Lem wrote with a hint of irony that barely concealed the grim reality beneath.
“After several painful lessons, humanity could turn into a well-behaved child, always ready to listen to the (machine’s) good advice.”
This idea finds resonance in contemporary debates about AI and the dreaded Singularity, where the boundary between human and machine is increasingly blurred. Are we, as Lem suggests, grooming ourselves to be subservient to the very intelligence we are creating? What are the moral and existential implications of delegating decision-making to entities that operate outside the boundaries of human emotion and ethical reasoning?
Unintended Consequences
Lem understood that evolution, both biological and technological, was not a march toward perfection, but a series of improvisations, false starts, and unintended consequences. The highly respected Miles Brundage, former Head of Policy Research and Senior Advisor for AGI Readiness at OpenAI, wrote on Thursday, March 13, 2025:
“Some people have asked me some variant of this question lately: ‘When will we get really dangerous AI capabilities that could cause a very serious incident (billions in damage / hundreds+ of people dead)?’ Unfortunately, the answer seems to be this year, from what I can tell.”
Miles means deliberate misuse by a bad actor.
The human species, Lem noted, was not the crowning achievement of nature, but rather a kludge, a patchwork of evolutionary hacks that happened to function just well enough to survive. Technology, which humans like to believe they control, might be no different. It advances not because of careful foresight, but because of blind momentum, accumulating power and autonomy in ways that are only dimly understood by its creators. The very act of technological progress might contain within it a structural inevitability: the eventual irrelevance of its originators. This idea aligns with discussions in artificial intelligence ethics, where concerns about control and unintended consequences echo Lem’s apprehensions.
Virtual Unreality
This should not be mistaken for nihilism. Lem was not merely a prophet of doom. Summa Technologiae is also a meditation on possibility, an exploration of how human ingenuity might expand beyond the confines of biology. One of its most radical sections is its discussion of phantomatics, the creation of artificial realities so complete that they are indistinguishable from the real. Decades before the rise of virtual reality, Lem envisioned a world in which experience could be engineered, where the laws of nature could be rewritten at will.
“It is easy to predict that if one could simulate a world with all its properties, then this substitute would become indistinguishable from reality itself,”
…he observed, decades before The Matrix turned this notion into pop culture.
Yet, unlike the utopian fantasists of Silicon Valley, he was deeply skeptical of this development. He understood that every technological liberation carried within it a hidden constraint. If we could construct perfect realities, would we ever choose to leave them? If our desires could be fulfilled at the push of a button, would we become something less than human? In Summa, he warns,
“The greatest danger is not that machines will rebel against us, but that we will come to prefer the artificial over the real.”
This insight serves as a critique of the modern attention economy, where immersive digital experiences increasingly replace tangible human interactions. In a world dominated by curated online personas and algorithmic feedback loops, Lem’s vision is less speculation and more indictment.
Decline of Human Thinking
Lem was particularly unsparing in his assessment of artificial intelligence. Long before the age of deep learning and neural networks, he anticipated a world in which machines could outthink their creators. But rather than engaging in the quaint anxieties of killer robots or sentient androids, he pondered something far more insidious: the moral and philosophical implications of intelligence without consciousness. He speculated about machines that could generate ideas and solutions beyond human comprehension, raising the unnerving question of whether human thought might become obsolete. To paraphrase Lem from his 1966 masterpiece: ‘Intellect may exist without wisdom, just as strength may exist without responsibility.’
If an artificial mind could outperform the human brain in every meaningful way, would it still make sense to speak of “intelligence” as a uniquely human trait? This question remains at the core of AI ethics today, as policymakers, technologists, and philosophers struggle to define the parameters of artificial cognition.
Ever the Scientist
The unsettling brilliance of Summa Technologiae lies in its refusal to offer comfort. Lem does not suggest that technological progress can be halted, nor does he advocate for any Luddite retreat. He sees the trajectory we are on as both inevitable and uncontrollable, an avalanche that humanity started but cannot stop. Yet he also refuses the romantic notion that technology will save us from ourselves. If anything, he warns that it will amplify both our ingenuity and our folly.
“A hammer is neither good nor evil,” he wrote, “but in the hands of a fool, it may build or destroy with equal indifference.”
If Lem’s vision seems pessimistic, it is only because he believed that false optimism was more dangerous than honest skepticism. He was, in the end, an ironist. He saw in humanity both brilliance and absurdity, a species capable enough to create powerful tools and naive enough to trust them implicitly.
Summa Technologiae is a masterwork, not just because it was prophetic, but because Lem was willing to look directly into the abyss of the future and describe what he saw without flinching. That abyss, as Lem understood, was not a world ruled by malevolent machines, nor a utopia of infinite technological bliss. It was simply a world in which humanity had become a small player as the machines evolved.
The ultimate question Lem raises is whether humans can retain meaningful agency in a world increasingly dominated by machines that think faster, process more information, and operate at levels beyond human comprehension.
One thing is for sure: we need to build cognitive security against what we are brain-fed, and against the mediums that feed us.
Stay curious
Colin
We can't know the right questions about much of anything until after the fact. Hindsight is the greatest of teachers.
The question of human agency is especially daunting. To what extent, if any, do we really have agency? Researchers have already found that we tend to act on instinct and then rationalize the act as having been a choice.
We're careening into some unknown high-tech-infused future mainly because a small number of overpowered oligarchs are profiteering from it. What's happening is far over the heads of politicians, as well as the average citizen, so there are no guardrails.
Personally, I believe we're still a ways off from the total collapse of human intelligence, although the curve is clearly bending in that direction. If/when that time arrives, the last remaining humans will be little more than vegetables. Even now, some folks consider themselves technologically savvy because they know which icons to tap on their phone.
Will Charles Lindblom's "The Science of 'Muddling Through'" (Public Administration Review 19(2), 1959) be enough to save the day with respect to the power of AI concentrated in a few hands, whose agenda is not totally clear but, one suspects, is world domination through technical means?
In every 'crash' in every civilisation's history, there is always a remnant who survive and eventually prosper again. Myths and legends carry messages from those times before, as do ancient indestructible temples built with technological capability that far surpasses our own. It appears we learn very little from history, which is why it repeats itself.
Lem's book is amazing, 60 years on. But smokers still smoke, knowing it's killing them. Knowledge of the downside is not enough. There has to be something much bigger that grabs the human heart, something to live for, rather than just something to avoid. I believe that 'something to live for' can be found in the rediscovery of our indigenous (autochthonous) roots, something noted in Veronika's post for tomorrow (Saturday 15th March), the introduction to her book on Synchronosophy.