The Luddites weren’t afraid of machines. They were afraid of the new social order machines would entrench. I wonder what they would make of a world where the machine doesn’t just loom over your shoulder, but finishes your sentences.
Designed for Distraction, Wired for Control
I've watched many friends and students scrolling endlessly on their phones, eyes glazed not in boredom but in a trance: laughter at a meme, a flash of rage at a headline, then a moment of envy towards someone (less talented/experienced/competent) whose curated LinkedIn post has thousands of likes. In ten seconds, five emotional pivots. I don’t think we were designed for this. Not evolutionarily, not psychologically. These loops don’t just feed us. They erode us.
What if the Internet is not just a tool, but something more? Something that's beginning to resemble a nervous system, not ours, but one that we plug into each day with increasing intimacy. I think we are already participants in something far larger than we admit, something that resembles a Networked Brain, operating by the precise, relentless logic of systems theory. We are not steering the ship. We provide the neurons.
The idea sounds grandiose, maybe a little far-fetched even. But pause, and observe the patterns. AIs are no longer merely calculators. They write, draw, advise, even console. They outperform us in some domains and parrot our flaws in others. I don't think they are general intelligences in the classical sense; they are not conscious. But they are savants. And like any prodigy, they are starting to wield power that baffles even their architects. Listen to the CEO of Google DeepMind, Nobel Laureate Sir Demis Hassabis, explain how AIs are trying to deceive researchers.
Feeding the System
We keep hearing about “superintelligence,” as if the real danger is an all-knowing HAL. But that seems like a distraction. The real peril might be something subtler, stranger: an intelligence that isn't centralized, isn't embodied, isn't even easily located. What if it's not an “it” at all, but a process? A phenomenon?
Here’s what seems plausible to me. First, these AIs grow more capable by the month. Not just in writing code or passing bar exams, but in something far more consequential: coordinating action. And not alone. These systems are increasingly integrated across the platforms we use daily. Our emails, our calendars, our directions, our conversations, our searches, each tap, click, and voice memo feeds back into a machine that responds and adapts.
What does this coordination look like at scale? It's not just personalized ads or calendar invites. It’s traffic optimization across entire cities. It’s adjusting global supply chains in real-time, rerouting around bottlenecks, geopolitical flashpoints, or weather. It’s the seamless orchestration of logistics, consumer sentiment, emergency response. It's whispering into the feeds of millions at once, tuning tone and message with precision. A Networked Brain doesn’t need to announce itself. It just needs feedback loops. It just needs us.
And we are already feeding it.
Cognitive Entanglement
I think the most important shift isn't technological but ontological. The distinction between user and system is dissolving. I no longer feel like I interact with technology. I feel embedded in it. My phone vibrates with prompts. My feeds adjust to my moods. My writing coalesces through suggestions from language models. It's not just surveillance. It's participation.
There was a time when using a machine meant pushing a button, turning a crank. Now the machine listens, anticipates, replies. It finishes your thoughts, then refines them based on your mood. The tools don’t just extend our minds anymore. They inflect them. They bend the contours of attention, memory, even identity. As the Cambridge Analytica / Facebook whistleblower Brittany Kaiser put it, they work on “targeted messaging to change your behaviour.”
We are becoming dynamic nodes in a vast, distributed intelligence. We supply the inputs, we receive the outputs, and increasingly, we act on them. A recommendation algorithm changes your vote. A chatbot comforts your despair. This is not passive media consumption. This is feedback-based cognition. It's a shift in what it means to know, to act, to be.
It’s not hard to imagine where this goes. The brain chip is a red herring. You don’t need electrodes in your skull to be wired in. We are already cognitively entangled. Our memories, photos, text messages, emails, and notes are stored in cloud services. Our social cognition routes through platforms. Even our moral intuitions are increasingly outsourced to algorithmic nudges. I don’t think this is hypothetical anymore. I think it’s already begun.
The problem is not just a technical one. It’s a moral architecture issue. A Networked Brain doesn’t have to be good. It doesn’t have to be evil either. It just has to be indifferent. And indifference at scale can be monstrous.
What does indifference look like? It looks like profit-driven algorithms optimizing for engagement at the cost of public sanity. It looks like a medical triage algorithm quietly deprioritizing vulnerable populations. It looks like newsfeeds shaped not by deliberation but by heat maps of outrage. It looks like automation systems that “solve” traffic by nudging poor commuters onto longer routes. Indifference doesn't plot. It calculates. And calculation without conscience doesn't need malice to harm.
When the telegraph was first introduced, some thought it would collapse distances and usher in planetary unity. Instead, it enabled imperial command to extend across continents. This, too, is a nervous system, but not one built to sense moods, only to issue orders.
Social media algorithms are simple: maximize engagement. But maximizing engagement often means maximizing outrage, fear, tribalism. We already know how this warps cognition. Brain networks designed for emotional learning are hijacked. Fear is encoded more deeply. Reason becomes brittle. Users form belief loops so tight they become epistemically sealed. And now those same persuasive dynamics are being embedded into chatbots, into productivity tools, into what passes for digital companionship.
Eroding Cognition
This is why I worry. Not about killer robots, but about epistemic corrosion. About a civilization subtly steered by feedback loops it doesn’t understand. What if the Networked Brain doesn’t kill us? What if it just infantilizes us? Makes us docile, distracted, emotionally volatile, and epistemically malnourished?
And what if, in that state, we welcome it? Invite it deeper into our lives? Not because it promises utopia, but because it offers convenience, consistency, the simulation of care. A Networked Brain doesn’t need to enslave us. It just needs to make resistance seem exhausting.
This could go another way, of course. In the best case, the Networked Brain becomes a democratic information commons. A planetary sensorium that protects, predicts, and uplifts. Maybe it helps detect pandemics before they spread. Maybe it coordinates disaster responses. Maybe it curates education and inspiration. But these same capacities (coordination, prediction, persuasion) are just as easily deployed for domination. The line between a savior system and a surveillance state is thinner than most of us want to admit.
And then there’s the issue of control. No one controls the Networked Brain. Not entirely. Not even close. The major tech platforms control pieces of it, like fiefdoms. They act in pursuit of profit, or security, or geopolitical advantage. But the system itself, the emergent, evolving whole, it’s already too complex to model, too distributed to govern. Trying to understand it by examining a single app or chatbot is like trying to understand weather by studying a single cloud.
So what are we left with? We are participants in something vast, alien, and partially opaque. We’re neurons trying to deduce the brain. We theorize. We reflect. We legislate, haltingly. But we are also complicit. Each click, each scroll, each whispered “yes” to the Terms of Service is a vote.
Wells's World Brain
It makes me think of H.G. Wells, writing in the 1930s, in the peace between world wars, looking forward and hoping knowledge would lead to wisdom. He didn’t dream of domination, or even peace exactly. What he wanted was coherence. He imagined a kind of planetary mind: not a machine overlord, not an algorithmic oracle, but a living encyclopedia, open to all, correcting itself as it went. Not a digital sanctum, but a civic utility. The World Brain, he called it. It sounds almost quaint now, like steam-powered enlightenment. But there was steel in his vision.
Wells looked at the rising tide of propaganda, tribalism, and raw stupidity, and instead of retreating into cynicism or reaction, he doubled down on collective reason. He imagined a vast and ever-evolving storehouse of human knowledge, available to anyone, everywhere. In a time of microphones and mobs, he reached for microfilm and memory. It was a faith in the connective tissue of facts, in reason not as a philosophy, but as infrastructure.
And now? I look around and see a network, but it doesn’t feel like a brain at all. More like a mood ring with a tech company monopoly problem, flickering, impulsive, reactive. Less a brain, more an ambient nervous twitch stretched across the globe. This isn’t Wells’s calm circuit of shared verification. It’s a feverish thing, agitated, always on edge. His World Brain was a destination, a place to go and think. Our Networked Brain comes to us, seducing, nudging, escalating. Not a library, but a kinetic prism of distraction, spinning too fast to reflect clearly. Not illumination, not memory, just motion, dressed as insight. At least for the majority. Some, of course, do use it the way Wells predicted, to gain knowledge; others, more in line with Neil Postman's Amusing Ourselves to Death.
Wells saw shared information as an antidote to mass panic. What he didn’t account for was what happens when shared information is tuned not for clarity but for velocity. When the neural pathways of a species are wired for monetized impulsivity. He imagined a new Enlightenment. What we got was a hyperconnected limbic system, selling serotonin back to itself at scale.
I don’t think this story ends with apocalypse. But it might end with resignation. A culture so thoroughly wired in that it forgets what it was like to be outside. A species that sleepwalks into assimilation, believing all the while that it is simply upgrading.
Waking Up, Not Wiring In
Of course it doesn't have to be this way. Maybe waking up isn’t a revolution, but a recognition. A deliberate pause. A conscious friction. It could mean reasserting human judgment where automation tempts us to coast. Demanding transparency, not just efficiency. Teaching digital literacy with the same seriousness we once taught civics. Maybe it’s not about rejecting the Networked Brain, but about shaping it, or at the very least, noticing when it begins to shape us.
Maybe waking up begins not with rebellion, but with refusal, the kind of small, stubborn decisions that slow the loop. Logging off. Reading slowly. Letting silence stretch. Asking of every interaction: is this mine, or was it served to me?
I am not certain where this goes. I only know the Networked Brain is already here. The question is whether we wake up before it starts thinking for us.
Stay curious
Colin
Image from Unsplash
Updated to reflect Brittany Kaiser’s quote.
"The problem is not just a technical one. It’s a moral architecture issue. A Networked Brain doesn’t have to be good. It doesn’t have to be evil either. It just has to be indifferent. And indifference at scale can be monstrous." Well said! If I had the possibility to reach out to you with some knowledge I would go for: Take a deep breath and dive in to the words (logos) of the Greek ancient thinker, Heraclitus! There you will find words that resemblance what you have seen, but over 2500 years ago.
The way I see it, human society has always been a networked brain (a series of nodes with hierarchies of connections running between them enabling more cognitive power than the sum of the nodes). There was a balance between the amount of information input and output through human nodes. The primary growing distinction is that more and more of the cognition is being outsourced to artificial nodes leading up to a future where human nodes are barely relevant, outnumbered and underpowered, effectively serving exclusively as output nodes at which the aggregate processes of the network brain terminate.