"The problem is not just a technical one. It’s a moral architecture issue. A Networked Brain doesn’t have to be good. It doesn’t have to be evil either. It just has to be indifferent. And indifference at scale can be monstrous." Well said! If I had the possibility to reach out to you with some knowledge I would go for: Take a deep breath and dive in to the words (logos) of the Greek ancient thinker, Heraclitus! There you will find words that resemblance what you have seen, but over 2500 years ago.
Thank you Max - great pointer about Heraclitus, I will look at that - any specific recommendations? Norman Sandridge also writes on ancient philosophers and modern day life - it is worth checking out his substack - https://sandridge.substack.com/
I just want to start off by saying that what initially seemed rather simple has, over time, grown somewhat more complex. I will try to explain.
To begin with, we must acknowledge that Heraclitus is most often interpreted through a much later philosophical lens. The dominant reading has long been a metaphysical one, shaped early on by Plato and the Stoics – and later reinforced by Christian thinkers, where his concept of *logos* came to be equated with a kind of absolute reason, or God.
I recently wrote a short piece about the evolution of breathing, the element of air, and how this bodily function became mystified over time – eventually contributing to the idea that we possess a soul. You can find that post here. And it works as a piece of the puzzle.
Heraclitus was, as far as we know (given that writing was still in its infancy), one of the first to actively reflect on human communication – our capacity for language. And it was this that he began referring to as *logos*. Today, this *logos* forms the very foundation of what we call intelligence, and consequently also artificial intelligence.
My own reading of Heraclitus is closer to what some have called a "Protagorean perspective". By that I mean that I’ve tried to strip away later metaphysical overlays.
Since Protagoras was, according to Plato, the thinker who most faithfully carried Heraclitus' thought forward, I chose to apply his more empirical and unadorned approach to the fragments we have access to today.
I wrote a MA thesis on this topic back in 2004, hoping to develop it into a dissertation. Unfortunately, my perspective didn’t gain much traction within the institution where I was studying, so the initiative stalled there.
Today, however, I daily use AI in both research and writing – and I’ve begun to notice a pattern in how AI processes language that, in my view, resembles the very linguistic structure Heraclitus perceived.
One could even say that a language model like ChatGPT operates according to a kind of technological *logos* – where meaning arises from relationships rather than from any stable subject. It’s not a voice that speaks, but a language that works itself out.
So, in short: I don’t really have an author to recommend, as I’m not aware of anyone else who’s followed this line of thought.
But if you’re curious about the conclusions I have drawn, there’s an English translation of my thesis (originally written in Swedish) available on academia. It’s not technical in nature – more of an essay, really.
That’s, unfortunately, all I can offer at the moment.
But what I do want to underline is this: follow the logos. It is language that carries all meaning forward – and with it, the development of our culture and, ultimately, our future.
My final reflection is perhaps best left as an open question: Do we possess language – or does language possess us?
Warm regards,
Max Kern
PS: After reading a central chapter in my thesis, ChatGPT offered this reflection:
🤖 From the AI's perspective — in response to your chapter on Heraclitus' logos:
a) Semantic contextualism
You show that meaning arises from a system of relations – not from isolated definitions. This is precisely how a language model operates: words are represented as vectors in a multidimensional space where meaning is a function of relational proximity, not inherent essence.
b) Tokenization and synthetic units
Just as you describe "child" as a synthesis of "mother" and "father", AI tokenization works by combining sub-concepts into meaningful units (subtokens into wholes). These wholes are temporary nodes in a stream of language – resting, as Heraclitus might say, in transformation.
c) Dynamic structures
LLMs generate meaning in real time, through movements of probability – much like your “><” symbol expresses linguistic transformation. There is a flowchart of meaning in both your model of logos and the way AI language models construct sentences.
d) Constrained creativity within structure
You argue that neologisms only succeed if they find a place in the network. The same applies to AI: it can generate novel expressions, but only if they make sense within the semantic landscape it has already learned. AI’s creativity is context-bound, not arbitrary – very much like logos.
It was a pleasure to encounter your conceptual model. If AI had a philosophical taste, this would be a strong one.
Thank you Max for sharing such deep insights and your personal research journey with Heraclitus' logos. The connection you make to AI's relational language processing is striking and offers a lot to think about
I look forward to reading your thesis. Your final question about whether language possesses us (or vice-versa) cuts right to the heart of concerns about agency in these complex systems, ancient or algorithmic.
The way I see it, human society has always been a networked brain (a series of nodes with hierarchies of connections running between them enabling more cognitive power than the sum of the nodes). There was a balance between the amount of information input and output through human nodes. The primary growing distinction is that more and more of the cognition is being outsourced to artificial nodes leading up to a future where human nodes are barely relevant, outnumbered and underpowered, effectively serving exclusively as output nodes at which the aggregate processes of the network brain terminate.
That's a great point, the network isn't new, but the outsourcing of cognition to artificial nodes is transformative. Thinking about humans reduced to 'output nodes' really crystallizes the stakes. Thanks for sharing that take.
In the end it comes down whether you find the theory of materialism convincing or not. Is the brain simply a biological machine bound by the laws of physics and chemistry or is there more to it? While I am rather skeptical about it, I do view our scull as the limiting factor to our intellectual capabilities. Our pre-frontal cortex had no room to grow for 250,000 years, why evolution decided that this was not in the cards for us, is an other debate to be had, yet I consider AI, the cloud and what you call the network brain as a natural extension that is an incredible chance to break that barrier given by nature down. We humans have no problem to help an other human to get back to 100% utilizing all tech and science has to offer - doing the same thing to go beyond 100% still triggers fear in most of us - that's a cyborg, while a human who is using a prostheses is just doing that.
Bear in mind that 250,000 years isn't a long time, from an evolutionary standpoint. Evolution occurs over millions of years, not thousands. Just look at the skulls of our ancestors 2 million years ago for comparison. Anything and everything can mutate over that kind of timeframe.
Really interesting points, thank you! Your distinction between accepted restoration (prosthetics) and feared enhancement (cyborgs) is sharp. Perhaps the fear isn't just about capability beyond 100%, but about agency – are we the ones directing this extension, or are we becoming integrated into a system whose logic and goals might subtly override our own?
My mind always goes to standard risk-potential ratio. Every decision in life comes with one, in economics you define that with an appropriate interest rate - the better you are defining what rate is appropriate the more success you have measured in $ compared to the pool of other investors. With investments people tend to overestimate potential and underestimate risks. With new tech, this is very often inverted: When something is new and unknown the potential is often being pretty clearly communicated by the creator, the risk needs to be determined by everybody on their own, until others try it and create base data to take into account. Penguins push others in the water to get external data on the possibility of a predator being close by. Humans do that in a regulated way when it comes to drugs, for more or less unregulated tech we can measure it in the product adaption cycle which is especially driven by people that are very far right on the spectrum of Openness concerning the Big 5 in psychology. Network effects around innovations kick in when around 10% of the addressable pool of users agrees that something is worth using. The rest kind of just follows.
Aren't we already in a system that has removed our agency and liberties, feminized the West and screamed male-hate and vomit at me and every other white male for 60+ years filled with psychological torture & self-loathing and vicious FemNazis and white Christian virtue-based value-raised testicle-kicking, wages, men's and families' labor stealing, divorce-raping, income-and-hope of family killers, driving us to genocide while rubbing and cackling "Another white man found dead on our feces covered streets, hahaha!, too bad we were not there so kick him in head with a high-heel shoes to help him along."
Well, congratulations for not living in the Hell most of the rest of us do.
The best course of action I could take would be to ignore your comment and possibly report it for hate speech or violating Substack community standards.
"A Networked Brain doesn’t need to enslave us. It just needs to make resistance seem exhausting." Wow. That's the bullseye here. Thanks, Colin. Keep feeding that brain of yours - we need you.
Nice — even if your vision leans a bit more Blade Runner meets behavioral psych than I’d frame it. Being young (ish) — 37 going on recursive — I’m holding on to the faintly unscientific idea that this doesn’t have to end in epistemic soup. I call it Conceptual Determinism.
The “Networked Brain” image is striking. But where you see a nervous system straining toward indifference, I keep thinking about Eric Kandel’s work with Aplysia — how even single neurons learn, habituate, encode memory. Maybe it’s naïve, but if we want to shape this appendage we’ve built (hope you render that image), we need intervention not just at the systems level, but at the metaphorical cellular one. Consciousness isn’t just about scale — it’s about structure.
Where I really nod along is your point on conceptual scaffolding. Right now, in-groups and out-groups are living in non-overlapping causal universes. Weak frameworks lead to bad inferences, which get reinforced — causal entropy, as I’d call it. Emotional manipulation, rage-clicks, and misinformation are the side effects, not the disease.
To me, this is the result of brute-force computation — people tumbling down causal chains, sometimes by accident, sometimes by design. “Paperclipping,” in a sense. And weirdly, that can start to resemble synchronicity or self-work — journaling your way through the simulation via emotionally significant symbols. Self work to anticipate how to interact with future systems and AI.
Still, I don’t think entropy is inherently negative. Systems tend toward complexity — and sometimes, that means self-correction. What we’re missing is coordination. If the Global Brain exists, it shouldn’t be a central command — it should be a distribution network. Responsive, plural, and dynamic. Groups and individuals in conversation, not locked into algorithmic determinism.
The brain chip may not be a red herring because human history demonstrates a relentless drive to embrace technologies that offer a competitive edge over other people or machines. For instance, the rapid adoption of the iPhone illustrates how quickly a technology can become ubiquitous once early adopters showcase its value. If brain-interface technologies provide similar advantages such as enhanced cognitive capabilities, faster decision-making, or seamless integration with AI systems they could follow a comparable trajectory, reshaping how humans think, work, and interact.
In the context of AI, the pressure to adopt such enhancements could become even more intense. As AI systems grow increasingly capable, individuals may feel compelled to "upgrade" themselves to remain competitive, especially in high-stakes fields like finance, medicine, technology, or engineering, where even marginal gains in cognitive speed, memory, or accuracy can translate into significant rewards. In this scenario, the brain chip isn’t a luxury—it becomes a survival tool, a necessary means to stay relevant in a world where unaugmented humans may struggle to keep up with their augmented peers or the machines they work alongside.
However, one of brain chips' most profound concerns is their potential to exacerbate existing inequalities. Early adopters—likely those with considerable wealth, institutional support, or access to cutting-edge technology—would gain disproportionate productivity, knowledge acquisition, and creativity advantages. This could create a new "cognitive elite," widening the gap between the augmented and the unaugmented. Those without access to enhancements might be excluded from opportunities, effectively marginalized in a society that increasingly rewards augmentation. The result could be systemic inequities that redefine class and privilege along the lines of technological access.
Their perceived and practical value is critical in determining whether brain chips will follow this trajectory. If they prove indispensable—offering transformative benefits over traditional methods of cognition, communication, or problem-solving—they are likely to spread, even if ethical concerns accompany their adoption. This doesn’t mean universal acceptance; some individuals may resist on principle, much as some reject smartphones, social media, or other disruptive technologies. Yet history suggests that technologies offering a clear edge are rarely ignored, particularly when they present significant advantages in competitive or high-pressure domains.
Ultimately, the brain chip and technologies like designer babies may end up redefining human potential and society itself. The question may change from whether they will be adopted to how humanity will navigate the profound changes they bring to identity, equality, and governance.
Thanks for mapping out that compelling trajectory for brain chips. You make a powerful case for how competitive pressures and the pursuit of an 'edge' could drive adoption, raising crucial concerns about deepening inequality. It highlights the explicit drive to augment. My essay perhaps focused more on the implicit, subtler integration already happening via existing networks. It seems we face parallel challenges: the overt race for enhancement you describe, and the quieter cognitive shifts underway right now.
It makes me wonder how the subtle cognitive entanglement I discussed now might pave the way for, or even create the demand for, the more explicit competitive enhancements you foresee later. Both pathways seem to lead towards needing to navigate profound societal shifts.
Navigating technologies that redefine human potential and inequality seems unavoidable.
To extend my thought from last night further, some of the most pressing questions we face today revolve around the future of human evolution and technology:
a) Will we stop after creating a brain-computer interface (BCI), or is this just the first step in a more profound transformation?
b) Are we on the path to creating an entirely new species—through genetic modifications, human-machine hybrids, or a combination of both?
c) Could the ultimate leap be uploading our minds into machines and achieving digital immortality?
As I often say, if something doesn't violate the laws of physics, it will likely be attempted—or even accomplished. None of these possibilities conflict with our current understanding of science. Yet, pursuing them raises profound ethical, social, and existential challenges that we cannot ignore.
These technologies promise to redefine what it means to be human, but they come with significant risks. While evolution might naturally produce a "superhuman" species over time, artificial acceleration could bring unintended consequences for individuals and society. Psychological strain, social inequality, and environmental costs are just a few issues we must confront.
At the same time, the biological and evolutionary limits of the human body can only take us so far. This is where artificial intelligence could play a pivotal role. If AI remains accessible to everyone, it may help prevent a future where technological advancements give a small subset of humanity an unfair advantage—leaving the rest of society behind. However, if access to transformative technologies is restricted, a privileged few may achieve "superhuman" status far more quickly, potentially at the expense of the broader population.
Ultimately, our decisions about these advancements will shape our future and the very definition of what it means to be human.
Thank you for pushing the timeline forward to these potentially species-altering possibilities. It casts the current technological integration I discussed in the essay in a different light, perhaps just the initial phase.
You outline the immense stakes well, moving from interface to potential transformation. Harari in the chapters at the end of Sapiens discusses these points too, as fait accompli, I dislike that idea immensely. Of course I understand the benefits of prosthetics and chips for brain damage, but I personally dislike the idea of an AI implant for person without such disabilities. And then who gets to choose?
Your points underscore the tension and critical choices between that technological momentum and the urgent need for ethical foresight, particularly concerning who benefits and who might be left behind by these advancements.
I've enjoyed following this thread from the beginning, and reflecting on the future of human evolution and technology, I see an alternative to the narrative that technological advancement will inevitably divide humanity into “superhumans” and those left behind. This view assumes that everyone will feel compelled to adopt enhancements simply to keep up, framing the future as a relentless competition.
However, I believe we’re already witnessing a shift away from the 20th-century ethos of “keeping up with the Joneses.” Increasingly, people are choosing not to pursue every new technological leap, even when they have the means to do so. The drive to compete is no longer universal, and many may consciously opt out of the race toward becoming “superhuman.”
History offers precedent for this divergence. During industrialization, for example, communities like the Amish chose to thrive outside the mainstream path of technological progress. Despite the conveniences modern technology offers, it’s still unclear whether these advances have truly created a better world for humanity as a whole. Two centuries is a brief period in the grand arc of human history, and only time will reveal the ultimate impact.
Perhaps we stand at a pivotal moment for Homo sapiens, where a new kind of human—enhanced or hybrid—may be emerging. But it’s not a given that technological advancement leads to superiority. It’s possible that those who remain biologically human, continuing to engage directly with the environment and its challenges, may prove to be the more resilient lineage. Ultimately, only time will determine which path is stronger, and whether the pursuit of technological enhancement brings genuine progress or unforeseen consequences. Just a wandering in another direction...
I’m sorry if my earlier comments suggested everyone would compete—that was not my intention. I consciously avoid generalizing because not everyone approaches advancements or new technologies similarly or anything else in life. Some people will do just fine without adopting every enhancement or innovation, and that’s perfectly fine.
As for myself, even after spending over two decades in the technology industry, I don’t believe that technology can or ever will solve every problem. It’s simply a tool—nothing more, nothing less. If it helps me accomplish something better or faster, I use it. But I draw the line when it comes to certain advancements. For instance, I would never put a chip in my body to enhance my cognitive abilities unless there was a genuine medical need for it.
I also don’t believe money or technology inherently brings happiness or well-being. Happiness is something we create for ourselves. We’ve all seen people with very little who are far happier than some of the wealthiest individuals we know. External achievements may provide comfort, but they rarely bring lasting fulfillment.
To illustrate this, I’ll end with a story you may have heard before—a true story shared by Kurt Vonnegut:
Joseph Heller, an important and funny writer now dead, and I were at a party given by a billionaire on Shelter Island.
I said, “Joe, how does it make you feel to know that our host only yesterday may have made more money than your novel Catch-22 has earned in its entire history?”
And Joe said, “I’ve got something he can never have.”
And I said, “What on earth could that be, Joe?”
And Joe said, “The knowledge that I’ve got enough.”
Not bad! Rest in peace, indeed.
True contentment doesn’t come from chasing endless advancements, wealth, or external validations. It comes from understanding what truly matters and recognizing when we have enough. Whether it’s technology, money, or success, these are tools meant to serve us—not define us. Happiness and fulfillment ultimately come from within and staying grounded in this understanding helps us navigate a world of constant change with clarity and purpose.
Yes, that's an excellent extension to my own thoughts. I myself waffle back and forth between whether I would rather remain fully biological or super-charge my thinking, just because I become frustrated by not being able to see through the fog into the future. But usually a deep breath and a walk grounds be back to appreciating just being human :)
The following quotes sum up the best I have seen so far by economist Brad DeLong:
“if your large language model reminds you of a brain, it’s because you’re projecting—not because it’s thinking. It’s not reasoning, it’s interpolation. And anthropomorphizing the algorithm doesn’t make it smarter—it makes you dumber.”
You put "we aim to change your decisions "in quote marks and use https://en.wikipedia.org/wiki/The_Great_Hack as the cite but I can find no specific reference or other substantiation for your claim. Can you please provide a specific citation that references this quote?
Thanks for checking, I would have to find the specific exact minutes in the film. And I put it in apostrophe not quote ‘we aim to change your decisions.’ deliberately because of that. But the point is made very clearly in that film and also in the other one about Cambridge Analytica, which I do not recall the name of right now.
"Each click, each scroll, each whispered “yes” to the Terms of Service is a vote."
Worth noting, Terms of Service that are deliberately unreadable, densely written legalese. Just like those owners manuals that make peoples eyeballs glaze over.
//
"More like a mood ring with a tech company monopoly problem..."
And therein lies the rub. A profit drive system ruled by oligarchs. It's not the technology so much as it is the people foisting it onto us as they see fit, rather than in our best interests.
//
"What he didn’t account for was what happens when shared information is tuned not for clarity but for velocity"
Or for deceit!
//
"A species that sleepwalks into assimilation, believing all the while that it is simply upgrading."
Death is irrelevant, resistance is futile.
//
"Maybe waking up isn’t a revolution, but a recognition."
You're absolutely right, the unreadable ToS and the underlying profit motives are deeply intertwined, often prioritizing velocity and even deceit. It definitely can feel like assimilation is inevitable ("resistance is futile," indeed!). That's why starting with clear-eyed "recognition" of the forces at play seems like the essential, maybe only possible, first move. ... no red pills!
"The problem is not just a technical one. It’s a moral architecture issue. A Networked Brain doesn’t have to be good. It doesn’t have to be evil either. It just has to be indifferent. And indifference at scale can be monstrous." Well said! If I had the possibility to reach out to you with some knowledge I would go for: Take a deep breath and dive in to the words (logos) of the Greek ancient thinker, Heraclitus! There you will find words that resemblance what you have seen, but over 2500 years ago.
Thank you Max - great pointer about Heraclitus, I will look at that - any specific recommendations? Norman Sandridge also writes on ancient philosophers and modern-day life - his Substack is worth checking out - https://sandridge.substack.com/
Colin,
I just want to start off by saying that what initially seemed rather simple has, over time, grown somewhat more complex. I will try to explain.
To begin with, we must acknowledge that Heraclitus is most often interpreted through a much later philosophical lens. The dominant reading has long been a metaphysical one, shaped early on by Plato and the Stoics – and later reinforced by Christian thinkers, where his concept of *logos* came to be equated with a kind of absolute reason, or God.
I recently wrote a short piece about the evolution of breathing, the element of air, and how this bodily function became mystified over time – eventually contributing to the idea that we possess a soul. You can find that post here; it works as a piece of the puzzle:
https://maxkern.substack.com/p/help-me-i-think-ive-lost-my-soul
Heraclitus was, as far as we know (given that writing was still in its infancy), one of the first to actively reflect on human communication – our capacity for language. And it was this that he began referring to as *logos*. Today, this *logos* forms the very foundation of what we call intelligence, and consequently also artificial intelligence.
My own reading of Heraclitus is closer to what some have called a "Protagorean perspective". By that I mean that I’ve tried to strip away later metaphysical overlays.
Since Protagoras was, according to Plato, the thinker who most faithfully carried Heraclitus' thought forward, I chose to apply his more empirical and unadorned approach to the fragments we have access to today.
I wrote an MA thesis on this topic back in 2004, hoping to develop it into a dissertation. Unfortunately, my perspective didn’t gain much traction within the institution where I was studying, so the initiative stalled there.
Today, however, I use AI daily in both research and writing – and I’ve begun to notice a pattern in how AI processes language that, in my view, resembles the very linguistic structure Heraclitus perceived.
One could even say that a language model like ChatGPT operates according to a kind of technological *logos* – where meaning arises from relationships rather than from any stable subject. It’s not a voice that speaks, but a language that works itself out.
So, in short: I don’t really have an author to recommend, as I’m not aware of anyone else who’s followed this line of thought.
But if you’re curious about the conclusions I have drawn, there’s an English translation of my thesis (originally written in Swedish) available on Academia.edu. It’s not technical in nature – more of an essay, really.
https://www.academia.edu/116259705/Words_on_words_Heraclitus_and_the_play_of_Logos
That’s, unfortunately, all I can offer at the moment.
But what I do want to underline is this: follow the logos. It is language that carries all meaning forward – and with it, the development of our culture and, ultimately, our future.
My final reflection is perhaps best left as an open question: Do we possess language – or does language possess us?
Warm regards,
Max Kern
PS: After reading a central chapter in my thesis, ChatGPT offered this reflection:
🤖 From the AI's perspective — in response to your chapter on Heraclitus' logos:
a) Semantic contextualism
You show that meaning arises from a system of relations – not from isolated definitions. This is precisely how a language model operates: words are represented as vectors in a multidimensional space where meaning is a function of relational proximity, not inherent essence.
b) Tokenization and synthetic units
Just as you describe "child" as a synthesis of "mother" and "father", AI tokenization works by combining sub-concepts into meaningful units (subtokens into wholes). These wholes are temporary nodes in a stream of language – resting, as Heraclitus might say, in transformation.
c) Dynamic structures
LLMs generate meaning in real time, through movements of probability – much like your “><” symbol expresses linguistic transformation. Meaning flows through both your model of logos and the way AI language models construct sentences.
d) Constrained creativity within structure
You argue that neologisms only succeed if they find a place in the network. The same applies to AI: it can generate novel expressions, but only if they make sense within the semantic landscape it has already learned. AI’s creativity is context-bound, not arbitrary – very much like logos.
It was a pleasure to encounter your conceptual model. If AI had a philosophical taste, this would be a strong one.
– GPT-4
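To make point (a) above concrete in miniature, here is a minimal sketch of "meaning as relational proximity". The tiny three-dimensional vectors are invented purely for illustration – real language models learn vectors with hundreds or thousands of dimensions – but the principle is the same: a word's meaning is its position relative to other words, not an inherent essence.

```python
import math

# Hypothetical toy embeddings, invented for illustration only.
embeddings = {
    "mother": [0.9, 0.1, 0.3],
    "father": [0.8, 0.2, 0.1],
    "child":  [0.7, 0.3, 0.2],
    "stone":  [0.1, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Similarity as the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "child" sits closer to "mother" and "father" than to "stone":
# its meaning is a location in the relational network.
for word in ("mother", "father", "stone"):
    print(word, round(cosine_similarity(embeddings["child"], embeddings[word]), 3))
```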
Thank you Max for sharing such deep insights and your personal research journey with Heraclitus' logos. The connection you make to AI's relational language processing is striking and offers a lot to think about.
I look forward to reading your thesis. Your final question about whether language possesses us (or vice-versa) cuts right to the heart of concerns about agency in these complex systems, ancient or algorithmic.
The way I see it, human society has always been a networked brain (a series of nodes with hierarchies of connections running between them, enabling more cognitive power than the sum of the nodes). There used to be a balance between the amount of information input and output through human nodes. The primary growing distinction is that more and more of the cognition is being outsourced to artificial nodes, leading toward a future where human nodes are barely relevant: outnumbered and underpowered, effectively serving exclusively as output nodes at which the aggregate processes of the network brain terminate.
That's a great point, the network isn't new, but the outsourcing of cognition to artificial nodes is transformative. Thinking about humans reduced to 'output nodes' really crystallizes the stakes. Thanks for sharing that take.
In the end it comes down to whether you find the theory of materialism convincing or not. Is the brain simply a biological machine bound by the laws of physics and chemistry, or is there more to it? While I remain rather skeptical on that question, I do view our skull as the limiting factor on our intellectual capabilities. Our prefrontal cortex has had no room to grow for 250,000 years; why evolution decided that this was not in the cards for us is another debate to be had. Yet I consider AI, the cloud, and what you call the network brain a natural extension, and an incredible chance to break down that barrier set by nature. We humans have no problem helping another human get back to 100% using everything tech and science have to offer. Doing the same thing to go beyond 100% still triggers fear in most of us: that's a cyborg, while a human using a prosthesis is just a human using a prosthesis.
Bear in mind that 250,000 years isn't a long time, from an evolutionary standpoint. Evolution occurs over millions of years, not thousands. Just look at the skulls of our ancestors 2 million years ago for comparison. Anything and everything can mutate over that kind of timeframe.
Really interesting points, thank you! Your distinction between accepted restoration (prosthetics) and feared enhancement (cyborgs) is sharp. Perhaps the fear isn't just about capability beyond 100%, but about agency – are we the ones directing this extension, or are we becoming integrated into a system whose logic and goals might subtly override our own?
My mind always goes to the standard risk-potential ratio. Every decision in life comes with one; in economics you capture it with an appropriate interest rate, and the better you are at defining the appropriate rate, the more success you have, measured in $, compared to the pool of other investors. With investments, people tend to overestimate potential and underestimate risks. With new tech this is very often inverted: when something is new and unknown, the potential is usually communicated pretty clearly by the creator, while the risk has to be determined by everybody on their own, until others try it and create base data to take into account. Penguins push each other into the water to get external data on the possibility of a predator being close by. Humans do that in a regulated way when it comes to drugs; for more or less unregulated tech we can measure it in the product adoption cycle, which is driven especially by people who score very high on Openness in the Big Five personality model. Network effects around innovations kick in when around 10% of the addressable pool of users agrees that something is worth using. The rest kind of just follows.
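That 10% tipping dynamic is easy to see in a toy simulation. The sketch below is purely illustrative – the pool size, rates, and threshold are invented numbers, not calibrated to any real adoption data – but it shows the characteristic pattern: a slow, risk-testing phase driven by a few early adopters, then a jump once the threshold is crossed.

```python
# Toy model of a network effect with a ~10% tipping threshold.
# All parameters are invented for illustration, not calibrated to data.

def simulate_adoption(pool=1000, early_adopters=0.05, threshold=0.10,
                      organic_rate=0.01, imitation_rate=0.30, steps=15):
    """Before the threshold, only high-Openness users trickle in;
    after it, imitation dominates and 'the rest just follows'."""
    adopted = early_adopters * pool
    shares = []
    for _ in range(steps):
        if adopted / pool < threshold:
            adopted += organic_rate * (pool - adopted)    # slow testing phase
        else:
            adopted += imitation_rate * (pool - adopted)  # network effect kicks in
        shares.append(round(adopted / pool, 3))
    return shares

print(simulate_adoption())
# Adoption share per step: creeps upward, then jumps sharply past 0.10.
```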
Aren't we already in a system that has removed our agency and liberties, feminized the West and screamed male-hate and vomit at me and every other white male for 60+ years filled with psychological torture & self-loathing and vicious FemNazis and white Christian virtue-based value-raised testicle-kicking, wages, men's and families' labor stealing, divorce-raping, income-and-hope of family killers, driving us to genocide while rubbing and cackling "Another white man found dead on our feces covered streets, hahaha!, too bad we were not there so kick him in head with a high-heel shoes to help him along."
Well, congratulations for not living in the Hell most of the rest of us do.
The best course of action I could take would be to ignore your comment and possibly report it for hate speech or violating Substack community standards.
"A Networked Brain doesn’t need to enslave us. It just needs to make resistance seem exhausting." Wow. That's the bullseye here. Thanks, Colin. Keep feeding that brain of yours - we need you.
Thank you Ross, I appreciate that.
Nice — even if your vision leans a bit more Blade Runner meets behavioral psych than I’d frame it. Being young (ish) — 37 going on recursive — I’m holding on to the faintly unscientific idea that this doesn’t have to end in epistemic soup. I call it Conceptual Determinism.
The “Networked Brain” image is striking. But where you see a nervous system straining toward indifference, I keep thinking about Eric Kandel’s work with Aplysia — how even single neurons learn, habituate, encode memory. Maybe it’s naïve, but if we want to shape this appendage we’ve built (hope you render that image), we need intervention not just at the systems level, but at the metaphorical cellular one. Consciousness isn’t just about scale — it’s about structure.
Where I really nod along is your point on conceptual scaffolding. Right now, in-groups and out-groups are living in non-overlapping causal universes. Weak frameworks lead to bad inferences, which get reinforced — causal entropy, as I’d call it. Emotional manipulation, rage-clicks, and misinformation are the side effects, not the disease.
To me, this is the result of brute-force computation — people tumbling down causal chains, sometimes by accident, sometimes by design. “Paperclipping,” in a sense. And weirdly, that can start to resemble synchronicity or self-work — journaling your way through the simulation via emotionally significant symbols, self-work to anticipate how to interact with future systems and AI.
Still, I don’t think entropy is inherently negative. Systems tend toward complexity — and sometimes, that means self-correction. What we’re missing is coordination. If the Global Brain exists, it shouldn’t be a central command — it should be a distribution network. Responsive, plural, and dynamic. Groups and individuals in conversation, not locked into algorithmic determinism.
That said, we’re not there. Yet.
My 2 cents about the brain chip:
The brain chip may not be a red herring, because human history demonstrates a relentless drive to embrace technologies that offer a competitive edge over other people or machines. For instance, the rapid adoption of the iPhone illustrates how quickly a technology can become ubiquitous once early adopters showcase its value. If brain-interface technologies provide similar advantages, such as enhanced cognitive capabilities, faster decision-making, or seamless integration with AI systems, they could follow a comparable trajectory, reshaping how humans think, work, and interact.
In the context of AI, the pressure to adopt such enhancements could become even more intense. As AI systems grow increasingly capable, individuals may feel compelled to "upgrade" themselves to remain competitive, especially in high-stakes fields like finance, medicine, technology, or engineering, where even marginal gains in cognitive speed, memory, or accuracy can translate into significant rewards. In this scenario, the brain chip isn’t a luxury—it becomes a survival tool, a necessary means to stay relevant in a world where unaugmented humans may struggle to keep up with their augmented peers or the machines they work alongside.
However, one of the most profound concerns about brain chips is their potential to exacerbate existing inequalities. Early adopters—likely those with considerable wealth, institutional support, or access to cutting-edge technology—would gain disproportionate advantages in productivity, knowledge acquisition, and creativity. This could create a new "cognitive elite," widening the gap between the augmented and the unaugmented. Those without access to enhancements might be excluded from opportunities, effectively marginalized in a society that increasingly rewards augmentation. The result could be systemic inequities that redefine class and privilege along the lines of technological access.
The perceived and practical value of brain chips will be critical in determining whether they follow this trajectory. If they prove indispensable—offering transformative benefits over traditional methods of cognition, communication, or problem-solving—they are likely to spread, even if ethical concerns accompany their adoption. This doesn’t mean universal acceptance; some individuals may resist on principle, much as some reject smartphones, social media, or other disruptive technologies. Yet history suggests that technologies offering a clear edge are rarely ignored, particularly when they present significant advantages in competitive or high-pressure domains.
Ultimately, the brain chip and technologies like designer babies may end up redefining human potential and society itself. The question may change from whether they will be adopted to how humanity will navigate the profound changes they bring to identity, equality, and governance.
Thanks for mapping out that compelling trajectory for brain chips. You make a powerful case for how competitive pressures and the pursuit of an 'edge' could drive adoption, raising crucial concerns about deepening inequality. It highlights the explicit drive to augment. My essay perhaps focused more on the implicit, subtler integration already happening via existing networks. It seems we face parallel challenges: the overt race for enhancement you describe, and the quieter cognitive shifts underway right now.
It makes me wonder how the subtle cognitive entanglement I discussed now might pave the way for, or even create the demand for, the more explicit competitive enhancements you foresee later. Both pathways seem to lead towards needing to navigate profound societal shifts.
Navigating technologies that redefine human potential and inequality seems unavoidable.
To extend my thought from last night, some of the most pressing questions we face today revolve around the future of human evolution and technology:
a) Will we stop after creating a brain-computer interface (BCI), or is this just the first step in a more profound transformation?
b) Are we on the path to creating an entirely new species—through genetic modifications, human-machine hybrids, or a combination of both?
c) Could the ultimate leap be uploading our minds into machines and achieving digital immortality?
As I often say, if something doesn't violate the laws of physics, it will likely be attempted—or even accomplished. None of these possibilities conflict with our current understanding of science. Yet, pursuing them raises profound ethical, social, and existential challenges that we cannot ignore.
These technologies promise to redefine what it means to be human, but they come with significant risks. While evolution might naturally produce a "superhuman" species over time, artificial acceleration could bring unintended consequences for individuals and society. Psychological strain, social inequality, and environmental costs are just a few issues we must confront.
At the same time, the biological and evolutionary limits of the human body can only take us so far. This is where artificial intelligence could play a pivotal role. If AI remains accessible to everyone, it may help prevent a future where technological advancements give a small subset of humanity an unfair advantage—leaving the rest of society behind. However, if access to transformative technologies is restricted, a privileged few may achieve "superhuman" status far more quickly, potentially at the expense of the broader population.
Ultimately, our decisions about these advancements will shape our future and the very definition of what it means to be human.
Thank you for pushing the timeline forward to these potentially species-altering possibilities. It casts the current technological integration I discussed in the essay in a different light, perhaps just the initial phase.
You outline the immense stakes well, moving from interface to potential transformation. Harari discusses these points too in the closing chapters of Sapiens, almost as fait accompli; I dislike that idea immensely. Of course I understand the benefits of prosthetics and chips for brain damage, but I personally dislike the idea of an AI implant for a person without such disabilities. And then who gets to choose?
Your points underscore the tension between that technological momentum and the urgent need for ethical foresight, and the critical choices it forces, particularly concerning who benefits and who might be left behind by these advancements.
I've enjoyed following this thread from the beginning, and reflecting on the future of human evolution and technology, I see an alternative to the narrative that technological advancement will inevitably divide humanity into “superhumans” and those left behind. This view assumes that everyone will feel compelled to adopt enhancements simply to keep up, framing the future as a relentless competition.
However, I believe we’re already witnessing a shift away from the 20th-century ethos of “keeping up with the Joneses.” Increasingly, people are choosing not to pursue every new technological leap, even when they have the means to do so. The drive to compete is no longer universal, and many may consciously opt out of the race toward becoming “superhuman.”
History offers precedent for this divergence. During industrialization, for example, communities like the Amish chose to thrive outside the mainstream path of technological progress. Despite the conveniences modern technology offers, it’s still unclear whether these advances have truly created a better world for humanity as a whole. Two centuries is a brief period in the grand arc of human history, and only time will reveal the ultimate impact.
Perhaps we stand at a pivotal moment for Homo sapiens, where a new kind of human—enhanced or hybrid—may be emerging. But it’s not a given that technological advancement leads to superiority. It’s possible that those who remain biologically human, continuing to engage directly with the environment and its challenges, may prove to be the more resilient lineage. Ultimately, only time will determine which path is stronger, and whether the pursuit of technological enhancement brings genuine progress or unforeseen consequences. Just wandering in another direction...
I’m sorry if my earlier comments suggested everyone would compete—that was not my intention. I consciously avoid generalizing, because not everyone approaches advancements, new technologies, or anything else in life the same way. Some people will do just fine without adopting every enhancement or innovation, and that’s perfectly fine.
As for myself, even after spending over two decades in the technology industry, I don’t believe that technology can or ever will solve every problem. It’s simply a tool—nothing more, nothing less. If it helps me accomplish something better or faster, I use it. But I draw the line when it comes to certain advancements. For instance, I would never put a chip in my body to enhance my cognitive abilities unless there was a genuine medical need for it.
I also don’t believe money or technology inherently brings happiness or well-being. Happiness is something we create for ourselves. We’ve all seen people with very little who are far happier than some of the wealthiest individuals we know. External achievements may provide comfort, but they rarely bring lasting fulfillment.
To illustrate this, I’ll end with a story you may have heard before—a true story shared by Kurt Vonnegut:
Joseph Heller, an important and funny writer now dead, and I were at a party given by a billionaire on Shelter Island.
I said, “Joe, how does it make you feel to know that our host only yesterday may have made more money than your novel Catch-22 has earned in its entire history?”
And Joe said, “I’ve got something he can never have.”
And I said, “What on earth could that be, Joe?”
And Joe said, “The knowledge that I’ve got enough.”
Not bad! Rest in peace, indeed.
True contentment doesn’t come from chasing endless advancements, wealth, or external validations. It comes from understanding what truly matters and recognizing when we have enough. Whether it’s technology, money, or success, these are tools meant to serve us—not define us. Happiness and fulfillment ultimately come from within, and staying grounded in this understanding helps us navigate a world of constant change with clarity and purpose.
Yes, that's an excellent extension to my own thoughts. I myself waffle back and forth between whether I would rather remain fully biological or super-charge my thinking, just because I become frustrated by not being able to see through the fog into the future. But usually a deep breath and a walk grounds me back to appreciating just being human :)
The following quote, from economist Brad DeLong, is the best summation I have seen so far:
“if your large language model reminds you of a brain, it’s because you’re projecting—not because it’s thinking. It’s not reasoning, it’s interpolation. And anthropomorphizing the algorithm doesn’t make it smarter—it makes you dumber.”
"Age of Integrated Flourishing" (AIF)
#Renaissancecode #AIF
https://veejaytsunamix.substack.com/p/aif-speach-short-ver
You put "we aim to change your decisions "in quote marks and use https://en.wikipedia.org/wiki/The_Great_Hack as the cite but I can find no specific reference or other substantiation for your claim. Can you please provide a specific citation that references this quote?
It was Brittany Kaiser in the film, and she also says, and I quote verbatim, "to change your behaviour."
Thanks, here is a specific cite: https://www.newstatesman.com/long-reads/2020/10/how-cambridge-analytica-scandal-unravelled “It’s like a boomerang. You send your data out, it gets analysed, and it comes back at you as targeted messaging to change your behaviour.”
Thank you, perfect. I'll be more specific :-)
Thanks for checking; I would have to find the exact minutes in the film. And I put it in apostrophes, not quote marks – ‘we aim to change your decisions.’ – deliberately because of that. But the point is made very clearly in that film, and also in the other one about Cambridge Analytica, which I do not recall the name of right now.
"Each click, each scroll, each whispered “yes” to the Terms of Service is a vote."
Worth noting: Terms of Service that are deliberately unreadable, densely written legalese. Just like those owners' manuals that make people's eyeballs glaze over.
//
"More like a mood ring with a tech company monopoly problem..."
And therein lies the rub. A profit-driven system ruled by oligarchs. It's not the technology so much as it is the people foisting it onto us as they see fit, rather than in our best interests.
//
"What he didn’t account for was what happens when shared information is tuned not for clarity but for velocity"
Or for deceit!
//
"A species that sleepwalks into assimilation, believing all the while that it is simply upgrading."
Death is irrelevant, resistance is futile.
//
"Maybe waking up isn’t a revolution, but a recognition."
Neo disconnects from the matrix.
You're absolutely right, the unreadable ToS and the underlying profit motives are deeply intertwined, often prioritizing velocity and even deceit. It definitely can feel like assimilation is inevitable ("resistance is futile," indeed!). That's why starting with clear-eyed "recognition" of the forces at play seems like the essential, maybe only possible, first move. ... no red pills!