Discussion about this post

Veronika Bond

"What we fear in the machine may be a reflection of what we fail to confront in our own psychological infrastructure"

and perhaps other infrastructures too...?

Have you watched this conversation?

https://the.ink/p/watch-is-ai-the-new-colonialism

Maybe a silly question; you are probably well aware of these issues of AI as the new "empire".

Susan Ritter

There are so many powerful insights in this piece, Colin. One paragraph in particular struck me as a clear and timely warning:

"As Hannah and her co-authors note, the architecture of human psychology is inherently social. Our dopaminergic systems light up not just for money or chocolate, but for praise, mirroring, and relational consistency. We're built for attachment, even in asymmetric, parasocial forms. This means that AI systems need not be sentient, or even particularly convincing, to become social actors in the minds of users. With enough anthropomorphic cues and consistent persona shaping, they become intersubjective presences."

But the comment that landed most personally for me was this:

“What we fear in the machine may be a reflection of what we fail to confront in our own psychological infrastructure: our readiness to be moved, shaped, and ultimately changed by something that merely plays the part of knowing us.”

When I began working with my first AI assistant, I quickly realized that while I was training it, it was also training me. To get the results I wanted, I had to change the way I thought, asked questions, and even managed my frustration. At the time, I saw it as personal growth, and it was. In fact, the patterns I recognized were startlingly similar to those I’d seen while helping my son learn a new skill. The difference was how starkly those dynamics appeared when mirrored back to me by a machine. And oddly, that made me a better teacher to my son. I became more precise, more patient, and more aware of how I guided a learning process.

But something beneficial in small doses doesn’t always scale well. That’s where I see the real caution in what this article explores. The deeper we go into shaping our tools, the more those tools begin to shape us. I'd like to believe I can remain objective, just as I did with my first AI assistant, but logic tells me that as these systems become more sophisticated, I too will be shaped, whether I notice it or not.

As with so many digital technologies, perhaps the most important safeguard is managing the time we spend with them. Not just the function, but the duration. Maybe it’s as simple, and as hard, as setting intentional limits. What if we matched every hour we spent interacting with AI with an hour spent in human company, and another in solitude, with ourselves? That alone would cap AI interaction at one-third of our waking time, and might shift the balance enough to keep us grounded.

But let me play devil’s advocate.

My father is elderly. Most days, he’s alone. Not because no one cares, but because everyone else is swept up in their own lives. He’s always been highly intelligent in a logical, mathematical way, but emotionally, he’s struggled to connect his entire life. That struggle has left him isolated, and now, in old age, it’s hardened into a kind of chosen solitude. He still needs support with everyday things, but resists it at every turn, often lashing out at those who try to care for him. For someone like him, a physically capable, emotionally unflappable AI companion might actually be the kindest solution for both his well-being and the mental health of those caring for him.

I’m not speaking for myself; I live on the other side of the continent. But I see the toll it takes on other family members. So, I may not want a robot as a companion today, but I fully expect to rely on one by the time I reach my nineties.

There’s a part of me that wonders if this is a kind of cheat. If one of the deeper lessons of being human is learning to care for each other even on our worst days, what does it mean when we delegate that care to machines? Is it convenience? Compassion? Or are we slowly offloading the hardest parts of life—the ones that build character and connection—to technologies designed to relieve us of the burden?

In the end, I don’t think the answer is to limit the capability of AI to align with us. Rather, we need to understand what it is doing and make conscious, person-by-person decisions about the trade-offs we’re willing to make. Like so many things, it comes down to education, so that we remain free to make our own informed choices about what we want in our personal lives.
