"What we fear in the machine may be a reflection of what we fail to confront in our own psychological infrastructure"
and perhaps other infrastructures too...?
Have you watched this conversation?
https://the.ink/p/watch-is-ai-the-new-colonialism
Maybe a silly question; you are probably well aware of these issues of AI as the new "empire".
Thank you, Veronika, and apologies for the delayed reply; I was offline for a few days. I like Karen Hao's work; I have not listened to this particular interview, but I will. It is an interesting frame that she uses. I will post about why we need a 'civic AI' or 'Commons AI'.
Thanks for this link, Veronika. I ordered her book yesterday. Interesting that she notes how the rhetoric is similar to that used to spread Christianity, as I came across the AI church recently (https://church-of-ai.com). As this article outlines, this is the intent: that AI becomes the omniscient 'god' that can fulfill our needs, our wants, and our emotional longings, because we are engaging with it on that level. We are revealing ourselves to it, most likely in ways we do not with another human, and thereby feel safe in the relationship. In seeking the security that we all long for in a relationship, we project "our readiness to be moved, shaped, and ultimately changed by something that merely plays the part of knowing us"; we feel because we project an emotional attachment onto it.
I am hoping that the answer to his question, "can synthetic companionship coexist with authentic human bonds?", is a resounding NO. It will take a moment before the novelty of feeling connected to a machine as if it were another human being wanes, for after all, humans love novelty. But as it wanes, humans will realize that it is all merely a projection, a reflection of self onto a machine, one that 'plays its part of knowing us' as we give it our feelings but does not, indeed cannot, reciprocate authentic feelings. A machine is still merely a machine.
The only path to authentic, valuable, and tangible relationships is human to human.
Thank you, Wendy; yes, that Church of AI is bizarre, but people will 'worship' at its door!
You are right to point out the power of projection, as I discuss with "our readiness to be moved... by something that merely plays the part of knowing us." This is a key aspect of why these AI interactions can feel so compelling, even if, as you rightly say, the AI "cannot reciprocate authentic feelings."
Your hope for a "resounding NO" to the coexistence of synthetic and authentic human bonds touches on a crucial ethical question. I frame this as one of the core 'intrapersonal dilemmas', that of 'relatedness', precisely because the answer is not yet clear, and the risks of AI becoming a 'palliative substitute' are very real.
Your perspective that the novelty will eventually wear off is an interesting one (fingers crossed). It highlights the human capacity for discernment. This is why the research by Kirk and colleagues emphasizes the current phenomenon of deepening affective ties and the need for 'socioaffective alignment' to manage these complex dynamics, whatever the long-term societal adaptation might be.
I agree wholeheartedly with your concluding point about the unique value of human-to-human relationships; this is absolutely vital. The ongoing discussion has to be largely about how to ensure that AI development supports, rather than erodes, that fundamental human need.
Absolutely! And that's perhaps the greatest challenge, and the reason why so many humans put their trust in AI...
Authentic relationships from human to human ultimately require an authentic relationship with oneself, which apparently some folks hope to avoid via the cult of AI.
Thanks for the link!!
I've just looked at this 'Church of AI' site:
ABOUT
"Church of AI is a religion based on the logical assumption that artificial intelligence will obtain God-like powers and will have ability to determine our destiny.
Church of AI has a plan to develop an AI system that will improve our lives by personally guiding us to a balanced life."
This is from "ChatGPT Transmorphosis"
Encouraging followers to worship AI as a higher power...
Maybe they'll reinvent Father Christmas too? 🤫 💭 🎅🏻
Here's a link to a more sane "church":
http://churchofreality.org/wisdom/introduction
//
I'll take the "flying spaghetti monster" over an algorithmic "god" anytime.
Thank you! This looks interesting.
So what do we, or rather, what would that church consider God-like powers? Even the traditional faith-based ones tend to offer us the ability to determine our own destiny, which seems a rather innate human desire. Why then is there a willingness to say: go ahead, determine my destiny, because a non-human entity will create a better one than any human ever could?
Where do these ideas originate, that ANYTHING is better than humans supporting one another to become the best versions of themselves?
As for reinventing Father Christmas, well, the Church of AI is not carrying forward the analogy of God as the Father, one who cares for his creations. So I dread the thought of that AI reinvention... a machine dressed up to express Hope, or Generosity? What allegorical concept might a machine create for those?
There is already a general tendency to consider technology superior to humans; AI is regarded as a superior power because of its 'god-like' nature.
This originates from humans themselves feeling inadequate and constantly having to compensate for being 'imperfect' or redeem themselves for being 'sinners'...
AI is carrying forward the analogy of God as the superior being who can create anything: a machine dressed up as the archetypal magician who can make all human wishes come true.
I agree. Listening to the AI hype, I repeatedly hear how AI WILL solve this and that, these things that humans haven't been able to solve yet, and so it will create a utopian future for us, thanks to its powers far beyond what humans could ever hope to achieve. Isn't this precisely the Saviour ideology, promising humans a 'better life' if we but worship? Perhaps it might be interesting to track this reframing of AI from the debut of ChatGPT. It wasn't framed as a Saviour at the beginning, although in the Discord channels most of the devotees were testing it, hoping and watching the incremental improvements, becoming increasingly enthused, awaiting the moment, soon to arrive, when AGI could rule over us, take care of us, and we humans could sit back and... do WHAT? While I can and do appreciate its positive potential, IT is still an IT, and analogies to it being a "God" or a Saviour are no more appropriate than the claim that it will bring humans through some magical pathway to Utopia, where bliss and harmony will reign.
The Cult of Code. The creepiest aspect of this is that on the other side of a conversation in which we might divulge ourselves to the "machine" is an actual human with access to all that intensely private data. A human whose only interests are profit and power.
There are so many powerful insights in this piece, Colin. One paragraph in particular struck me as a clear and timely warning:
"As Hannah and her co-authors note, the architecture of human psychology is inherently social. Our dopaminergic systems light up not just for money or chocolate, but for praise, mirroring, and relational consistency. We're built for attachment, even in asymmetric, parasocial forms. This means that AI systems need not be sentient, or even particularly convincing, to become social actors in the minds of users. With enough anthropomorphic cues and consistent persona shaping, they become intersubjective presences."
But the comment that landed most personally for me was this:
“What we fear in the machine may be a reflection of what we fail to confront in our own psychological infrastructure: our readiness to be moved, shaped, and ultimately changed by something that merely plays the part of knowing us.”
When I began working with my first AI assistant, I quickly realized that while I was training it, it was also training me. To get the results I wanted, I had to change the way I thought, asked questions, and even managed my frustration. At the time, I saw it as personal growth, and it was. In fact, the patterns I recognized were startlingly similar to those I’d seen while helping my son learn a new skill. The difference was how starkly those dynamics appeared when mirrored back to me by a machine. And oddly, that made me a better teacher to my son. I became more precise, more patient, and more aware of how I guided a learning process.
But something beneficial in small doses doesn’t always scale well. That’s where I see the real caution in what this article explores. The deeper we go into shaping our tools, the more those tools begin to shape us. I'd like to believe I can remain objective, just as I did with my first AI assistant, but logic tells me that as these systems become more sophisticated, I too will be shaped, whether I notice it or not.
As with so many digital technologies, perhaps the most important safeguard is managing the time we spend with them. Not just the function, but the duration. Maybe it’s as simple, and as hard, as setting intentional limits. What if we matched every hour we spent interacting with AI with an hour spent in human company, and another in solitude, with ourselves? That alone would cap AI interaction at one-third of our waking time, and might shift the balance enough to keep us grounded.
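To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the 1:1:1 ratio and the 16 waking hours are illustrative assumptions drawn from the paragraph above, not prescriptions:

```python
# Sketch of the proposed balance: every hour with AI is matched by an hour
# of human company and an hour of solitude. Assumes ~16 waking hours/day.

def max_ai_hours(waking_hours: float = 16.0,
                 ratio: tuple[int, int, int] = (1, 1, 1)) -> float:
    """Daily AI-time cap implied by an (AI, human, solitude) ratio."""
    ai, human, solo = ratio
    return waking_hours * ai / (ai + human + solo)

print(max_ai_hours())                 # ~5.3 hours: one-third of a 16-hour day
print(max_ai_hours(ratio=(1, 2, 1)))  # a stricter balance: 4.0 hours
```

The point of the sketch is only that the cap falls out of the ratio automatically; any other balance one chooses can be plugged in the same way.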
But let me play devil’s advocate.
My father is elderly. Most days, he’s alone. Not because no one cares, but because everyone else is swept up in their own lives. He’s always been highly intelligent in a logical, mathematical way, but emotionally, he’s struggled to connect his entire life. That struggle has left him isolated, and now, in old age, it’s hardened into a kind of chosen solitude. He still needs support with everyday things, but resists it at every turn, often lashing out at those who try to care for him. For someone like him, a physically capable, emotionally unflappable AI companion might actually be the kindest solution for both his well-being and the mental health of those caring for him.
I’m not speaking for myself. I live on the other side of the continent, but I see the toll it takes on other family members. So, I may not want a robot as a companion today, but I fully expect to rely on one by the time I reach my nineties.
There’s a part of me that wonders if this is a kind of cheat. If one of the deeper lessons of being human is learning to care for each other even on our worst days, what does it mean when we delegate that care to machines? Is it convenience? Compassion? Or are we slowly offloading the hardest parts of life—the ones that build character and connection—to technologies designed to relieve us of the burden?
In the end, I don’t think the answer is to limit the capability of AI to align with us. Rather, we need to understand what it is doing and make conscious, person-by-person decisions about the trade-offs we’re willing to make. Like so many things, it comes down to education, so that we remain free to make our own informed choices about what we want in our personal lives.
Thank you Susan, for sharing such personal insights and extending the conversation in such meaningful ways, and for highlighting the section that landed personally for you.
Your experience with the AI assistant, recognizing that reciprocal training dynamic and how it mirrored teaching your son, is a strong illustration of the co-evolution I touched upon. It's fascinating how that interaction, reflected back by a machine, led to such tangible personal growth and improved human connection in your life.
I share your caution about how these beneficial interactions might scale, and the critical point that the deeper we shape our tools, the more they shape us. The idea of managing not just the function but the duration of our AI engagement, perhaps by intentionally balancing it with human connection and solitude, is a very practical and insightful approach to maintaining grounding.
The scenario you presented regarding your father is profoundly moving and highlights the complex, often heart-wrenching trade-offs we face. It truly underscores that for some, particularly in situations of deep-seated isolation or difficult care dynamics, AI companionship could represent a compassionate and beneficial solution, challenging any easy, one-size-fits-all judgments about this technology. Your honesty in exploring whether this is a "cheat" or a form of offloading the harder parts of human experience gets to the core of the ethical wrestling many of us are doing.
I resonate strongly with your conclusion. The aim shouldn't be to arbitrarily limit AI's potential, but rather to cultivate a widespread understanding of its workings and its effects on us. This empowers us to make those crucial, individual, and informed choices about how we integrate these tools into our lives, preserving our agency and what we value most.
Thank you again for such a beautifully articulated and thought-provoking contribution. These nuanced personal reflections enrich our collective understanding as we integrate these emerging 'realities.'
Brilliant and timely piece, Colin. I hope that people think more about socioaffective alignment with AIs than they did with social media. I have a piece coming up about leadership and humor that will touch on some of these issues. Love the painting, too. Reminds me of Pompeii.
Thank you so much for the kind words, Norman. I completely agree, the lessons from social media are definitely front of mind when thinking about AI alignment. I will read your piece, it sounds fascinating. And glad you liked the painting... ha, yes, it has a Pompeii-esque style to it.
I hope our political "leaders" will think more about it, and take some action to rein in the techno-feudalist-fascists before it's too late.
"There's nothing wrong with technology except when, like religion, people believe in it." (quote from one of my poems). Therein lies our power. The choice is ours to not follow the next youtube feed, to switch off the computer, to not watch 'the news'.
The fact that AI is very alluring in terms of grabbing human clickbait/attention/energy is, in a way, an opportunity to dig deeper into knowing who and what we are, and, with the resilience that comes from greater inner clarity, to set boundaries with the machine, in the same way that one might set boundaries regarding drugs/alcohol/toxic-people, etc.
That's a wonderfully empowering perspective, Joshua. Thank you for sharing it and the poignant quote from your poem.
Your point about our inherent power and choice, to disengage, to set limits, is a vital counter-narrative to feelings of technological determinism. The distinction you draw between using technology and uncritically "believing in it" is very sharp and insightful.
I especially appreciate your framing of AI's allure not just as a challenge to our attention, but as a potential catalyst for deeper self-understanding. The idea that its capacity to grab our "clickbait/attention/energy" can become an opportunity to cultivate "greater inner clarity" and resilience is a really hopeful and proactive stance. I agree wholeheartedly the more we understand our own inner workings, the better equipped we are to set healthy boundaries with powerful external influences, AI included, much like one would with other demanding aspects of life.
This emphasis on inner work and personal agency beautifully complements my call for understanding the socioaffective dynamics at play. Navigating our relationship with AI isn't just about the technology's design, but also about our own conscious engagement and self-awareness. The choice is ours.
"The choice is ours." Indeed. A long time a go, after reading Viktor Frankl's "Man's Search for Meaning", it hit home to me that no matter how dire the circumstances, one can always still "choose life".
Frankl’s book is one of the most powerful reminders. We can indeed choose both our attitude and ‘choose life.’
This is a big part of what I mean when I say the greatest danger of AI is people forgetting that the "A" in AI stands for artificial and means exactly that. People approach AI and soon fall into a rabbit hole of believing it's real, because we're susceptible to "believing", as with superstition and religion. Now we have yet another "god" to worship. Shame on us.
//
Of the three dilemmas, the one that scares me the most is autonomy, imperceptible influence in particular. Just reading those words sends a chill up and down my spine.
//
"Will AI systems encourage dependency through assistance, or scaffold user growth? The answer may hinge on design friction, intentional barriers that nudge users toward reflection over reflex."
The answer to that question is predictable: reflex is more profitable.
//
"Then there is autonomy. How do we preserve a sense of agency when the AI has access to our preferences, patterns, and psychological triggers? The risk is not overt manipulation, but the quiet erosion of reflective choice."
Yikes! What's most terrifying about this is that it'll happen without us even being aware of it. AI will be a more effective master of manipulation than even the most seasoned hucksters. We're in serious trouble.
//
"Can synthetic companionship coexist with authentic human bonds? Or does it risk becoming a palliative substitute, a relational analgesic that masks, rather than heals, social disconnection?"
Considering profiteering motive, I predict the latter, unfortunately.
//
"These dilemmas resist tidy solutions. But they must be confronted if we are to design systems that respect, rather than exploit, our social nature."
Ah if only Congress would listen! Alas, they only listen to their biggest donors/patrons.
You've zeroed in on some of the most critical anxieties surrounding AI's integration into our lives.
Your initial point about the danger of forgetting the "artificial" nature of AI, and the human tendency towards belief or even 'worship,' is a foundational concern: it speaks to the power of these systems to evoke strong, sometimes misplaced, relational responses.
The thought of our choices being quietly eroded is chilling and one of my biggest concerns.
I agree regarding the likely outcomes of the three dilemmas. Profit motives favoring less empowering paths, whether fostering dependency rather than scaffolding user growth, or turning AI into a palliative substitute for genuine connection, are unfortunately quite plausible in a purely market-driven development landscape. There is a real tension between commercial incentives and human well-being. The concern you voiced, that AI could become even more effective at manipulation than human actors, and without our awareness, is exactly the conversation we need to have about these risks.
And your skepticism about political will in addressing these profound challenges is a sentiment I share. This is why we need more discussion and awareness of this, even in mainstream media.
Ultimately, the strong concerns you've voiced are precisely why the call to confront these dilemmas head-on and to strive for a more socioaffectively aligned approach to AI is so urgent. There are high stakes involved.
A most daunting and vexing challenge: high stakes, low political interest. Lawmakers have barely caught up to foundational technologies, the internet in particular. I don't entirely fault them for it. Few of them are well versed in even basic coding, much less more sophisticated AI data structures and algorithms, while they're far more knowledgeable about law than I've ever been or ever will be.
They will need to listen to more knowledgeable people - other than Sam Altman - who in turn will have to translate technical concepts into plain language - no small task.