Brilliant analysis of Beguš's reframing. The shift from "can machines think like us" to "what if we designed them differently" is kind of what formal verification has been doing all along with proof assistants. I remember working through some Coq proofs and realizing the system wasn't trying to mimic human reasoning; it was forcing me to think in its terms, which actually made my logic tighter. Maybe the real win isn't making AI more humanlike but letting their non-human nature expose gaps in our own thinking.
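To make that concrete, here is a toy example of the kind of friction I mean (just an illustrative sketch, not from any real project): the lemma n + 0 = n feels too obvious to even state, yet Coq won't take "obviously" for an answer, because + computes by recursion on its first argument, so n + 0 doesn't reduce and you have to do the induction yourself.

    (* A fact that feels "obvious" to a human reader... *)
    Lemma add_n_O : forall n : nat, n + 0 = n.
    Proof.
      (* ...but reflexivity fails here: n + 0 does not reduce,
         so the system forces you to spell out the induction. *)
      intro n.
      induction n as [| n' IH].
      - (* base case: 0 + 0 = 0 holds by computation *)
        reflexivity.
      - (* inductive step: S n' + 0 = S n' needs the hypothesis *)
        simpl. rewrite IH. reflexivity.
    Qed.

Nothing deep, but that gap between "obvious to me" and "provable to the machine" is exactly where my own sloppy assumptions kept surfacing.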
Thank you. Great point: what formal verification highlights is exactly what Beguš means when she advocates for treating AI as a fundamentally nonhuman agent.
The value of systems like Coq lies precisely in their refusal to mimic human intuition.
That is well said: we should use these systems (LLMs) to identify the gaps, assumptions, and "smudges" in human logic. It is the friction of that nonhuman nature that produces the tighter reasoning you mentioned.
Thank you for sharing that perspective: it is a perfect technical validation of the book's philosophical core.
"The lesson is that treating language as proof of inner life or moral worth has always been a mistake". A most familiar refrain for me.
"Machines that speak fluently do not need to be treated as conscious people, and people who struggle to speak have never been less human". Stephen Hawking immediately springs to mind here.
The big question: Who will get the message to the purveyors/profiteers? They need to hear this more than anyone.
Thank you, Winston. The example of Stephen Hawking is perhaps the most profound evidence we have for why we must never conflate the physical or mechanical production of speech with the depth of the mind behind it. It reinforces Beguš's point beautifully: if we use "fluent language" as the primary yardstick for personhood, we fail both the humans who struggle with it and the machines we over-attribute consciousness to.
As for your big question, that is the trillion-dollar challenge. The "purveyors and profiteers" are often incentivized to lean into the illusion of personhood because it creates emotional stickiness and market hype. Getting them to hear this message requires a shift in how we regulate and design these systems. We need to move from a culture of "imitation at all costs" to one of "functional transparency."
It starts with voices like yours and Beguš’s insisting that the humanities are not just observers of this transition, but essential architects of its ethics.
This is a soul-utionary portal-opener: "refusal to stop at diagnosis". AWESOME, 1%!
"Soul-utionary portal opener" I love it. Thank you Leah.
<smile>
"The lesson is that treating language as proof of inner life or moral worth has always been a mistake". A most familiar refrain for me.
.
"Machines that speak fluently do not need to be treated as conscious people, and people who struggle to speak have never been less human". Stephen Hawking immediately springs to mind here.
.
The big question: Who will get the message to the purveyors/profiteers? They need to hear this more than anyone.
Thank you Winston. The example of Stephen Hawking is perhaps the most profound evidence we have for why we must never conflate the physical or mechanical production of speech with the depth of the mind behind it. It reinforces Beguš’s point beautifully: if we use "fluent language" as the primary yardstick for personhood, we fail both the humans who struggle with it and the machines we over-attribute consciousness to.
As for your big question, that is the trillion-dollar challenge. The "purveyors and profiteers" are often incentivized to lean into the illusion of personhood because it creates emotional stickiness and market hype. Getting them to hear this message requires a shift in how we regulate and design these systems. We need to move from a culture of "imitation at all costs" to one of "functional transparency."
It starts with voices like yours and Beguš’s insisting that the humanities are not just observers of this transition, but essential architects of its ethics.
I feel like the tree in that old dictum: if a tree fell in the forest and no one was around to hear it, did it really fall?