Artificial Humanities
Nina Beguš and the Responsibility of Authorship
I am currently catching up on my book reviews, drawing on notes I wrote just after finishing each book. This one, Artificial Humanities, strikes me as particularly important. Even as enrollment in humanities courses continues to decline, there is a clear and urgent call for a return to philosophy. I have written about this need on several occasions, along with the necessity of deeper book reading. In parallel, I highly recommend an excellent Substack post by Norman Sandridge on Leadership Development from the Humanities.
“Human language is not just ours anymore, and yet we tend to disregard the philosophical stakes of this colossal transformation” ~ Nina Beguš
The Responsibility of Authorship
Is it possible that we have been asking the wrong question about artificial intelligence? Instead of asking whether machines can become more like us, Nina Beguš invites us to ask what might happen if we finally designed them to be something else entirely… you know, machines!
Artificial Humanities, the fascinating book by Beguš, a researcher and lecturer at the University of California, Berkeley, is not an argument against technology. It is an argument against intellectual laziness. It begins from a generous premise: our machines are not merely technical artifacts but cultural ones, born from stories, metaphors, habits of thought, and inherited dreams. Long before computers, we rehearsed these dreams in myth, theatre, novels, and film. We practiced them until they hardened into instinct, and in doing so we more than likely created a false dichotomy. When contemporary AI systems speak, we respond not as engineers but as readers of Ovid, viewers of cinema, and heirs to a very old story about creation and control.
The Mechanistic View of Language
The book traces a philosophical lineage in which language came to be viewed as a technical process rather than an expression of the soul. In the late nineteenth century, technologies like stenography contributed to language being seen as something mechanical. Alan Turing later adopted this mechanistic view, suggesting that if a machine could behave indistinguishably from a human through complex imitation, it could be said to understand.
Beguš’s central achievement is to make this inheritance visible without moral grandstanding. She shows how the Pygmalion myth became the default script for thinking about intelligent machines. In George Bernard Shaw’s play, Eliza Doolittle provides a striking literary example of this. During her first “test,” her speech is described as scripted and mechanical: she answers small talk about the weather with a formal, learned forecast report. Her replies are a product of training rather than spontaneous thought.
This literary script found a direct digital successor in the first chatbot, ELIZA. Joseph Weizenbaum’s creation used avoidance strategies and scripted answers that were shallow and mechanized. It relied on mirroring the user’s language without genuine understanding. Once you see this pattern, you begin spotting it everywhere: in chatbots trained to please, in social robots designed to charm, and in virtual assistants engineered to sound agreeable rather than capable. Of course we keep building systems that pretend to be human. We have been practicing that pose for two thousand years.
A Shared Civic Task
What lifts the book into something genuinely energizing is its refusal to stop at diagnosis. Artificial humanities is proposed not as a critique perched safely on the sidelines but as an operating principle. As Beguš puts it with disarming directness,
“The responsibility of making these technologies is too big for the technologists to bear it alone.”
This shifts AI from a specialist project to a shared civic and cultural task, one that cannot be solved by more parameters or faster chips alone. The humanities are uniquely equipped to study interactive systems because they have spent centuries analyzing language, agency, interpretation, power, silence, and mis-recognition. This matters because, as Beguš writes,
“Human language is not just ours anymore, and yet we tend to disregard the philosophical stakes of this colossal transformation.”
The Illusion of Neutrality
One of the book’s most bracing moves is its insistence that human imitation is not a neutral design choice. When we force machines into human likeness, we also import human hierarchies, exclusions, and fantasies. The long history of Galatea figures, mute at first and later permitted speech under specific conditions, exposes how often language has functioned as a gate rather than a bridge.
The lesson here is not that language is dangerous. The lesson is that treating language as proof of inner life or moral worth has always been a mistake. Machines that speak fluently do not need to be treated as conscious people, and people who struggle to speak have never been less human. That clarity feels like intellectual oxygen.
Recovery and Repair
There is an unexpected tenderness in the historical sections on early speaking machines and computing pioneers. We see Erasmus Darwin building a vocal apparatus while living with a stutter. We find the Bell family navigating deafness while inventing technologies of speech. We see Ada Lovelace imagining a machine that could surprise us without claiming it could replace us. These are not tales of hubris; they are stories of repair, curiosity, and hope. Technology in this context is not conquest but response.
A New Ethical Landscape
The argument culminates in a proposal that feels both radical and obvious once stated. Beguš states it plainly:
“Even if we might perceive the AI system as humanlike, it is essential to conceptualize, build, and use AI as a fundamentally nonhuman agent.”
AI should be designed as a fundamentally nonhuman agent: not inferior, not superior, but different. This is not a retreat from responsibility; it is an acceptance of it. When we stop pretending that machines are nascent humans, we are forced to take responsibility for what they actually are: systems shaped by choices, values, training data, and cultural frames. Responsibility returns to the builder, not the artifact.
Reading this book provides the same pleasure one feels when a complicated proof suddenly resolves into a clean idea. The humanities are not late to AI; they have been there all along, hiding in the metaphors. Artificial Humanities gives them a proper role, not as ornamental critics but as collaborators in building systems that expand rather than shrink our imagination.
Stay curious,
Colin



Brilliant analysis of Beguš's reframing. The shift from "can machines think like us" to "what if we designed them differently" is somewhat like what formal verification has been doing all along with proof assistants. I remember working through some Coq proofs and realizing the system wasn't trying to mimic human reasoning; it was forcing me to think in its terms, which actually made my logic tighter. Maybe the real win isn't making AI more humanlike but letting their non-human nature expose gaps in our own thinking.