The brain is an epistemological object, not just a biological one.
In the standard history of AI, one name is often missing, yet he was the man who first tried to bind the machine to the mind: Warren Sturgis McCulloch. Neuropsychiatrist, philosopher, medical doctor, poet, engineer, Professor of Psychiatry, head of cybernetics research at MIT, and father of a theory that would be crowned the progenitor of artificial intelligence.
While the story of AI’s conception tends to pivot around the usual suspects (Turing, Wiener, von Neumann, McCarthy, Shannon, and so on), McCulloch, with his gleeful polymathy and relentless refusal to respect the fences between disciplines, constructed something more disruptive than a technical model: he gave epistemology a wet lab, and the brain a formal grammar.
It is fitting, almost parodic, that the man whose name would be linked to the first formal model of a thinking machine was trained not as a computer scientist (the term barely existed) but as a physician steeped in philosophy. Born in 1898, McCulloch studied philosophy at Yale before completing an M.D. at Columbia, where he specialized in the physiology of the nervous system. His philosophical leanings were not dilettantish; they were primary. Immanuel Kant's synthetic a priori haunted him early and decisively, birthing a lifelong obsession with how sensation could become knowledge.
In other words, McCulloch was not interested in the mind per se. He was interested in thinking as process, and what kind of machinery could support it. Or simulate it. Or explain it.
By the early 1930s, McCulloch had aligned himself with experimental neurology, most notably through his collaboration with Dutch psychiatrist and physiologist Dusser de Barenne at Yale. Their focus was cerebral localization: not a speculative endeavor but a brutal one, involving chemical strychninization of monkey cortices to chart neurophysiological function. This was not idle science. It was an attempt to reduce the mind-body problem to observable, recordable pulses. McCulloch was asking how particular patterns of stimulation correspond to perception, and whether the brain could be modeled not just as a collection of cells, but as a computational logic system. This was not metaphor. It was mechanics.
In 1943, alongside the brilliant Walter Pitts, a homeless autodidact logician who had taught himself Latin and Russellian logic before age twenty, McCulloch published “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The title sounds forbidding. The content was, and remains, revolutionary. They proposed that neurons, understood as binary threshold units (on or off), could be represented as logic gates. A sufficiently complex network of such units could, in principle, compute anything a Turing machine could. Brain-as-machine was not a metaphor; it was now a theorem.
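The core idea of the 1943 paper can be sketched in a few lines of code. Below is a minimal, illustrative McCulloch-Pitts neuron: inputs and output are binary, and the unit "fires" only when the weighted sum of its inputs reaches a fixed threshold. The particular weights and thresholds are my own illustrative choices, not values from the paper itself.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fire (1) iff the weighted input sum
    meets the threshold, otherwise stay silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic logic gates fall out of particular weight/threshold settings:
def AND(a, b):  return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):   return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):     return mp_neuron([a], [-1], threshold=0)  # inhibitory input

# Composing units yields more complex Boolean functions. XOR, for
# instance, needs a two-layer net; no single threshold unit computes it:
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {XOR(a, b)}")
```

The point is not the gates themselves but the composition: once neurons are logic gates, networks of them become circuits, and circuits, given enough of them, reach Turing-machine expressiveness.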
Yet McCulloch was not seduced by pure reductionism. For him, the appeal of their model was not that it “explained” the mind but that it enabled questions previously intractable. As J.Y. Lettvin later wrote, the theory’s power was not in its correctness:
“[It] is certainly wrong, but in its wrongness, just as Bohr's wrongness in his view of the atom, it contains the seeds of new theory.”
This captures the essence of McCulloch's project. What can be computed? What must be lost in the process? Borrowing from Gibbs and Shannon, McCulloch appreciated that information, once lost in a system, could not be retrieved. This limitation, far from being a flaw, was the groundwork for a theory of perception as constraint, an epistemology of the imperfect knower.
Thus his model was not merely technical; it was philosophical. In the machine’s limits, he saw the boundaries of human understanding.
Still, the historical irony is sharp: McCulloch's “logical calculus” paper would, in later years, be read as a founding document of artificial intelligence. But he himself never used the term. AI, as it emerged at Dartmouth in 1956, was characterized by a brash optimism in symbolic processing, a belief that intelligence was mostly a matter of manipulating representations.
McCulloch, ever skeptical, believed cognition was more embodied, more recalcitrant to simulation. He delighted in paradox, not solution. He suspected that the messiness of mind might elude machine formalisms precisely because the brain was not a logic engine, but a value-seeking organism.
His later work on heterarchies and many-valued logics, published in papers like “A Heterarchy of Values Determined by the Topology of Nervous Nets”, betrayed deep discomfort with hierarchical or linear architectures.
His influence spread not only through publications but through the strange, magnetic personality he projected in real life: the Scotch-drinking, chain-smoking, sonnet-writing eccentric who turned his home into a salon and invited students to argue, eat, and speculate freely. See this interview and this one for a glimpse of his eccentric brilliance.
As his protégé Jerome Lettvin put it in his essay Warren McCulloch and the Origins of AI, McCulloch influenced him more than anyone else in how he treated “the materials of science, the ideas of science, and the poetry of science.”
Lettvin also once said McCulloch did not mentor so much as conscript, drafting others into his wild hypotheses and leaving them changed. This captures the spirit of a man later described as a “rebel genius”, someone for whom, as one biographer noted,
“a genius is hard to understand. A rebel has no interest in being understood.”
McCulloch himself once remarked, when asked what he considered his legacy: “Oh, that's very easy. It's the youngsters who work with me.”
His role in the Macy Conferences on Cybernetics (1946–1953) was as instigator and harmonizer, a chairman with a preacher's cadence and a philosopher's disregard for conclusions.
It is almost tragicomic that the very theory McCulloch built, a logical calculus of neural nets, would be adopted and hardened into the foundational mythos of AI, even as its creator doubted its sufficiency. The paper with Pitts contained, as J.Y. Lettvin later remarked, “the flaw that lay in the theory Jack built.”
That flaw was not a technical error but a philosophical omission: it mistook computability for cognition. The theory could model symbol manipulation but not meaning. It could simulate inference but not understanding. McCulloch knew this. But in a world racing toward automation, his subtleties were flattened. As the symbolic era of AI dawned, McCulloch turned instead toward many-valued logics, recursive paradoxes, and the impossibility of a complete description of perception from within perception itself.
This, ultimately, is his legacy: not the diagram of the neuron as logic gate, but the assertion that the brain is an epistemological object, not just a biological one. McCulloch’s true contribution lies in opening the possibility of treating perception, cognition, and knowledge as simultaneously material and abstract, scientific and philosophical.
Warren S. McCulloch built not the machine, but the idea of its possibility, and, crucially, its limits. For that, we owe him not just a lineage, but a warning. He taught us that to think seriously about mind and machine is not to draw a blueprint but to walk a tightrope between poetry and logic, faith and calculation. In that balance, McCulloch did not resolve the paradox. He embodied it.
Stay curious
Colin
Recommended reading - McCulloch in his own words
Note - Lettvin also states in his essay, Warren McCulloch and the Origins of AI:
In the heritage of McCulloch and Pitts, there was an additional factor. Leibnitz in the 17th century had designed, although he had not been able to build, the first logical machine. This was not known until the 1950's, when interest in logical machines began again. Leibnitz had said of such a machine that in the future when philosophers disagree they will not fight with each other but say to each other, “let us sit down and compute.”
Amazing content, Colin!
I read Dan Brown’s “Origin” and was so intrigued by Santiago Ramón y Cajal, the father of modern neuroscience, Nobel laureate, and artist, and by the fact that he perceived neurons as discrete cells, a view confirmed by microscopy in the 1950s, half a century after his work. So taken was I with his work that I ordered “The Beautiful Brain”, which has his drawings.
Thank you for sharing your brain and amazing posts that so beautifully illuminate and educate, easily accessible with no microscopy required, just diligent research that is accessible and well written.
"Their focus was cerebral localization: not a speculative endeavor but a brutal one, involving chemical strychninization of monkey cortices to chart neurophysiological function...It was an attempt to reduce the mind-body problem to observable, recordable pulses".
Cruel, but they didn't have an fMRI.
.
"They proposed that neurons, understood as binary threshold units (on or off)"
The original transistor.
.
"For instance, the soma of a neuron can vary from 4 to 100 micrometers in diameter" https://en.wikipedia.org/wiki/Neuron
Meanwhile, Chinese researchers have created a transistor with a gate length of about 1/3 nm https://spectrum.ieee.org/smallest-transistor-one-carbon-atom
.
"McCulloch turned instead toward many-valued logics, recursive paradoxes, and the impossibility of a complete description of perception from within perception itself".
That just might be the distinction between artificial intelligence and real intelligence.