AI Economic Pit Is An Illusion
The Great Convergence And Scientific Breakthroughs
The Great Convergence
While headlines track the billions invested in artificial intelligence, the real story lies not in quarterly earnings but in something the public discourse often overlooks: the profound shifts that AI and Machine Learning are causing in fundamental science. These exciting discoveries, emerging across all fields, will forge a new paradigm for life on earth.
We stand at the edge of a generational inflection. The coming twenty‑five years may reshape our understanding of life, mind, and matter as profoundly as any of the past great shifts. When Copernicus unsettled the heavens, Newton revealed hidden laws, and Einstein bent time, each moment widened the frame of reality. Today, propelled by artificial intelligence and quantum discovery, we sense another fracture.
What was once dismissed as speculation (machines that do science, cells programmed like software, consciousness measured and perhaps replicated) is now edging from possibility to probability. The question is no longer whether the old certainties will fracture, but how soon, and how profoundly.
These near-horizon advances, in fields once considered separate provinces (physics, biology, mathematics, computer science, cognitive science), are now edging toward one another like tectonic plates. Their collision threatens to remake the continent of knowledge.
Fundamental
Artificial intelligence is the leading edge of this shift. What began as a statistical parlor trick, stochastic parrots, has become a general instrument of discovery. Rich Sutton, during his Turing Award acceptance speech, framed the underlying philosophy clearly:
“The main idea of reinforcement learning is that a machine might discover what to do on its own, without being told, from its own experience, by trial and error.”
AlphaFold embodied that ethos in 2020, stunning the biological community by predicting the 3D structures of proteins with uncanny accuracy and solving a 50-year-old grand challenge in biology. Its impact echoed the 1953 discovery of the DNA double helix, when Watson and Crick redefined biology by revealing its hidden architecture.
Over 2 million researchers in more than 190 countries use the AlphaFold Protein Structure Database and related tools to study protein structures, and the number keeps growing as AlphaFold becomes an essential accelerant of scientific research, particularly in drug development, vaccine design, and the understanding of disease.
Some doctors and medical researchers even argue that "It's unethical for doctors not to use AI," and that failing to use AI for certain diagnoses will soon amount to malpractice.
Today, AI systems are proposing mathematical conjectures, predicting protein interactions, designing drugs, and in some cases devising experiments beyond the imagination of their human handlers. "The end of theory" was once a glib Wired headline; today, it begins to feel literal. Science is building a machine that can do science. The implications are not only practical (shorter drug pipelines, new materials) but existential. When intelligence ceases to be a uniquely human monopoly, the Enlightenment's quiet assumption, that reason is our species' birthright, collapses.
Biology, meanwhile, is undergoing its own Copernican shift. For centuries, we imagined life as vital and ineffable, resistant to reduction. Now, researchers at MIT and Stanford program yeast cells to produce entirely new compounds, like biological factories. CRISPR edits genomes with precision, while synthetic biologists at the J. Craig Venter Institute built the first cell with a fully synthetic genome. And in the UK, chemists recently coaxed protocells from inanimate chemistry, hinting at plausible pathways for life's origins. The frontier is not just medical but ontological: when life can be written, edited, and rebooted, the line between the animate and the synthetic grows thin. What does "nature" mean when it can be assembled from scratch?
Sentience
Cognitive science is converging with both. Competing theories of consciousness, Integrated Information Theory, Global Workspace, predictive processing, once floated in a fog of speculation. Now, armed with multimodal brain imaging and computational modeling, researchers are designing experiments that can adjudicate among them. At UCSF, paralyzed patients using brain-computer interfaces have learned to translate thought into text at 78 words per minute. Neuralink implants have enabled humans to control a computer by intention alone.
Just as Broca’s discovery of speech centers in the 19th century reframed the brain as a mappable landscape rather than a black box, these modern breakthroughs suggest that consciousness itself may one day be localized, measured, perhaps even replicated. The moral map of the species will have to be redrawn. What rights belong to an entity whose consciousness can be demonstrated? What responsibilities to a machine that passes not only the Turing test but the test of suffering?
Mathematics is not immune. In 2021, mathematicians and AI researchers announced new conjectures in knot theory, discovered by machine learning systems trawling through the vast forests of algebraic data. Proof assistants like Lean are increasingly used to formalize entire areas of mathematics, ensuring correctness beyond human fallibility. The Langlands program, once considered the Mount Everest of modern mathematics, is now being chipped away by an odd alliance of human intuition and machine suggestion. The shift feels reminiscent of the 17th century, when Newton and Leibniz independently conjured calculus, an alien symbolic language that restructured science overnight. When an AI proposes a conjecture no human can parse, yet proves it within a formal system, is that mathematics, or something new, an alien branch of thought that merely shares our symbols?
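To make "correctness beyond human fallibility" concrete: in a proof assistant, a theorem is accepted only if every inference step passes the machine checker. A minimal illustrative sketch in Lean 4 (my example, not one from the knot-theory or Langlands work cited above) states and proves that addition of natural numbers is commutative:

```lean
-- Minimal Lean 4 sketch: this file compiles only if the proof is complete.
-- We restate commutativity of addition and discharge it via the
-- core-library lemma Nat.add_comm.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- The checker also verifies concrete instances automatically.
example : 2 + 3 = 3 + 2 := my_add_comm 2 3
```

Formalizing an entire research area amounts to building such statements up, lemma by lemma, until the final theorem is machine-checked end to end.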
Paradox
I use paradox here to mean something that defies common sense, not that logic yields contradictory answers.
Physics remains the most resistant frontier, yet cracks are showing. In recent years, researchers working on the holographic principle have shown how spacetime geometry might emerge from patterns of quantum entanglement. The famous AdS/CFT correspondence, a mathematical duality between a gravity theory and a quantum field theory, suggests that our universe may be describable as a kind of hologram, reality projected from information encoded elsewhere. Experiments at Google’s quantum lab have begun to simulate such holographic wormholes on quantum computers. To watch this unfold is to recall Galileo’s telescope once more: where he saw mountains on the moon, today we glimpse geometry emerging from information. If true, then the cosmos is not a container but a process, a computation that begets geometry as a by-product. The distinction between physics and information science begins to dissolve.
Cambrian Explosion
Individually, these revolutions would be disorienting. Taken together, they suggest a deeper convergence: reality as fundamentally informational, computational, and emergent. Life, mind, and matter appear less like separate kingdoms than different scales of the same unfolding process.
The consequences are both pragmatic and existential. Pragmatically, we will harness these advances to extend lives, program matter, accelerate discovery, and perhaps tame energy. But existentially, we will confront a more unnerving prospect: that the categories we have relied on, human versus machine, natural versus artificial, physical versus mental, are artifacts of our ignorance. We are poised to glimpse a unified picture, but one that strips us of the privilege of centrality.
How should we spend our hours and years? The next twenty-five years will decide how we navigate this convergence. We can cling to old certainties and retreat into nostalgia, or we can live as transitional creatures, inhabiting the liminal space between ignorance and knowledge, biology and technology, matter and meaning.
We are on the cusp of a Cambrian explosion in scientific discovery; human ingenuity, amplified by AI, is the skeleton key. The task is not to own the truth but to steward it, to stay curious, provisional, and unafraid of what we might discover.
Stay curious
Colin
Note: I do believe that some VC money is chasing illogical investments in so-called AI "wrappers," and there will be bankruptcies. I always advise users to choose AI tools with major tech backers.
Image credit - Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems



I'm not aiming for pessimism. One of my favorite reminders is, "Reality always wins; your job is to get in touch with it." The real world is complex, with forces within and beyond our control. I agree that many changes will arrive over time. Still, I hesitate to call this humanity's crowning achievement—or to accept claims that these technologies will solve all our problems—without a framework that equitably provides access.
What does history tell us?
History shows that transformative technologies take decades to diffuse. Electricity reached urban households within 30–40 years of the 1880s rollout, about half of U.S. homes by 1925, and nearly universal coverage only after the 1936 Rural Electrification Act, by around 1960. Likewise, the Human Genome Project was an enabling step, not an instant cure-all: real progress has required years of complementary investment. We should expect similar trajectories for AI, quantum computing, and genetic engineering.
Will technology and humans be able to overcome the last-mile challenge?
The "last mile" will be hard. Integration, edge cases, skills, regulation, and cultural change are slow, and job disruption and inequality can amplify resistance. At the same time, we must anticipate unintended consequences: powerful tools in the wrong hands can yield catastrophic bio or other mass-destruction risks.
What could go wrong?
The "designer baby" market underscores the point. More here: https://tinyurl.com/yc7wze27. Wealthy parents now pay up to $50,000 for embryo screening that promises IQ selection. This fascination with “genetic optimization” often reflects deeper beliefs about merit and control—replicating perceived "good genes" in the next generation. Yet today's technology offers only marginal, uncertain IQ gains per IVF cycle.
What are some of the key risks associated with the designer babies market?
Inequality: Even small perceived advantages can entrench privilege and compound biological, educational, and economic gaps, exacerbating global divides.
Eugenic Norms: Pressure to "optimize" risks stigmatizing those who don't or can't participate, reinforcing harmful biases against disability and neurodiversity.
Unintended Genetics: Selecting for one trait can raise risks for others due to pleiotropy or poorly understood trade-offs.
Governance Arbitrage: Weak rules drive services offshore to low-oversight jurisdictions, spreading risks globally.
Genomic Privacy: Sensitive DNA data is vulnerable to misuse for insurance, employment, or profiling, creating new avenues for exploitation.
Why Focus on Human IQ if AGI/ASI Is Coming?
Uncertain AGI timelines invite hedging on human capital, but the emphasis on genetic IQ also reflects cultural and ideological beliefs about merit and control. In practice, broad, non-genetic augmentation (education, health, and equitable access to capable AI) delivers higher social returns and greater resilience regardless of AGI's arrival.
Would Global Disparities Widen the Risks?
The global impact of these technologies cannot be ignored. High upfront costs mean that developed countries will capture most benefits, while developing nations are left to bear disproportionate risks. This dynamic risks exacerbating existing inequalities and creating a dangerous cycle of global instability. Developing countries—facing limited access to these technologies—will likely contend with the fallout: civil wars, social unrest, and deepening inequality. Without equitable frameworks, these risks will become the world's shared burden.
What can we learn from the Human Genome Project?
Lessons from the Human Genome Project remind us that breakthroughs enable; they don't automatically deliver. We need complementary investments in delivery systems, infrastructure, and governance to realize benefits. The same holds for AI and bioengineering: without robust safeguards and equitable policies, benefits will diffuse slowly, while risks of inequality and misuse will arrive early.
To conclude: Diffusion of benefits will take decades, but front-loaded risks are already here. Today's "designer baby" offerings don't create superhumans; they develop markets for inequality and false certainty. The wiser path is cultivating broad human potential and building strong, inclusive governance for emerging technologies. If AGI/ASI arrives, resilient institutions and widely shared capability will matter far more than marginal genetic tweaks, and if it doesn't, those investments will still pay off. To avoid leaving developing nations behind, we must prioritize global coordination, equitable access, and policies that ensure the risks of these technologies do not disproportionately fall on the world's most vulnerable populations.
As William Gibson put it, some parts of the future arrive early, and we may see significant improvements in specific sub-domains. Still, the broader impact takes time: "The future is already here, it's just not evenly distributed."
Colin, what a wonder-filled and inspiring recap of the potential AI offers us, the other prong of the AI paradigm shift. Thank you
We do have a choice: whether to retreat in fear over the uncertainty or to embrace this change and challenge, as humans have always done.
Retreat is not an option.
Ongoing and intense engagement is the only option, as the stakes for our humanity are considerably higher than in the past.
We are culpable for what lies ahead.