Optimism on AI
The Dawn of System 2 Thinking, and Our New Post-Wittgenstein Optimism.
The greatest promise of the arrival of artificial intelligence is that it will not merely solve our problems; it will redefine our potential. We are moving from a world where we ask what the machine can do to one where we are brave enough to imagine new horizons.
Thoughts on the AI Architecture of 2025
In 2025, the world finally stopped asking whether AI could think and began paying attention to how well it could reason. I spent much of the year watching the boundary between me and the tool grow less rigid, the machine becoming a collaborator, not through spectacle, but through use. This was particularly evident in my own work coding a large economic trade platform with Opus 4.5 and Claude Code. What changed was not that machines suddenly became clever. It was that reasoning itself became easier to practice, easier to extend, and easier to share.
2025 was the year the infrastructure caught up with the ambition. The idea of the AI Factory stopped sounding metaphorical and started describing something literal: gigawatt-scale systems built not merely to store data, but to sustain long chains of inference. We are no longer only training models. We are maintaining environments in which reasoning can persist. That shift matters more than any single benchmark.
The Post‑Wittgenstein Optimism
Throughout the year I found myself returning, again and again, to what might be called a post‑Wittgenstein optimism. If we can imagine a discovery and articulate it, we increasingly possess a tool capable of articulating the steps required to reach it. This is not the brittle automation of earlier decades. It is interactive, provisional, and surprisingly aligned with humane needs, especially in scientific discovery.
When Google DeepMind’s Gemini reached gold‑medal performance at the International Mathematical Olympiad, the achievement was not merely technical. It demonstrated that human curiosity, paired with machine discipline, can now move through intellectual terrain once reserved for solitary genius. We are witnessing a jagged diffusion of brilliance, a quiet removal of the ceiling that once constrained what a single mind could realistically explore.
Google CEO Sundar Pichai has even adopted the term “AJI” (Artificial Jagged Intelligence) for this phase, a precursor to AGI. The jaggedness captures AI’s real strength in pattern recognition alongside its persistent weakness in true understanding, which is why human oversight and strategic use still matter.
The scale of what supports this ‘partnership’ remains almost absurd. As Google DeepMind’s Zhengdong Wang observed in his 2025 letter, the mathematical operations consumed in training today’s models exceed the number of stars in the observable universe. There is a gentle irony in this. We have built something vast, only to discover that its most valuable work lies not in cosmic answers, but in careful assistance with earthly problems.
In a wide-ranging Financial Times interview, AI scientist Yann LeCun was asked what he wants his legacy to be. He replied, without batting an eyelid: “Increasing the amount of intelligence in the world.” “Intelligence is really the thing that we should have more of,” he said, adding that with more intelligence comes less human suffering, more rational decision-making, and more understanding of the world and the universe.
The Shift to System 2
The pivot point of the year was the shift from the speed of answers to the quality of thought. We entered what can fairly be called the inference era, defined by test‑time compute and by systems finally being given the space to think before they speak. For years, we relied on a digital System 1: fast, intuitive, and confident even when wrong. In 2025, the field matured into System 2.
By allowing architectures to iterate through internal chains of logic, a technique popularized by OpenAI’s o1 and echoed across open‑source reasoning models, the field traded hallucination for structured search. Model intelligence began to look less like instant recall and more like disciplined deliberation. The reasoning budget itself became a meaningful variable: we could decide how much thought a problem deserved. The key premise is simple. AI should not think for us. It should make us think.
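To make that concrete, here is a minimal sketch of what treating the reasoning budget as a variable can look like. The reason_once function is a hypothetical stand-in for a single model pass, and the voting loop is a simple self-consistency strategy; this is an illustration, not any vendor’s API.

```python
from collections import Counter
from typing import Callable

def solve_with_budget(question: str,
                      reason_once: Callable[[str], str],
                      budget: int = 8) -> str:
    """Spend `budget` independent reasoning passes on one question,
    then keep the answer the passes most often agree on."""
    # reason_once is a hypothetical stand-in for one model call that
    # returns a final answer after its own internal chain of thought.
    answers = [reason_once(question) for _ in range(budget)]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

# A routine question might merit budget=1; a hard proof, budget=32.
# Deciding that allocation is exactly the new judgment call described above.
```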
I found a quiet satisfaction in this new economy of thinking. Intelligence revealed itself not as a pile of processors, but as judgment about where to spend computational effort. Whether predicting protein interactions at the University of Texas Southwestern Medical Center and the University of Washington or advancing formal proofs at DeepMind, thought became an intentional investment rather than a by‑product.
Intention in the Loop
Some of the most meaningful progress appeared where action followed deliberation. Nowhere was this clearer than in robotics. For decades, robots failed in visible, almost theatrical ways: they collided, stalled, and misjudged (see my old Robotenomics blog for many examples). This year, systems like Gemini Robotics began to pause, model their surroundings, and choose a course of action. The importance is not that they move more smoothly. It is that intention has entered the loop.
Elsewhere, the gains were quieter but no less consequential. Projects such as Cell2Sentence used language models to interpret cellular behavior, opening new pathways in cancer research. When David Liu received the Breakthrough Prize for advances in gene editing, the story felt continuous rather than coincidental. We are no longer confined to observation. We are beginning to work with biology deliberately.
The Rise of the Co‑Scientist
What ties these developments together is the rise of AI as a “co‑scientist”, as many like to call it. These systems are helping advance theoretical computer science, reconstruct linguistic histories, and test hypotheses that once demanded years of coordinated human effort. Energy and compute are being transformed into something rarer than output. They are becoming insight.
At a moment when public life often feels fractured, 2025 offered an unusual point of agreement. Progress here is not a contest between humans and machines. It is an expansion of collaboration. We are not merely solving problems more quickly. We are learning to ask better ones.
Here I am reminded of the excellent point Douglas Adams makes in The Hitchhiker’s Guide to the Galaxy: a vague question yields a meaningless answer. In the age of AI, the process of defining what you truly want to know, of understanding the “Ultimate Question”, is becoming more important than simply generating an answer.
As we look toward 2026, the question is no longer what models can do for us, but what we are courageous enough to ask of them. For anyone still guided by curiosity, it has been an unusually good year to be alive.
Stay curious
Colin
Notes and Further Reading
The Post-AGI Team: For a fascinating look into the future of this trajectory, I recommend Shane Legg’s 2025 interviews with Professor Hannah Fry regarding the newly formed Post-AGI team at Google DeepMind and its mission to navigate the transition to human-level reasoning. The main image above is taken from one of these interviews, showing Google DeepMind cofounder Shane Legg discussing the Post-AGI team.
On the Scaling of Reason: The observation regarding mathematical operations and the observable universe is drawn from Google DeepMind employee Zhengdong Wang’s “2025 Letter,” which provides a startling perspective on the sheer physical magnitude of current training runs.
The Mathematical Milestone: For a technical deep-dive into the IMO achievement, see DeepMind’s “AI achieves silver-medal standard solving International Mathematical Olympiad problems” (and the subsequent 2025 update on gold-medal-standard reasoning). The shift here was moving from simple pattern matching to formal mathematical verification.
The Jagged Frontier: Ethan Mollick and colleagues first coined the phrase “Jagged Frontier”, now widely used in AI circles.
The Inference Era and “System 2”: The transition toward test-time compute—allowing models to “think” before they speak—is best articulated in the technical reports surrounding OpenAI’s o1-series and the research into Scaling Laws for Test-Time Compute. This paradigm shift suggests that “thinking longer” can be as effective as “training larger.”
On Intention and Robotics: The progress in embodied AI is exemplified by the Gemini Robotics 1.5 release, which moved away from rigid programming toward “vision-language-action” models. For a more philosophical take on why this matters, I recommend Sam Weston’s “2025 Faves,” which explores the intersection of digital agency and physical presence.
Biology as Language: The Cell2Sentence project (originally out of David van Dijk’s lab at Yale, later scaled in collaboration with Google) represents a fundamental shift in transcriptomics, treating gene expression sequences as “sentences” to be parsed by LLMs. This work was a key context for the 2025 Breakthrough Prize awarded to David Liu for his pioneering work in base and prime editing, which moves us closer to a “programmatic” understanding of the genome.
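To make the “sentences” framing concrete, here is a toy sketch of the rank-order idea behind Cell2Sentence, with invented gene names and expression values; the real pipeline operates on full transcriptomes and is considerably more careful.

```python
# Toy illustration of the Cell2Sentence idea (not the project's code):
# rank a cell's genes by measured expression, then read the top gene
# names off in order as a "sentence" a language model can parse.
expression = {"CD3D": 42.0, "MS4A1": 0.0, "NKG7": 18.2, "GNLY": 7.5, "IL7R": 3.1}

def cell_to_sentence(expr: dict[str, float], top_k: int = 4) -> str:
    ranked = sorted(expr, key=expr.get, reverse=True)  # most expressed first
    return " ".join(ranked[:top_k])

print(cell_to_sentence(expression))  # -> CD3D NKG7 GNLY IL7R
```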
The Polymathic Future: My framing of “post-Wittgenstein optimism” is a nod to the idea that the limits of our language no longer define the limits of our world, provided we have a collaborator capable of translating imagination into execution.