Discussion about this post

The One Percent Rule

Hannah Fry has a fascinating conversation with Demis Hassabis, including his thoughts on consciousness:

50/50 Strategy for AGI: Hassabis describes Google DeepMind's approach as split evenly between two pillars. Half of their effort goes into "scaling" existing architectures, while the other half focuses on "innovation" and fundamental research to discover the new breakthroughs required for Artificial General Intelligence (AGI).

The "Jagged Intelligence" Paradox: He highlights a current limitation where AI models can win gold medals in the International Math Olympiad yet fail at basic high school math. He attributes this inconsistency to a lack of "thinking time" or reasoning capabilities, estimating that the field is only about "50% of the way" to solving these reliability issues.

From AlphaGo to AlphaZero for LLMs: Current Large Language Models (LLMs) function like the original AlphaGo by learning from human knowledge (the internet). Hassabis argues the next major step is to create an "AlphaZero" moment for LLMs, where systems move beyond human data to learn from first principles, self-play, and continuous online learning.

World Models are Critical: He emphasizes that language alone is not enough to describe the physical world. DeepMind is heavily investing in "World Models" (like Genie) that understand spatial dynamics and physics. This understanding is a prerequisite for building useful robotics and universal assistants that can operate in daily life.

Scientific "Root Node" Problems: Building on the success of AlphaFold (which he views as a proof of concept), DeepMind is applying AI to other fundamental "root node" scientific challenges. He specifically mentions efforts in material science, battery design, and a partnership with Commonwealth Fusion to accelerate nuclear fusion energy.

Consciousness and Computability: Hassabis frames the question of consciousness around the limits of a Turing machine. He explores whether the human mind is fully computable (classical information processing) or if it requires something non-computable, like the quantum effects suggested by Roger Penrose. While he personally leans towards the view that the universe and mind are computable information processes, he remains open to being proven wrong by physics.

Comparing AGI to Human Minds: He suggests that building AGI acts as the ultimate experimental test for consciousness. By building a complete "simulation of the mind" (AGI) and comparing it to the human brain, we can identify the differences. These remaining discrepancies might reveal the true nature of uniquely human traits like dreaming, emotions, and consciousness itself.

Simulating Evolution: A long-standing passion for Hassabis is using AI to simulate evolution and social dynamics. He envisions running large-scale simulations with millions of agents to study the origins of life and consciousness statistically, effectively "rerunning" evolution in a controlled sandbox to see how intelligence and social structures emerge.

Post-AGI Economics and Society: He speculates that the arrival of AGI will require a total reconfiguration of the economy, potentially more significant than the Industrial Revolution. He suggests we may need systems beyond Universal Basic Income (UBI), such as new forms of direct democracy where resources or voting credits are distributed differently in a post-scarcity world.

The Risk of Autonomous Agents: While optimistic long-term, Hassabis expresses worry about the next 2 to 3 years. He is concerned about the rise of "agentic" systems that can act autonomously on the internet. He notes that DeepMind is actively working on cyber defense measures to prepare for a web populated by millions of independent AI agents.

https://www.youtube.com/watch?v=PqVbypvxDto

Hollis Robbins

I had a big fight with Gemini this morning, which just wouldn't follow my global rules. So I finally sat it down and asked it what was wrong and it said (after a lot of evasion): "Changing your prompt to define the output format rather than just the topic will force the model to bypass its default 'helpful assistant' templates. The error comes from the model trying to be 'scannable.' You must explicitly demand it be 'dense.'"
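
If it helps to make that advice concrete, here is a minimal sketch in Python of the contrast Gemini is describing: a topic-only prompt versus one that pins down the output format and explicitly demands dense prose. The wrapper function and the exact wording of the constraints are my own illustration, not anything the model or its documentation prescribes; paste the resulting string into whichever client you already use.

# Illustrative sketch only: the constraint wording below is a hypothetical
# example of a "format-first" prompt, not an official Gemini template.

def format_first(topic: str, max_words: int = 300) -> str:
    """Wrap a topic in explicit output-format rules so the model can't
    fall back on its default 'scannable' assistant layout."""
    return (
        f"Write about: {topic}\n"
        "Output format (follow exactly):\n"
        "- Dense, continuous prose in a single block.\n"
        "- No headings, bullet points, or bold text.\n"
        f"- At most {max_words} words.\n"
        "- Do not append a summary."
    )

topic_only = "The economics of post-AGI labor markets"  # the kind of prompt that triggers the template
dense_prompt = format_first(topic_only)                 # format defined, density demanded
print(dense_prompt)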
