Nobel Laureate Hassabis Warns: AlphaGo Has Awakened, AGI Is Taking Over Scientific Research

Ten years ago, AlphaGo's divine move silenced professional Go players; ten years later, Demis Hassabis says: At that moment, we knew AI was ready to tackle truly important matters. From protein folding to International Mathematical Olympiad gold medals, the technological path pioneered by AlphaGo is reshaping science itself.

March 2016, Seoul.

World Go champion Lee Sedol faced an unprecedented opponent: a computer program.

In the second game, move 37 was played—a move no professional player would ever consider.

Commentators initially assumed the move was a mistake, but more than a hundred moves later, AlphaGo had won.

That night, 200 million people watched the live broadcast, and silence filled the venue for nearly a minute.

Ten years later, reflecting on this history, Demis Hassabis said that at the moment AI played "move 37," he realized: the technology was ready.

Not ready to win at Go, but ready to tackle genuine scientific challenges.

That judgment now appears completely correct.

Billions of Positions, Not a Single Exhaustive Search

The Go board has roughly 10^170 possible positions, a number far exceeding the total count of atoms in the observable universe.

Traditional exhaustive search with pruning had already reached its limits in chess and was utterly inadequate for Go.

AlphaGo combined deep neural networks, reinforcement learning, and Monte Carlo tree search.

It first learned from human game records which moves were reasonable, building an initial intuitive model.

Then, through hundreds of thousands of self-play games, the reinforcement learning mechanism continuously strengthened strategies with higher win rates.

Finally, during actual gameplay, it searched only the most valuable branches.

The essence of this combination is replacing hand-written rules with learning and brute force with guided search, allowing AI to develop, through self-play, strategies that surpass human experience.
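The learning-plus-search recipe is concrete enough to sketch. Below is a toy Monte Carlo tree search (the UCT variant) for a trivial counting game: players alternately add 1 or 2, and whoever reaches exactly 10 wins. Everything here — the game, the constants, the names — is illustrative, not AlphaGo's actual system, which replaces the random rollouts with learned policy and value networks:

```python
# Toy Monte Carlo tree search (UCT) -- illustrative only.  AlphaGo pairs a
# search like this with learned policy/value networks instead of the random
# rollouts used here.
import math
import random

TARGET = 10  # players alternately add 1 or 2; reaching exactly 10 wins

class Node:
    def __init__(self, total, player, parent=None):
        self.total = total          # running sum (the game state)
        self.player = player        # side to move: 0 or 1
        self.parent = parent
        self.children = {}          # move -> Node
        self.visits = 0
        self.wins = 0.0             # wins for the side that moved INTO this node

    def moves(self):
        return [m for m in (1, 2) if self.total + m <= TARGET]

def ucb(child, parent_visits, c=1.4):
    # Upper confidence bound: exploit high win rates, explore rare branches.
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(total, player):
    # Random playout to the end of the game; returns the winning player.
    while total < TARGET:
        total += random.choice([m for m in (1, 2) if total + m <= TARGET])
        player = 1 - player
    return 1 - player  # the side that just reached TARGET wins

def mcts(root, iterations=500):
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded nodes via UCB.
        while node.moves() and len(node.children) == len(node.moves()):
            node = max(node.children.values(), key=lambda ch: ucb(ch, node.visits))
        # 2. Expansion: try one untried move, if any.
        untried = [m for m in node.moves() if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(node.total + m, 1 - node.player, node)
            node = node.children[m]
        # 3. Simulation: random playout from here.
        winner = rollout(node.total, node.player) if node.total < TARGET else 1 - node.player
        # 4. Backpropagation: update statistics along the path.
        while node is not None:
            node.visits += 1
            if winner != node.player:   # winner is the side that moved into node
                node.wins += 1.0
            node = node.parent
    # Play the most-visited move, as AlphaGo does.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

With enough iterations the visit counts concentrate on the strongest branch; AlphaGo's contribution was biasing steps 1 and 3 with neural networks so that only promising branches are ever explored at all.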

After AlphaGo, DeepMind continued advancing.

AlphaGo Zero completely discarded human game records, starting from random moves and learning through self-play, ultimately becoming the strongest player in history.

Next came AlphaZero. Using the same system, it learned chess from scratch within hours and defeated Stockfish, the strongest dedicated chess engine at the time, while also producing strategies never seen before by human players.

Hassabis summarized this history in one sentence:

This proved the methodology was correct; it was time to apply it to the real world.

From the Go Board to the Laboratory: Technology Transfer of AlphaGo

AlphaGo demonstrated that AI could find solutions humans had never reached through learning plus search.

This approach has been directly and powerfully transferred to scientific fields.

Protein Folding

This problem had challenged scientists for 50 years.

Proteins fold from amino-acid sequences into three-dimensional structures, and it is that structure that determines their function.

Understanding the structure is crucial for conquering disease and developing new drugs, but predicting it computationally was long considered intractable.

In 2020, AlphaFold 2 solved this problem.

Subsequently, DeepMind predicted the structures of all 200 million known proteins and released them in a freely accessible database for researchers worldwide.

Currently, over 3 million researchers worldwide are working with the AlphaFold database.

In 2024, Demis Hassabis and John Jumper received the Nobel Prize in Chemistry for this achievement.

Mathematical Reasoning

This is the most direct inheritance direction from AlphaGo.

AlphaProof combines a language model with AlphaZero's reinforcement-learning and search algorithms to learn to prove formalized mathematical statements.

Essentially, it employs the same framework as AlphaGo's "finding optimal solutions," except the search space shifts from the Go board to the space of mathematical propositions.
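"Formalized" here means written in the language of a proof assistant, where every inference step is machine-checked. A toy statement in Lean 4 syntax (purely illustrative, not AlphaProof output) looks like:

```lean
-- A formalized statement: every natural number n satisfies n + 0 = n.
-- `rfl` closes the goal because both sides compute to the same value.
theorem n_add_zero (n : Nat) : n + 0 = n := rfl
```

Searching for a proof means searching this space of machine-checkable steps, just as AlphaGo searched the space of board positions.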

In 2024, AlphaProof and AlphaGeometry 2 together reached silver-medal standard at the International Mathematical Olympiad (IMO) for the first time.

Later, Gemini Deep Think went even further.

Using a method inspired by AlphaGo, it won a gold medal at the 2025 IMO.

Algorithm Discovery

AlphaEvolve represents this direction.

Just as AlphaGo searches for the next optimal move, AlphaEvolve searches for "the next more efficient algorithm."

It discovered a new method for matrix multiplication. This fundamental operation drives almost all modern neural networks and had been studied for decades; AlphaEvolve found a solution humans had never discovered.
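To see the kind of saving such a search targets, consider the classic result in this area: Strassen's scheme multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8, and applied recursively to blocks it pushes matrix multiplication below O(n^3). AlphaEvolve's own discovery is a different, larger scheme (for 4×4 complex matrices), so the sketch below only illustrates the genre, not its result:

```python
# Strassen's 2x2 scheme: 7 multiplications instead of the naive 8.
# Classic illustration of a "faster matrix multiplication algorithm";
# AlphaEvolve searches for improvements of exactly this kind.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```

One multiplication saved per 2×2 block sounds trivial, but compounded across recursive block decompositions it changes the asymptotic cost — which is why a single newly discovered scheme matters.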

Hassabis called this "AlphaEvolve's move-37 moment." Currently, it is being used to optimize data centers and quantum computing problems.

Scientific Collaboration

AI co-scientist systems embed AlphaGo's debate-style search principles into the scientific research process.

They enable multiple AI agents to "debate" scientific hypotheses, filtering for the most valuable directions.
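One minimal way to picture this debate-and-filter loop is a pairwise tournament with Elo-style ratings. The sketch below is hypothetical — the `judge` function, the rating scheme, and all names are assumptions for illustration, not details of the actual co-scientist system:

```python
# Hypothetical sketch of debate-style hypothesis filtering.  A judge
# (in a real system, an AI agent weighing evidence) compares hypotheses
# pairwise; Elo-style ratings surface the strongest candidates.
import itertools

def elo_update(winner_r, loser_r, k=32):
    # Standard Elo update: upsets move ratings more than expected wins.
    expected = 1 / (1 + 10 ** ((loser_r - winner_r) / 400))
    delta = k * (1 - expected)
    return winner_r + delta, loser_r - delta

def tournament(hypotheses, judge):
    ratings = {h: 1000.0 for h in hypotheses}
    for a, b in itertools.combinations(hypotheses, 2):
        winner, loser = (a, b) if judge(a, b) else (b, a)
        ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])
    # Highest-rated hypotheses first.
    return sorted(hypotheses, key=ratings.get, reverse=True)
```

Whatever the real judging criteria are, the design idea carries over from self-play: relative comparisons, repeated many times, rank candidates without needing an absolute measure of quality.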

In validation research at Imperial College London, this system analyzed decades of literature and independently derived the same antibiotic resistance hypothesis that researchers had spent years verifying.

Ten Years Later: AlphaGo's Legacy Continues in Gemini

The methodology proven by AlphaGo is now operating within Gemini.

The reasoning mechanisms of the latest-generation Gemini models utilize search and planning techniques pioneered by AlphaGo and AlphaZero.

Gemini was designed from the outset to be multimodal. Instead of converting images and audio into text for processing, it directly builds understanding of the world across multiple modalities simultaneously.

In Hassabis's vision, the path to AGI requires three elements working together: the world model provided by Gemini, AlphaGo-style search and planning capabilities, and coordinated invocation of specialized tools like AlphaFold.

Only when combined do these three constitute "truly general" AI.

He also mentioned a higher-dimensional standard in his article:

True AGI isn't just about finding strategies in Go that humans never imagined; it's about inventing a game as profound, elegant, and worthy of centuries of human study as Go itself.

The gap between these two accomplishments is roughly the distance between "finding answers" and "posing questions."

Current AI has advanced far in the former, but no one knows how long the latter will take.

At the end of the article, Hassabis quoted Lee Sedol himself.

The world champion once defeated by AlphaGo, now an adjunct professor at Ulsan National Institute of Science and Technology in South Korea, evaluated that match as follows:

I believe the most important revelation provided by AlphaGo was a decisive preview of the AI era—proving this isn't some distant, vague future, but a reality that is arriving. It's like a roadmap from the future, sending a clear signal to humanity: the world is changing.

Ten years have passed, and DeepMind has already reached many milestones on that roadmap.

From protein folding to mathematical gold medals, from algorithm optimization to AI-collaborative scientific research, AlphaGo's technological legacy has spilled beyond the Go board, permeating the very operational methods of science itself.

Where will the next "move 37" occur?

Hassabis provided no answer, but he said the goal is already on the horizon.

References:
https://x.com/GoogleDeepMind/status/2031399096267718847
