Just Now, Google AI Cracks the 'Alien' Puzzle! Breaks 10-Year Record, Self-Written Algorithms Stun Nobel Laureates


New Smart Yuan Report  

Editor: Haokun Aeneas

[New Smart Yuan Introduction] Google DeepMind has done it again: AlphaEvolve autonomously writes algorithms, rewriting lower bounds for 5 classic Ramsey numbers in one go, breaking a math record that stood for a decade! Nobel laureate Hassabis and Turing Award winner LeCun have both praised it—AI is fundamentally changing the way math breakthroughs happen!

Just moments ago, Google DeepMind pulled off another miracle.

Their system, AlphaEvolve, has made a breakthrough in extremal combinatorics, improving the lower bounds of five classic Ramsey numbers all at once!


Paper Address: https://arxiv.org/pdf/2603.09172

What is even more surprising is that this was not achieved by manually designing five different algorithms, but by using a single unified "meta-algorithm".


Deriving these search algorithms is extremely difficult; the previous best results date back at least ten years. This time, DeepMind let large language models write the algorithms autonomously, toppling decade-old mathematical records in one stroke!


After the DeepMind researchers announced the good news, CEO Hassabis also quickly reposted to celebrate. He stated that AlphaEvolve's achievement is another major milestone for AI in the field of mathematics!


Even Turing Award winner LeCun took the initiative to congratulate the team.


Just how difficult is the Ramsey number problem? It is fair to say it has driven even the greatest mathematicians to despair.

The legendary mathematician Paul Erdős once quipped: if aliens demanded the value of the Ramsey number R(5,5) within a year or they would destroy Earth, humanity's best bet would be to marshal all its mathematicians and computers to find it; but if they demanded R(6,6), our most rational choice would be to attack the aliens first.


For decades, Ramsey numbers have been one of the most stubborn problems in combinatorial mathematics. To make progress on a specific Ramsey number, mathematicians usually have to design a specialized search algorithm from scratch.

But now, AlphaEvolve has broken all five records at once!


And this is just the tip of the iceberg of AlphaEvolve's capabilities.

Previously, it had already broken the 56-year-old matrix multiplication record, optimized Google data center scheduling, and discovered structural simplification schemes in AI chip design that human engineers had not noticed.

When a system capable of automatically discovering algorithms can also optimize the computing infrastructure that trains itself, we are undoubtedly witnessing the birth of a new species.


Numbers Even Aliens Can't Calculate

About one hundred years ago, British logician Frank Ramsey proved the following conclusion.

In a group of six people, regardless of the relationships among these six people, one can always find three people who all know each other, or three people who are all strangers to each other.

This simple and intuitive example is the earliest prototype of Ramsey theory.


In graph theory, R(r,s) is defined as the smallest integer n such that any undirected graph containing n vertices necessarily satisfies at least one of the following conditions:

  • There exists a complete subgraph (clique) with r vertices

  • Or there exists an independent set with s vertices

For example, R(3,3) = 6.

This means that any graph with 6 vertices must contain either a triangle or three mutually non-adjacent vertices.
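This small case is easy to verify by brute force. The sketch below (illustrative code, not from the paper; the function name is ours) checks both directions: the 5-cycle witnesses R(3,3) ≥ 6, and none of the 2^15 graphs on 6 vertices avoids both structures.

```python
from itertools import combinations

def is_ramsey_witness(n, edges, r, s):
    """True if the graph on n vertices avoids both an r-clique and an
    s-independent set, certifying the lower bound R(r, s) >= n + 1."""
    adj = {frozenset(e) for e in edges}
    linked = lambda a, b: frozenset((a, b)) in adj
    for sub in combinations(range(n), r):        # look for an r-clique
        if all(linked(a, b) for a, b in combinations(sub, 2)):
            return False
    for sub in combinations(range(n), s):        # look for an s-independent set
        if not any(linked(a, b) for a, b in combinations(sub, 2)):
            return False
    return True

# The 5-cycle has no triangle and no 3 mutually non-adjacent vertices,
# so R(3,3) >= 6.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_ramsey_witness(5, c5, 3, 3))  # True

# Conversely, every one of the 2^15 graphs on 6 vertices contains a
# triangle or a 3-independent set, so R(3,3) = 6 exactly.
pairs = list(combinations(range(6), 2))
no_witness = not any(
    is_ramsey_witness(6, [p for i, p in enumerate(pairs) if mask >> i & 1], 3, 3)
    for mask in range(1 << 15))
print(no_witness)  # True
```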


So far, these Ramsey numbers have been computed exactly only for small parameters, such as R(3,s) for s ≤ 9 and R(4,s) for s = 4, 5.

But for a large number of parameters, there is still a huge gap between upper and lower bounds.

How do we get the lower bounds?

If we can construct a graph with n vertices that contains no r-clique and no s-independent set, then it shows: R(r,s) ≥ n + 1.
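To see the recipe in action, here is an illustrative check (not the paper's code): the Paley graph on 17 vertices, which joins i and j whenever i - j is a nonzero square mod 17, contains no 4-clique and no 4-vertex independent set, certifying the classical bound R(4,4) ≥ 18 (and in fact R(4,4) = 18).

```python
from itertools import combinations

# Paley graph on F_17: i ~ j when (i - j) is a nonzero quadratic
# residue mod 17. Since -1 is a square mod 17, adjacency is symmetric.
p = 17
residues = {(x * x) % p for x in range(1, p)}
adjacent = lambda i, j: (i - j) % p in residues

# No 4 vertices are mutually adjacent, and no 4 are mutually
# non-adjacent, so R(4,4) >= 17 + 1 = 18.
no_k4 = all(not all(adjacent(a, b) for a, b in combinations(sub, 2))
            for sub in combinations(range(p), 4))
no_i4 = all(any(adjacent(a, b) for a, b in combinations(sub, 2))
            for sub in combinations(range(p), 4))
print(no_k4 and no_i4)  # True
```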


Five Ten-Year Records, Leveled in One Go

This time, DeepMind researchers published a paper announcing that AlphaEvolve has refreshed the lower bounds of five classic Ramsey numbers in one go.

  • R(3,13): 60 → 61 (Previous record stood for 11 years)

  • R(3,18): 99 → 100 (Previous record stood for a full 20 years)

  • R(4,13): 138 → 139 (Previous record stood for 11 years)

  • R(4,14): 147 → 148 (Previous record stood for 11 years)

  • R(4,15): 158 → 159 (Previous record stood for 6 years)


Although each number moved up by only 1, in the world of Ramsey numbers a gain of 1 is harder won than an order-of-magnitude improvement in many other areas of mathematics.

More critically, all 5 breakthroughs came from the same system.

In addition to these 5 new records, it also matched the lower bounds of all Ramsey numbers whose exact values are known, as well as the current best lower bounds for many other values, covering 28 values of R(r,s) in total.


AlphaEvolve Doesn't Solve Problems, It Solves Algorithms

So how did AlphaEvolve do it?

Simply put, it is an agent that uses large language models to evolve code. The object of its search is the search algorithm itself.

This is the most critical point to understand this work.


Traditional path:

Human experts manually design a search algorithm (e.g., simulated annealing, tabu search) based on mathematical intuition and domain knowledge, then let the computer run it. Whether the algorithm is good depends entirely on the person's skill.

AlphaEvolve has automated this step. Its workflow is as follows:

Step 1: Maintain an "Algorithm Population".

At first, there is only a single, simplest baseline program (perhaps one that just returns an empty graph).

Step 2: Use LLM to mutate code.

Select a well-performing algorithm from the population, and let the large language model (Gemini) "mutate" the code—modify the search strategy, change the initialization method, add new heuristic rules.

Step 3: Execute and score.

Run the mutated program to see how large a valid graph it can construct. If the size of the graph exceeds the current best record, give a high score; if the graph is close to valid (few violations), also give a certain reward—this is to guide the search to push towards the boundary.

Step 4: Evolutionary iteration.

Put the high-scoring algorithm back into the population, and continue selecting, mutating, and evaluating. Cycle repeatedly.

This is called "meta-search"—searching in the algorithm space, not the graph space.

In other words, AlphaEvolve's final output is not a graph, but a piece of code that can find good graphs.
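The four steps above can be sketched in a few lines. To stay self-contained, this toy (our code, not DeepMind's) evolves candidate circulant distance sets rather than programs, with a random flip standing in for Gemini's code mutation; the scoring mirrors Step 3 by counting constraint violations.

```python
import random
from functools import lru_cache
from itertools import combinations

N, R, S = 13, 3, 5    # toy target: a (3,5)-witness on 13 vertices

@lru_cache(maxsize=None)
def score(dists):
    """Step 3: count violations (R-cliques plus S-independent sets) in
    the circulant graph on Z_N with distance set `dists`. 0 = valid."""
    adj = lambda i, j: min((i - j) % N, (j - i) % N) in dists
    bad = sum(all(adj(a, b) for a, b in combinations(sub, 2))
              for sub in combinations(range(N), R))
    bad += sum(not any(adj(a, b) for a, b in combinations(sub, 2))
               for sub in combinations(range(N), S))
    return bad

def mutate(dists):
    """Step 2 stand-in for the LLM: flip one distance, or occasionally
    restart from a random distance set to keep the search global."""
    if random.random() < 0.2:
        return frozenset(d for d in range(1, N // 2 + 1)
                         if random.random() < 0.5)
    return dists ^ {random.randint(1, N // 2)}

population = [frozenset()]            # Step 1: trivial baseline
for _ in range(5000):                 # Step 4: evolutionary iteration
    child = mutate(random.choice(population))
    population = sorted(set(population) | {child}, key=score)[:8]

best = population[0]
print(score(best))  # 0: a valid witness was found, so R(3,5) >= 14
```

The search space here is tiny (64 distance sets), so the loop converges quickly; the point is the loop's shape, not its power.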


Four Major Algorithm Families Invented by AI

The paper showcases the specific search algorithms AlphaEvolve discovered for 28 R(r,s) values. These algorithms vary greatly in style but can be grouped into four families.

First Family: Random Start, Brute-Force Annealing

The most primitive approach: start from a random graph and use simulated annealing to gradually eliminate violating structures (triangles or overly large independent sets).

R(3,5), R(3,7) and other smaller values use this type of method.

Simple, but not enough for large-scale problems.


Second Family: Algebraic Structure Seeding

Use algebraic graphs with deep mathematical backgrounds, such as Paley graphs, quadratic residue graphs, and cubic residue graphs, as starting points, then perform local optimization.

These graphs themselves have good properties close to Ramsey constraints, equivalent to giving the search a "high starting point".

The breakthrough for R(4,13) belongs to this family—starting from the cubic residue Cayley graph on the finite field F₁₂₇.

Third Family: Circulant Graphs + Sum-Free Set Guidance

Start from circulant graphs, where the algebraic properties of sum-free sets naturally guarantee the absence of triangles, then expand the graph incrementally.

The breakthroughs for R(3,13) and R(3,18) belong to this family. This is also the family that produced the most new records.
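The sum-free trick is concrete enough to check in a few lines (an illustrative sketch, not the paper's code), using the classical witness for R(3,5) ≥ 14. In a circulant graph on Z_n, a triangle forces three connection-set elements of the form x, y, x + y, so a sum-free connection set rules triangles out.

```python
from itertools import combinations

# The circulant graph C13(1,5): vertices Z_13, with i ~ j whenever
# (i - j) mod 13 lies in the symmetric connection set S.
n, S = 13, {1, 5, 8, 12}          # 8 = -5 and 12 = -1 (mod 13)

# S is sum-free: no x + y (mod 13) lands back in S. A triangle
# {0, x, x + y} would need x, y, x + y all in S, so sum-freeness
# forces triangle-freeness.
assert all((x + y) % n not in S for x in S for y in S)

adj = lambda i, j: (i - j) % n in S

# Confirm by brute force: no triangle, and no 5 mutually
# non-adjacent vertices. Together these certify R(3,5) >= 14.
no_triangle = all(not all(adj(a, b) for a, b in combinations(sub, 2))
                  for sub in combinations(range(n), 3))
no_i5 = all(any(adj(a, b) for a, b in combinations(sub, 2))
            for sub in combinations(range(n), 5))
print(no_triangle and no_i5)  # True
```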


Fourth Family: Hybrid/Fractal/Spectral Methods

The most complex type, combining fractal self-similar constructions, spectral property analysis, parallel execution of multiple heuristics, etc.

The algorithm for R(3,11) used a combination of fractal initialization + adaptive energy field + dynamic temperature adjustment.

Interestingly, each algorithm is highly targeted to specific (r,s) values.

In other words, AlphaEvolve did not find a universal "master algorithm", but rather "tailor-made" a search strategy for each problem.

However, this customization process is fully automatic.


How Ingenious Are the Algorithms Designed by AI?

Let's take a close look at the details of a groundbreaking algorithm.

R(4,15): Harmonic Memory + Spectral Initialization + Toxic Orbit Tunneling.

For this problem, AlphaEvolve designed a mechanism called "Harmonic Genetic Memory"—

Maintain a global memory pool across generations, recording which edges and which cyclic distances frequently appear in successful graphs.

During initialization, it either uses algebraic constructions like generalized Paley graphs, or expands the existing best graph based on the probability distribution of harmonic memory.

The local search phase combines tabu search with simulated annealing. But when deleting an edge to eliminate a 4-clique, it doesn't just consider the direct effect: it also weighs the risk that the deletion creates a large independent set, and it grants "harmonic" protection to edges that have performed well historically.

The most amazing part is an escape mechanism called "Harmonic Tunneling".

If the search gets stuck with 4-cliques still left to eliminate, the algorithm finds a "toxic orbit": the cyclic distance d* that appears most frequently among the remaining 4-cliques. It then flips all edges at distance d* in a single move.

This is a drastic operation, the equivalent of major surgery on the graph, and it jumps the search straight out of the current local optimum.
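For a graph stored in circulant form, the tunneling move is remarkably small. A sketch (our illustrative code, not the paper's; it assumes the graph is represented by its set of circulant distances):

```python
from collections import Counter
from itertools import combinations

def harmonic_tunnel(n, dists, cliques):
    """Flip every edge on the 'toxic orbit': the circulant distance
    that occurs most often among the remaining 4-cliques."""
    circ = lambda i, j: min((i - j) % n, (j - i) % n)
    counts = Counter(circ(a, b) for clique in cliques
                     for a, b in combinations(clique, 2))
    d_star = counts.most_common(1)[0][0]
    return dists ^ {d_star}          # toggle the whole distance class

# Toy usage: one leftover 4-clique {0,1,2,3}; distance 1 occurs most
# often (3 times), so all distance-1 edges are flipped out at once.
new_dists = harmonic_tunnel(13, {1, 2, 3}, [(0, 1, 2, 3)])
print(sorted(new_dists))  # [2, 3]
```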

Human experts would find it hard to conceive of such a strategy combination. But AlphaEvolve figured it out on its own through repeated mutation and evolution.


Human Experts, Utterly Defeated

As we have seen, the algorithms AlphaEvolve generates are formidable.

But a sharper question is: compared with algorithms manually designed by top human experts, where does it win?

The paper gives three comparisons, each more striking than the last.

R(4,13): Same starting point, AI wins in the second half.

The previous best record was created by Exoo and Tatarevic in 2015. Their algorithm started from the cubic residue graph and used standard simulated annealing for optimization.

AlphaEvolve chose the same starting point (cubic residue graph on F₁₂₇), but the subsequent strategy was completely different—

Two-stage filtering, perturbation-based vertex expansion, and tabu search with "strategic kicking" (forced random jump after 500 steps of stagnation).


R(4,14): AI even changed the starting graph.

Exoo initialized with cubic graphs in 2015. AlphaEvolve instead chose circulant graphs, exploring a completely different search space and accelerating with random orbit sampling and high-performance bit operations.


R(4,10): AI invented a strategy that doesn't exist in human literature.

The previous best was obtained by Harborth and Krause in 2003 using branch-and-bound search.

In contrast, AlphaEvolve's algorithm samples from distributions over circulant graphs, combining tabu search with adaptive probabilities and an approximate hit-memory ("warehouse") mechanism.

The authors of the paper state bluntly: "This search strategy has no precedent in existing literature."


In summary, the algorithms generated by AlphaEvolve have three significant characteristics:

1. It exploits known algebraic structures.

AlphaEvolve does not search randomly, but chooses "good starting points" like Paley graphs and circulant graphs. This shows that the LLM implicitly applies domain knowledge of graph theory and algebra during the code mutation process.

2. It likes to chain multiple heuristics.

Human algorithms often use a single strategy (e.g., pure simulated annealing), while AlphaEvolve often chains multiple methods together—first tabu search, then annealing, then genetic algorithms, then another round of local repair.

3. It invents fast approximate counting methods.

Accurately counting all 4-cliques and size-13 independent sets in a large graph is very time-consuming. AlphaEvolve's algorithms use tricks like bit-level acceleration, heuristic filtering, and incremental updates to cut the computational overhead dramatically.
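The bit-level idea can be illustrated with a small sketch (our code, not the paper's): store each vertex's neighbourhood as an integer bitmask, so the common neighbours of an edge's endpoints come from a single AND, and 4-cliques are counted by scanning edges rather than all 4-subsets.

```python
from itertools import combinations

def count_k4_bitset(n, edges):
    """Count 4-cliques using neighbourhood bitmasks."""
    nbr = [0] * n
    for a, b in edges:
        nbr[a] |= 1 << b
        nbr[b] |= 1 << a
    total = 0
    for a, b in edges:
        common = nbr[a] & nbr[b]              # adjacent to both endpoints
        cs = [v for v in range(n) if common >> v & 1]
        # each adjacent pair inside `common` completes a K4 with (a, b)
        total += sum(1 for u, v in combinations(cs, 2) if nbr[u] >> v & 1)
    return total // 6                          # every K4 has 6 edges

# K5 contains C(5,4) = 5 copies of K4
k5_edges = list(combinations(range(5), 2))
print(count_k4_bitset(5, k5_edges))  # 5
```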


AI is Rewriting the Rules of Scientific Discovery

Hassabis once summarized the significance of AlphaEvolve like this:

Knowledge begets more knowledge, algorithms optimize other algorithms—the flywheel is accelerating.

Gemini generates code, code evolves into better algorithms, algorithms optimize Google's infrastructure, more efficient infrastructure accelerates Gemini's training, stronger Gemini generates better code...

At this point, AI is already embedded in the infrastructure of self-improvement.

And this breakthrough with Ramsey numbers pushes the story one step further.

Matrix multiplication has a clear optimization goal and a continuous evaluation criterion. Ramsey numbers are different: their search space is discrete and combinatorially explosive, with no gradient to climb.

Finally, the paper also admits an important limitation:

It can currently handle only lower bounds, that is, constructing witness graphs. The upper bounds of Ramsey numbers require proving that no graph above a certain size can satisfy the constraints, which cannot be settled by exhibiting a single example.

AlphaEvolve cannot take this path for now. However, on the path it can take, no one has gone further than it.

Ten years ago, AlphaGo played Move 37, proving that AI can surpass human intuition in specific fields.

Five years ago, AlphaFold solved the problem of protein structure prediction that had plagued the biology community for 50 years.

AlphaEvolve simply jumped out of the fixed track—it started inventing rules itself.

References:

https://x.com/demishassabis/status/2032267485735460867

https://x.com/pushmeet/status/2031727892346941499

https://arxiv.org/pdf/2603.09172

https://x.com/aakashgupta/status/2032326637577257345


AINews · AI News Aggregation Platform
© 2026 AINews. All rights reserved.