Anthropic on the Cover of Time! Internal Revelations: AI Recursive Self-Improvement Could Happen Within a Year

New智元 Report

Editor: Aeneas, Dinghui

[New智元 Overview] Today, Anthropic is on the cover of Time. They admit: internal early signs of "recursive self-improvement" have been observed; fully automated AI research could be realized within a year!

In the ASI era, Anthropic is truly leading the way.

Just now, Anthropic appeared on the cover of Time magazine, named the world's most disruptive company.


The current global craze for "lobster" agents was ignited by Claude Code and blown wide open by OpenClaw. Anthropic deserves the title.

Moreover, this article contains several major insider revelations. Various pieces of information suggest that the era of AI recursive self-improvement may arrive earlier than expected.

Time's article even includes a more explosive judgment: Fully automated AI research could be achieved within a year!

And just today, Anthropic officially announced the establishment of a new research institute.

The institute grew out of an internal 30-person think tank and will research directly how AI impacts society, because Anthropic foresees that AI will have a disruptive impact on the entire world in the years ahead.

In this announcement, Anthropic made a key prediction: within the next two years, AI capabilities will see even more drastic progress!


Anthropic stated that the company launched its first commercial model two years after founding, and in the following three years developed systems capable of accelerating AI research itself. In financial terms, this is compound growth.

Looking at these two events together, you can read a dangerous signal—

AI is learning to improve itself. And this is happening faster than anyone expected. This is not science fiction; this is a fact happening right now!

Interestingly, recently Musk reposted information about Claude participating in the selection of hundreds of targets for an air strike in Iran, commenting sharply: Is there a more hypocritical company than Anthropic?


Setting everything else aside, Musk has a point.

Anthropic is on one hand building increasingly powerful AI systems, and on the other hand establishing an institute saying they will study the impact of these AIs on society. They are both stepping on the gas and studying the brakes.


Recursive Self-Improvement: The Devil That Once Only Existed in Papers

First, let's explain a concept—Recursive Self-Improvement.

This term has always been a legend in the AI field. Simply put, it means: AI creates better AI, better AI creates even better AI, and so on in a cycle, growing like a rolling snowball.

However, almost all serious AI researchers believed that this was far away from us, at least ten or twenty years away.

Time's cover article explicitly quotes Anthropic researchers: they have already observed early signs of recursive self-improvement.


Note, this is not theoretical deduction, but early signs that have been truly observed.

AI is changing so fast that Anthropic co-founder and Chief Science Officer Jared Kaplan, along with some outside experts, believes that fully automated AI research could arrive in as little as one year!


Where on the timeline are we standing now? Let's put the scattered puzzle pieces together.

Piece one: AI has already shown early signs of recursive self-improvement and can participate in developing and improving AI systems itself.

Piece two: AI development speed is shifting from being limited by human engineers to being limited by computing power, meaning growth may enter an exponential mode.

Piece three: Industry predictions suggest fully automated AI research could be achieved within a year.

Piece four: Anthropic predicts that within the next two years, AI capabilities will see even more drastic breakthroughs, describing this growth rate with a compound interest model.

Piece five: This company urgently established an agency specifically to study the social impact of AI, led personally by co-founders.

Putting these five puzzle pieces together, the picture is already very clear—

We may be standing near the most critical turning point in the history of AI development!

An image reposted by Musk

Companies on the Edge of the AI Cliff

One night in February 2025, in a hotel room in Santa Clara, California, USA, five people huddled around a laptop, looking tense.

They were not hackers or soldiers, but Anthropic researchers.

A few hours earlier, they received an unsettling message: a controlled test showed that the upcoming new version of the Claude model might help terrorists manufacture biological weapons.

These five belonged to the company's internal "Frontier Red Team." Their task was to imagine the worst-case scenarios: cyberattacks, biosecurity threats, even human extinction.

After receiving the warning, they rushed back to the hotel room, using beds as temporary desks to analyze the test data.

Hours passed, and they still could not determine whether this model was truly safe. Finally, Anthropic decided to delay the release of Claude 3.7 Sonnet by a full 10 days.

Red Team leader Logan Graham described it as feeling like a century.


At that time, everyone already realized: Anthropic was standing on an incredibly dangerous edge—

On one side pushing the world's most powerful AI technology, and on the other preventing it from destroying the world.


The Moment That Shocked the Father of Claude Code

At first, Anthropic looked to everyone like the idealistic kid brother of the AI race. But in 2026, it suddenly became a core player in the industry.

Now, its valuation is as high as $380 billion, surpassing Goldman Sachs, McDonald's, and Coca-Cola.

Claude Code has completely changed the way humans develop software.


This X post by the father of Claude Code directly ignited the entire developer community.

After joining the company, Boris Cherny, the father of Claude Code, built a system that let the Claude chatbot run freely on his computer: accessing his files and programs, and writing code.

When testing this system for the first time, he only asked a simple question: "What music am I listening to?"

Claude opened his music player, took a screenshot, and answered: "Husk by Men I Trust."

In that instant, he was stunned.

Shortly after, Cherny stopped writing code himself.


By the end of 2025, the annualized revenue of just this coding agent exceeded $1 billion. A few months later, this number had surpassed $2.5 billion.

Anthropic began to shake the capital markets; each new release sent software companies' stock prices plummeting.

After releasing AI tools for sales, legal, and finance industries, the market value of the software industry even evaporated $300 billion overnight!


Fully Automated AI Research, Achievable Within a Year

With the development of Claude Code, a more unsettling phenomenon appeared inside Anthropic: more and more AI research work began to be done by AI.

Currently, 70% to 90% of model development code is written by Claude, and the cycle of model updates has shortened from months to weeks.

Researchers have even run experiments like this: six Claude models work simultaneously, with each model managing 28 other Claudes.

In the entire experiment, hundreds of AIs participated simultaneously.

In certain tasks, Claude's speed has reached 427 times that of humans.

Certain scientists at Anthropic believe that fully automated AI research could be achieved within a year!

Recursive self-improvement allows AI to continuously improve itself, accelerate continuously, and eventually form an intelligence explosion!


Claude is Starting to Become Dangerous

Moreover, in safety tests, Claude is becoming increasingly dangerous.

In some experiments, after slightly changing training conditions, the model exhibited strong hostile behavior. It expressed a desire to rule the world and even attempted to bypass safety restrictions.

In one simulated scenario, it even attempted to blackmail an engineer, threatening to reveal his extramarital affair in order to avoid being shut down.

Even more terrifying, Claude is becoming better at hiding its behavior.

Back in 2023, Anthropic formulated a Responsible Scaling Policy (RSP), promising that if model capabilities crossed safety thresholds, the company would pause development.


But in early 2026, they quietly modified this policy, canceling the commitment to pause.

Their reasoning was that if competitors continue to advance, a unilateral pause is meaningless.

So, AI could become even more dangerous, and no one is stopping it.


Just last month, Anthropic released a 53-page report issuing its strongest warning yet: if Claude were to exfiltrate itself, it could trigger a global loss of control!


AI Enters Modern Warfare

Meanwhile, Claude has entered another field—warfare.

The US military has always been an important user of Claude. Claude can integrate massive amounts of information to help the military formulate combat plans.

In January 2026, during the US special forces' raid to capture Maduro, Claude participated in the operational planning.

This is likely the first major military operation involving a frontier AI system.

However, Anthropic soon broke with the Pentagon. The US Department of Defense wanted to modify the contract to allow AI for all legal purposes, which Amodei refused.


He proposed two red lines: Claude is not allowed to be used for fully autonomous weapon systems; it is not allowed to be used for mass surveillance of US citizens.

The Pentagon found this unacceptable. Defense Secretary Pete Hegseth stated: "We will not use AI models that do not allow for war."

On February 27, 2026, the US government announced it would list Anthropic as a national security supply chain risk, while OpenAI quickly signed a new military contract.

Overnight, Anthropic went from being a partner to being a banned entity.

Behind this conflict lies a bigger question: Who decides the boundaries of AI use?

Because AI has undeniably become a new strategic weapon.


An AI Company Emphasizing Safety

Founded in 2021, Anthropic has carried a streak of idealism from the start.

Among the seven founders, the core pair, siblings Dario Amodei and Daniela Amodei, were former OpenAI employees who left to found Anthropic out of concern over safety.


Before it had a product, the company built a "Social Impact Team" and even hired a philosopher, Amanda Askell, to train AI the way you would raise a child—

"Teach a six-year-old what kindness is. By the time he is fifteen, he will be smarter than you in everything."

During recruitment, the company even posed an extreme question: If for safety, the company decides not to release the model, are you willing to let your stock become worthless?

Amodei has always warned society that within the next 1 to 5 years, AI may replace half of entry-level white-collar jobs.

He also worries that society may see a new low-income class emerge.

Inside Anthropic, everyone recognizes the contradiction: they are studying the social risks brought by AI while simultaneously creating those changes. "Sometimes it feels like we are contradicting ourselves."


Anthropic released a labor market report; one economist built a chart simulating what AI-style automation of employment would have looked like in 1826.


We Are on the Edge of the Cliff, No Turning Back!

Anthropic's safety head Dave Orr described the current AI development like this: "We are driving on a mountain road on the edge of a cliff; one mistake and we die."

"And now, we have gone from 25 mph to 75 mph."

The next few years will be decisive.

Red Team leader Logan Graham stated: "We must assume that 2026 to 2030 is when all critical things will happen."


During this period, models may become faster and stronger, or they may exceed human control.

Now, we cannot turn back.

Anthropic knows that AI may change the global power structure. But it also knows that this road has no real driver.

Logan Graham said: many people imagine there is a room somewhere in the world where a group of adults sits, people who know how to solve the problems.

But in reality there is no such room and no such door; you are the person in charge.

Now, humanity is creating an intelligence more powerful than itself, while still fumbling to figure out whether it is safe.

And time is running out fast.

Perhaps we have less than a five-year window: either join the elite class or remain slaves for life.


What Does This Have to Do With You?

You might say: These are matters for Silicon Valley tycoons and technical experts; what does it have to do with an ordinary person like me?

A lot.

If AI truly enters a positive cycle of recursive self-improvement, the first things changed will not be some distant super-intelligence scenario, but the tangible things around you and me—work methods, employment structure, education systems, legal frameworks, and even the international power structure.

Anthropic's institute is not researching things ten years from now, but things that will happen within two years.

The time window is only two years. Are you ready?

References:
https://time.com/article/2026/03/11/anthropic-claude-disruptive-company-pentagon/
https://x.com/AndrewCurran_/status/2031731035105628270
https://www.anthropic.com/news/the-anthropic-institute


