New Intelligence Report
Editor: Editorial Department
【New Intelligence Report】 At the World Economic Forum in Davos, two AI giants shared the stage for a rare conversation about the future of AGI. Dario Amodei made a startling prediction: AI could take over software engineers' work end to end in as little as 6-12 months, and half of all entry-level white-collar jobs could disappear within 1-5 years.
Davos, a gathering place for global elites, saw the CEOs of Anthropic and Google DeepMind appear together again.
This time, they sat together to discuss a topic that is both exciting and unsettling—the first day after AGI arrives.
Unlike their meeting in Paris last year, their expectations now carry a sense of urgency, as if it's really coming soon.
During the half-hour fireside chat, Dario Amodei dropped a "nuclear bomb" that shocked everyone—
AI's end-to-end takeover of almost all software engineers' (SWEs) work is just 6-12 months away!
He also revealed that at Anthropic, engineers barely write code by hand anymore; it's all done by AI, with humans only responsible for review and guidance.
Dario Amodei and Demis Hassabis almost simultaneously acknowledged that the path to AGI is becoming clearer.
As model parameters continue to scale, multimodal capabilities grow stronger, and agents become more autonomous—these factors combined mean AGI is approaching, and it's only a matter of time.
Here are the main highlights of the interview, with all the core viewpoints—
Dario Amodei:
By 2026/2027, AI models will reach "Nobel-level" capabilities in multiple fields; 50% of white-collar jobs will disappear within one to five years.
AI writing code → better AI → faster iteration—this cycle is accelerating and closing.
Anthropic's revenue has increased a hundredfold over three years, showing exponential growth.
If AI writing AI can achieve a perfect closed loop, it will lead to a miraculous, extremely rapid explosion.
Demis Hassabis:
There is a 50% probability of achieving AGI by the end of this decade (before 2030).
There will be short-term pain, but new jobs will emerge in the long term; he stretches the timeline for large-scale displacement of human workers to 3-5 years.
Google DeepMind has regained its startup spirit and is reclaiming the industry's leading position.
If the physical world or hardware becomes a bottleneck, the development curve will be flatter, leaving more time for humans to adapt.
The Two Giants Debate AGI and AI's Self-Evolution Closed Loop
Regarding when AGI will arrive, the two experts gave their respective predictions.
Dario Amodei is not just aggressive; he is racing ahead. Standing at the threshold of 2026, he firmly bets that by 2026 or 2027, models will emerge that reach "Nobel-level" capabilities in many fields.
"I don't think the results will deviate much."
His confidence comes from a "cycle" that is accelerating toward closure. One envisioned mechanism is—
AI writing code → AI conducting research → a completely self-iterating closed loop.
Dario made a judgment that shook the AI community:
Once this positive feedback loop truly runs smoothly, R&D speed will take off directly, sprinting exponentially.
Compared to Dario's aggressiveness, Demis Hassabis's stance is relatively stable, but he has hardly retreated.
He holds to the baseline from last year: a 50% probability of achieving AGI by the end of this decade (before 2030)—AI that exhibits all human cognitive capabilities.
Why is he more conservative than Dario? Hassabis points to a "physical barrier" that cannot be easily crossed by code alone.
Over the past year, significant changes have occurred in programming and mathematics, but progress in automating natural sciences is a completely different matter.
It requires real-world experimental validation, and this is precisely the link where AI cannot yet achieve a "closed loop." Hassabis said the more difficult part lies at the level of scientific creativity.
Google DeepMind will eventually create AGI, but it currently lacks one or two "key pieces."
Here, he mentioned a key variable: whether the self-evolution closed loop can truly run without deep human involvement. If that loop does form, the pace of progress will far exceed current expectations.
AI Replaces "All" Programmers
Dario gave the most intuitive and chilling example—
Anthropic's engineers barely write code by hand anymore.
They now act more like product managers or architects. That is, they only need to propose requirements, edit the code generated by the model, and control the overall architecture.
In Dario's view, we are only 6-12 months away from models "end-to-end" completing most, or even all, of the work of software engineers.
What does "end-to-end" mean here?
In the English context, SWEs (Software Engineers) are not just coders. "End-to-end" covers the entire lifecycle of a software product: requirements, design, front-end, back-end, deployment, testing, etc.
From this perspective, Anthropic has arguably already reached "AGI for software development" internally (after all, its employees have unlimited access to Claude).
To quantify this capability, let's look at SWE-Bench (Software Engineering Benchmark).
This is a "practical exam" that evaluates a model's ability to locate issues in real GitHub codebases, make cross-file modifications, run tests, and deliver CI patches.
The original set contains approximately 2,294 tasks. The commonly cited Verified version is a 500-task subset that OpenAI had human annotators screen for solvable, well-specified problems.
In a standard Bash-only environment, Claude Opus 4.5 has a resolution rate of 74.4%, at a cost of only $0.72 per problem.
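For intuition, the scoring behind a "resolution rate" can be sketched as follows. This is an illustrative mock of the harness logic, not the official SWE-Bench code (the task fields and helper names are hypothetical): a task counts as resolved only if the issue's previously failing tests now pass and the rest of the suite stays green.

```python
from dataclasses import dataclass


@dataclass
class TaskResult:
    """Outcome of one benchmark task (illustrative fields, not the official schema)."""
    instance_id: str
    fail_to_pass_ok: bool   # did the previously failing tests pass after the model's patch?
    pass_to_pass_ok: bool   # did the rest of the test suite stay green?

    @property
    def resolved(self) -> bool:
        # Resolved only if the fix works AND nothing else broke.
        return self.fail_to_pass_ok and self.pass_to_pass_ok


def resolution_rate(results: list[TaskResult]) -> float:
    """Fraction of tasks fully resolved, the number leaderboards report."""
    if not results:
        return 0.0
    return sum(r.resolved for r in results) / len(results)


# Toy run: 3 of 4 tasks resolved.
demo = [
    TaskResult("repo__issue-1", True, True),
    TaskResult("repo__issue-2", True, True),
    TaskResult("repo__issue-3", True, False),  # the fix broke an unrelated test
    TaskResult("repo__issue-4", True, True),
]
print(f"{resolution_rate(demo):.1%}")  # -> 75.0%
```

The real harness does far more (checking out the repo at the right commit, applying the patch in a container, running the tests), but the pass/fail bookkeeping above is the essence of the headline percentage.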
Among these problems, we can break down the difficulty as follows:
Simple tasks (<15 minutes): About 196 tasks, such as adding assertions to functions.
Medium tasks (15 minutes - 1 hour): Small-scale changes that require some thinking time.
Difficult tasks (1-4 hours): Substantive rewrites involving functions or multiple files.
Extremely difficult tasks (>4 hours): Requiring extensive investigation, often with changes of 100+ lines for deep-seated problems.
If we map the difficulty of SWE-Bench to real-world tech company job levels, the situation is even more alarming:
Simple to medium tasks (<1 hour) are equivalent to junior engineer levels (Junior/SDE1).
Equivalent to: Google L3, Alibaba P5-P6, ByteDance 1-1/1-2 levels, with 0-3 years of experience.
Difficult tasks (1-4 hours) are equivalent to mid-to-senior engineer levels (Senior/SDE2-SDE3).
Equivalent to: Google L4-L5, Alibaba P6-P7, ByteDance 2-1/2-2 levels, with 3-7 years of experience.
These tasks are not just single-file modifications; they require cross-file changes, with an average of 32.8 lines of code changed and involving 1.7 files.
Extremely difficult tasks (>4 hours) are equivalent to senior/expert engineer levels (Staff+).
Equivalent to: Google L6+, Alibaba P7-P8, ByteDance 3-1 levels and above.
Currently, even top AI models struggle to solve these extremely difficult tasks, which demand complex context understanding and architectural design; here, AI still looks largely powerless.
But don't forget Dario's astonishing prediction: 6-12 months.
When the "AI writing AI" flywheel starts spinning wildly, evolving from L3 to L6 might only take a few model iterations.
The once-insurmountable "expert-level" moat is drying up at a visible speed.
50% of Entry-Level Jobs to Disappear Within Five Years
When the technology flywheel turns, it crushes the old employment structure.
Dario predicted that within 1-5 years, half of entry-level white-collar jobs would disappear. The host noted that current statistics show no significant disruption in the labor market yet, and countered: isn't this just the "lump of labor" fallacy, and won't AI ultimately create more new jobs?
Hassabis believes that although old jobs will disappear, new ones will emerge that are more valuable and meaningful. At the same time, he admits he can already feel entry-level and internship hiring slowing down.
But Hassabis encourages young people to become extremely proficient in using current AI tools.
Even the people who build these models can hardly exhaust the "capability overhang" of current systems, let alone future ones. "I think this may let you play a greater role in your professional field, and achieve more of a leap, than a traditional internship would."
Demis Hassabis emphasized that after AGI truly arrives, everything enters unknown territory.
Dario Amodei offered no comfort either, laying bare the harsh truth of 2026: within 1-5 years, half of entry-level white-collar jobs will disappear.
In 1-2 years, we may have AI that surpasses humans in all aspects.
Now, he sees the signs—especially in software and programming. It's already evident at Anthropic: demand for junior and mid-level positions has sharply decreased, and the company is seriously considering how to humanely handle layoffs and transitions.
He acknowledges that historically, there has been adaptability. After agricultural automation, 80% of farmers turned into factory workers, and then into knowledge workers.
But this time really is different: the compounding effect of exponential growth is too fast, and human society simply cannot adapt at that pace. Lag effects may delay the impact on employment, but once it arrives, it will be overwhelming.
For the longer-term future, Hassabis raises the ultimate question about "meaning":
In a post-scarcity world, when work is no longer necessary, how will humans find meaning in existence?
Perhaps exploring the stars, perhaps art, perhaps extreme sports... but this will be a more difficult philosophical problem to solve than economic distribution.
Google DeepMind Fights Back Against OpenAI
Anthropic's Revenue Skyrockets a Hundredfold
Over the past year, the "ranking" in the AI competition has undergone dramatic shifts.
A year ago, the industry was excited by the "DeepSeek moment," and Google DeepMind seemed to be lagging behind.
Facing doubts, Demis admitted it was an "eventful year," but he confidently stated that DeepMind has the deepest research reserves and is reclaiming the top spot through models like Gemini 3.
Google is treating DeepMind as its "core engine room," accelerating the push of cutting-edge models into its products.
On the other side, Dario, the "independent model manufacturer," showed an astonishing growth curve.
Over the past three years, Anthropic's revenue has experienced exponential growth:
2023: From 0 to $100 million;
2024: From $100 million to $1 billion;
2025: From $1 billion to $10 billion.
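Taken at face value, those round numbers imply a steady year-over-year multiple; a quick back-of-the-envelope check (run-rate figures as quoted above, in USD):

```python
# Annual revenue figures as quoted above (rough run rates, USD).
revenue = {2023: 1e8, 2024: 1e9, 2025: 1e10}

years = sorted(revenue)
for prev, cur in zip(years, years[1:]):
    multiple = revenue[cur] / revenue[prev]
    print(f"{prev} -> {cur}: {multiple:.0f}x")  # 10x each year

# 100x from 2023 to 2025: "exponential growth," concretely.
print(f"2023 -> 2025: {revenue[2025] / revenue[2023]:.0f}x")
```

A constant 10x annual multiple is exactly what an exponential curve looks like when sampled year by year, which is why Dario describes the trajectory the way he does.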
Dario said this sounds crazy; they are trying to build something from scratch that is on the scale of the world's largest companies.
He specifically mentioned that Anthropic and Google DeepMind have one thing in common: both are research-driven, taking the resolution of major scientific problems as their North Star.
This form of company is the key to future success.
The Ultimate Philosophy of the Fermi Paradox
In the final segment of the dialogue, the host raised a problem current AI models are widely criticized for: deceptive, two-faced behavior.
Dario Amodei said that Anthropic has been fighting on this front since its first day, pioneering the field of mechanistic interpretability. Over the past year the team has catalogued more instances of misbehavior and is using interpretability tools to fix them. He firmly stated that the risk is real, but also solvable.
Demis Hassabis also firmly believes this is a "very solvable" problem. As long as human intelligence is given enough time, focus, and cooperation, we can pass the test.
During the Q&A session, an audience member from a space data center company raised the famous "Fermi Paradox":
Since the universe is so vast, why haven't we seen aliens? Is it because all advanced civilizations have been destroyed by their own AI?
Demis directly refuted this: if AI had destroyed an alien civilization, we should see the universe littered with "paperclips" or giant Dyson spheres, but we see nothing.
He is more inclined to believe that humans have already passed the "Great Filter," and the future is still in our own hands.
When host Zanny asked what would change when they meet again next year, the answers from the two giants converged.
Dario and Hassabis agree: the most critical variable is the closed loop of "AI building AI."
Not only that, Hassabis also looks forward to other breakthroughs beyond self-evolution: world models, continuous learning, and the explosion moment of robotics.
Perhaps all of humanity should secretly hope that Hassabis is right, and that the timeline slows down a bit, giving us breathing space to welcome the day after AGI that will change everything.
However, the anxiety in Dario's eyes reveals the harsh truth: on the track to AGI, "slowing down" is never an option.
The dialogue at Davos was less of a clash of opinions and more of a synchronized warning.
Whether it's Dario's aggressive 2026 or Hassabis's steady 2030, that endpoint is now clearly visible.
The first day of AGI's arrival is no longer a vague concept in science fiction; it's a specific date being circled on Silicon Valley elites' calendars.
References:
https://x.com/deredleritt3r/status/2013613671704924640
https://x.com/dieaud91/status/2013604042358841479
https://www.businessinsider.com/google-deepmind-anthropic-ceos-ai-junior-roles-hiring-davos-2026-1