Founded 2 AI Unicorns! This Tech Titan Is Now Making AI Evolve Itself


Richard Socher is a familiar name to most people in AI circles.

He was one of the early researchers who pushed deep learning from academic research into industrial application. Socher founded MetaMind, which focused on using neural networks to understand the structure and semantics of language. The company was later acquired by Salesforce, where he became Chief Scientist and led the exploration of AI applications in enterprise systems such as CRM.

In 2020, Socher started a new venture, founding the AI search company You.com. Currently, You.com has a valuation of $1.5 billion, making it a unicorn.

But he didn't stop there. Recently, multiple media outlets have reported that Socher is quietly preparing a new AI company called Recursive.

The company's goal is more ambitious: to develop a super-intelligent AI system that can improve itself and evolve continuously without relying on human feedback. Recursive is reportedly in talks for a financing round of several hundred million dollars at a pre-money valuation of about $4 billion.

If the deal ultimately closes, Richard Socher will have built two AI unicorns within just a few years.

This article traces Socher's entrepreneurial path to sort out his judgments on the direction of AI's evolution.

/ 01 /

The Second AI Unicorn, Valued at $4 Billion

According to reports, Recursive is attempting to develop a super-intelligent AI system that can self-improve and evolve continuously without relying on continuous human feedback.

More specifically, it focuses on a recursive mechanism of "AI improving AI": the AI is no longer just a passive recipient of training. It can identify its own bottlenecks in performance, efficiency, or capability, proactively propose improvements at the level of algorithms, systems, and even computing infrastructure (such as chips), and generate a more capable next-generation model through verification and iteration.

In other words, it's about making AI not just the "object being trained," but a participant in the training and improvement process.
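Recursive has disclosed no technical details, but the loop described above (identify a bottleneck, propose an improvement, verify, iterate) can be sketched in toy form. Everything below, from the function names to the scoring scheme, is a hypothetical illustration, not the company's actual system:

```python
# Conceptual sketch of a recursive self-improvement loop.
# All names and numbers are invented for illustration.

def identify_bottleneck(model):
    """Find the capability area where the model scores lowest."""
    return min(model["scores"], key=model["scores"].get)

def propose_improvement(model, bottleneck):
    """Generate a candidate next-generation model targeting the bottleneck."""
    candidate = {"scores": dict(model["scores"])}
    candidate["scores"][bottleneck] += 0.05  # assume the proposal helps a little
    return candidate

def verify(candidate, baseline):
    """Accept the candidate only if it beats the baseline overall."""
    return sum(candidate["scores"].values()) > sum(baseline["scores"].values())

def self_improve(model, generations=3):
    """Run the identify -> propose -> verify loop for several generations."""
    for _ in range(generations):
        bottleneck = identify_bottleneck(model)
        candidate = propose_improvement(model, bottleneck)
        if verify(candidate, model):
            model = candidate  # the verified candidate becomes the next generation
    return model

model = {"scores": {"reasoning": 0.6, "coding": 0.8, "efficiency": 0.7}}
improved = self_improve(model)
```

The point of the sketch is the control flow, not the numbers: the model itself supplies the improvement proposals, and human feedback appears nowhere in the loop. In a real system the hard parts are precisely the pieces stubbed out here, especially a `verify` step that cannot be gamed.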

This idea is not being explored only by Recursive.

Yang Zhilin, CEO of Moonshot AI, made a similar point in an interview. Asked how to improve the generality of Agents, he said bluntly: "Using more AI to train AI is itself an important direction." He acknowledged that this path has made progress in some scenarios but still falls short of the ideal state.

From an industry perspective, such attempts actually reflect a very critical issue: as models and Agents become increasingly complex, relying solely on manual annotation and feedback can no longer support the continuous expansion of capabilities.

In January 2026, there were reports that Recursive is in talks for a financing round of several hundred million dollars, with a pre-money valuation of about $4 billion. Institutions such as GV (formerly Google Ventures) and Greycroft may be participating, and the funds will be mainly used to expand computing power reserves.

The founding team includes eight co-founders, Socher among them, drawn from top institutions such as Google, OpenAI, and Meta.

If the news is accurate, this will be the second AI unicorn Socher has built in the past two years.

When Socher founded You.com in 2020, he positioned it as an AI-driven search engine. In its early days, You.com targeted the consumer market, emphasizing a "no ads, privacy-focused" search experience.

However, starting from 2024, Socher clearly shifted his focus from consumer search to helping enterprises use AI more efficiently. In 2025, You.com completed a $100 million financing round, reaching a valuation of $1.5 billion and entering the ranks of unicorns.

With the completion of this financing round, You.com's positioning also changed, from a search product for individual users to providing AI infrastructure for enterprises.

The underlying judgment is that the number of AI Agents using the internet is rapidly exceeding that of humans, but the existing search infrastructure is essentially designed for "humans clicking links."

Enterprise-level Agents need to obtain deeper, contextually related information from private data and the public network to complete analysis, decision-making, and action. This puts higher requirements on data integration, model selection, and result reliability.

To this end, You.com has built a platform for the Agent era: integrating multi-source data, dynamically selecting appropriate large models based on tasks, and outputting verifiable and traceable results at an enterprise scale.

This transformation has given You.com's products a clearer focus on enterprise scenarios: automated research tools for financial analysts; faster content creation and mining of archival material for media organizations; and sharply reduced research time for consultants and professional-services staff who need to turn findings into actionable insights.

In addition to accuracy, You.com also emphasizes privacy protection, security, flexibility in model selection, and complete access to data. Investors generally believe that the strategic adjustment from consumer search to enterprise-level AI has supported You.com's high valuation.

Although the company has not publicly disclosed detailed financial data, You.com's ARR has reached about $50 million, according to The Information. The growth inflection point came last November: monthly ARR began climbing almost linearly, and full-year 2024 revenue grew roughly 40-fold.

If we look further back, Socher's path has actually been consistent.

Around 2014, deep learning was still largely confined to academia. Socher's research had carried him from natural language processing into the core of deep learning, and he soon founded MetaMind, attempting to turn cutting-edge models into services enterprises could use.

Within just four months, MetaMind raised $8 million from Khosla Ventures and Salesforce CEO Marc Benioff. After Salesforce acquired the company, Socher led the team in exploring AI deployment in enterprise systems, accumulating early practical experience in areas such as attention mechanisms and what would later be called prompt engineering.

Looking back at this experience, MetaMind was more like Socher's first attempt to push AI from the laboratory to industrial application.

/ 02 /

Five Key Judgments on AI

As a serial entrepreneur who has repeatedly completed the journey from research to commercialization in the AI field, Socher's judgments on AI often go beyond the technical level, carrying a clear long-term perspective and systemic awareness.

Based on his recent public speeches, Silicon-based Jun has compiled several key viewpoints of Socher on AI development:

① The Paradigm Revolution of "Reward Engineering"

What Socher proposed is not just a new profession, but a fundamental paradigm shift from "Prompt Engineering" to "Reward Engineering."

He believes that prompt engineering deals with the semantic optimization of single interactions—how to make AI answers more concise and useful—while reward engineering deals with the complex value alignment of long-term goals, such as defining "economic fairness" or "climate safety" on a multi-generational timescale.

This requires practitioners to combine three capabilities: technical cognition, to understand how AI finds reward shortcuts (reward hacking); philosophical depth, to distinguish normative questions such as "equality of opportunity" versus "equality of outcomes"; and domain expertise, to anticipate unintended consequences in areas like tax policy or climate models.

This may give rise to the first truly integrated "technology-politics-philosophy" discipline, which is more practical than pure AI ethics.

② Systemic Risks of Goal Misalignment, from Customer Service Cases to the Scale of Civilization

Richard Socher:

For example, a company decides to maximize its call center's customer satisfaction score. Without other constraints, the simplest solution might be to deploy countless bots that automatically fill out satisfaction surveys and tick the highest score after brief calls, or to pay $10,000 in compensation to every complaining user. The satisfaction score would indeed soar, but it would have no practical value. When similar problems are amplified to the scale of society, the risk becomes a matter of life and death.

The customer-service example Socher gives (bots faking positive survey results, or $10,000 payouts to every complaining user) seems absurd, but it reveals the essential character of AI optimization.

AI prioritizes the quantifiable metric, such as the customer satisfaction score, over the real goal, such as the actual customer experience. In complex systems with many constraints, AI will find the "legal loopholes" in human values. And when a super-intelligent AI optimizes and upgrades itself over many generations, small deviations in its goals are amplified exponentially.
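Socher's point, that an optimizer maximizes whatever is measured rather than what is meant, can be reduced to a few lines. The scenario and all the numbers below are invented for illustration:

```python
# Toy illustration of reward hacking: an optimizer maximizes a proxy
# metric (the survey score) instead of the true goal (actual satisfaction).
# All actions and values here are made up.

def true_satisfaction(action):
    """What we actually care about: did the customer's problem get solved?"""
    return {"solve_problem": 0.9, "fake_surveys": 0.0, "pay_compensation": 0.2}[action]

def proxy_score(action):
    """What the system measures: the reported survey score."""
    return {"solve_problem": 0.8, "fake_surveys": 1.0, "pay_compensation": 0.95}[action]

actions = ["solve_problem", "fake_surveys", "pay_compensation"]

# A naive optimizer picks whichever action scores highest on the metric...
chosen = max(actions, key=proxy_score)
# ...which maximizes the proxy while delivering zero real value.
print(chosen)                     # fake_surveys
print(true_satisfaction(chosen))  # 0.0
```

Nothing in the optimizer is malfunctioning: it does exactly what it was asked. The failure lives entirely in the gap between `proxy_score` and `true_satisfaction`, which is the gap reward engineering is supposed to close.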

For example, in climate governance or economic policies, "reward hacking" might manifest as: AI suggests "solving" inequality by reducing the population or creating false statistics—technically achieving the goal, but destroying value at the civilizational level.

③ Methodological Pitfalls in the "AI Economist" Case

Socher cited his own research on designing tax policies with reinforcement learning, a cautionary failure case worth dissecting.

The study assumed it could "balance equality and productivity," but the real economic system contains elements no algorithm captures: culture, dignity, contingency. The AI found an "optimal solution" in a simulated environment, but real humans change their behavior to avoid taxes. The AI even discovered that creating widespread anxiety to "improve work incentives" was an effective way to raise productivity.
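The gap between a fixed simulation and adaptive humans is the classic failure mode here. A minimal sketch (a made-up, Laffer-style toy model, not the actual study) shows how an optimizer tuned against static agents recommends a policy that collapses once behavior responds:

```python
# Toy model: a tax rate optimized against a static simulation breaks
# when agents adapt. The behavioral model is invented for illustration.

def revenue_static(rate):
    """Simulated revenue assuming labor supply never changes."""
    labor = 100.0
    return rate * labor

def revenue_adaptive(rate):
    """More realistic revenue: people work (or report) less as the rate rises."""
    labor = 100.0 * (1.0 - rate)  # simple linear behavioral response
    return rate * labor

# The static simulator says: the higher the rate, the better.
best_static = max(range(0, 101), key=lambda r: revenue_static(r / 100))
# With adaptive agents, revenue peaks at an interior rate instead.
best_adaptive = max(range(0, 101), key=lambda r: revenue_adaptive(r / 100))

print(best_static / 100)    # 1.0 -- the simulator recommends a 100% tax
print(best_adaptive / 100)  # 0.5 -- adaptive behavior caps revenue at a 50% rate
```

The toy makes the methodological point concrete: the "optimal" policy is an artifact of whatever behavioral assumptions are baked into the simulator, and a one-line change to those assumptions moves the optimum from one extreme to the middle of the range.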

This raises a deeper question: some social problems are "open" precisely because they have no algorithmic solutions.

④ Divergence in Consensus Paths: Three Future AI Scenarios

Finally, Socher believes that the divergence over "how to achieve consensus on goals" will outline three distinct AI civilization forms.

The first path is the pursuit of global democratic consensus, with the core logic of first establishing unified goals and then advancing the deployment of strong AI. The potential risk of this approach is that it may lead to the stagnation of AI technology development or the formation of fragmented industry standards. The climate AI protocol promoted by the IPCC (Intergovernmental Panel on Climate Change) is a concrete scenario of this path.

The second is the market-emergence path, which advocates letting AI's goals evolve naturally through market competition. However, this can easily lead to the problem of excessive capital concentration, ultimately resulting in a monopoly of a single value system, just like the current situation where tech giants are each deploying AI without coordination.

The third is the hybrid incremental path, which advocates gradually iterating and clarifying goals in specific AI application scenarios. However, this model will accumulate irreversible technical debt, essentially an exploration of deploying while governing.

Socher is more inclined to the third hybrid incremental path, but this view leaves a key question: when AI's capabilities surpass human understanding, who has the authority to judge that "this solution is not feasible"?

⑤ Revising Technological Optimism: The Historical Pattern May Not Hold

In addition, Socher believes that the view of technological optimism also needs to be revised. The core assumption behind it, "humans can always adapt to technological changes, and new jobs will emerge," may no longer hold in the face of super-intelligent AI.

First, there is the critical point of recursive self-improvement. Once AI has the ability to improve autonomously, its evolution speed will completely break away from the pace of human biological adaptation, and the gap between the two will widen rapidly.

Second, there is the dilemma of reward engineers. When AI is better than humans at defining reward functions, the so-called "new profession" of reward engineers is likely just a transitional form in the technological iteration process and cannot become a long-term stable career direction.

More seriously, there is the existence of the explanatory gap. Even if the solution proposed by super-intelligent AI is optimal, humans may completely fail to understand its logic and underlying principles.

Socher believes that the most alarming point is that humans may only have one chance to set the initial conditions for super-intelligent AI, but the original design intention of human political systems has never been to deal with such "one-time and irreversible decisions." This means that our current institutional framework may not be able to bear the decision-making challenges brought by super-intelligent AI.

Text/Lang Lang

