Top Engineers Will Remain in Short Supply! Google's CEO Reveals Betting Logic: The Outside World Severely Underestimates Google! We Are in an Era of 10x Expansion, AI and Traditional Products Are Not Zero-Sum, Memory is the Core Bottleneck

Editor | Yun Zhao

On April 7th, Google CEO Sundar Pichai gave a rare "dinner table" style interview!

During the interview, the two interviewers, John and Elad Gil, kept pressing Pichai in a spirit of "getting to the bottom of things." Beyond the widely discussed topic of "surpassing OpenAI," they also tackled the issues currently most pressing for the industry.

For instance, Pichai offered a serious judgment: 2026 will become the "Year of Supply Tightness." He broke down several damping factors on the path of AI development: energy, computing power, and approvals, along with the constraint he believes is most urgent to solve: memory.

Another example is a significant trend: Will "search" products still exist in the next decade?

Pichai gave a somewhat surprising answer. He believes that in the future, "Google Search" will evolve into an "Agent Manager." He also shared an important product viewpoint: AI Native products and traditional mobile products are not inherently in a zero-sum competitive relationship.

This is precisely where he believes the outside world has most underestimated Google.

If you view it through a zero-sum lens, the situation seems difficult. But if you continue to innovate and constantly evolve the product, it won't become a zero-sum competition.

We are working on both Search and Gemini simultaneously; they will overlap in some areas and clearly diverge in others.

Additionally, Pichai reviewed the fact that "Google released the Transformer early, yet OpenAI was the first to launch ChatGPT." The reason was not, as some outsiders claim, that "Google is strong in research but weak in products," but rather a long-term balance and trade-off between product and technology.

The reality is, you conduct the research and achieve huge returns as expected, but you cannot invent every single product based on it. That is simply normal.

At the same time, he offered new perspectives on numerous long-term tracks such as space data centers, quantum computing, autonomous driving, and robotics.

He admitted that betting on "space data centers" now is like contemplating an investment in Waymo in 2010; it requires a 20-year perspective and is itself an extremely complex issue. He even candidly stated that he wished they had increased investment in Waymo earlier.

Regarding quantum computing, Pichai pointed out that intuitively, the core value of quantum computing lies in better simulating nature. "Since nature itself is a quantum system, using a quantum system to simulate it offers distinct advantages."

There are many complex systems in reality where quantum computing will show its strengths. For example, the Haber process used in fertilizer production is still not fully understood, and weather simulation and broader real-world simulation are similar cases.

Once a technology scales, human creativity will find applications on top of it.

Here, Pichai also provided his own logic for judging betting tracks:

Make deep technology bets early. The decision-making method is not as profound as outsiders imagine; it only involves two things: Intuition + Long-term Value.

You think about the Total Addressable Market (TAM) and potential value of a project 5 to 10 years down the line, push the growth assumptions to a very aggressive level, and then judge whether the investment is reasonable.

Simultaneously, another highlight of this interview is that Pichai shared many vivid internal details of Google, something he rarely does.

For example, Pichai mentioned that his first "AGI moment" was not recent, but the demonstration of neural networks recognizing cats in 2012. Recently, when writing code, he doesn't even care what language it is; he just watches the Agent complete the task. This "magical" experience is already happening.

In terms of work methods, he has two very practical habits: First, forcing himself to be a "power user," such as engaging in AI conversation for 30 consecutive minutes. Second, looking directly at the rawest user feedback rather than relying solely on reports and data.

More notably, he has started using AI agents to help with internal analysis, such as automatically summarizing product pros and cons. This seems to imply that the role of the CEO is also undergoing significant changes in the context of Agents.

Another point that must be mentioned is that Pichai denied a common industry comparison: comparing "token costs" with "engineer salaries." He pointed out that the premise of this logic is flawed because everyone overlooks the fact that once AI supply meets demand, the software market is highly likely to expand by 10 times.

"You must know that excellent software engineers have always been in short supply. Once supply increases, the market itself could expand 10-fold."

So, the implication is that the SaaS industry is not dying!

Below is the latest interview content from Google's CEO, carefully organized for everyone. Enjoy!

Google was 9 months later than OpenAI,

but it's not that they failed to turn Transformers into products

Host: Sundar Pichai has just completed his tenth year as Google's CEO. Today, Alphabet is not only one of the world's largest tech companies but also a leader in the AI race, planning to invest $175 billion in capital expenditures in 2026. Cheers.

When talking about Google and AI, a piece of history often brought up is: The Transformer was invented at Google, but later the productization happened mostly outside of Google, especially products like ChatGPT. How do you view this now?

Sundar Pichai: I think there are quite a few misunderstandings here. The emergence of the Transformer was accomplished against the backdrop of massive TPU deployment; to some extent, it was to solve specific product needs. For example, the team was thinking about how to improve translation.

Another example is TPU. When speech recognition became usable, but you suddenly needed to serve it to 2 billion people, there simply weren't enough chips; you had to solve the inference problem.

Although the Transformer came from the research team, it was driven by product problems and was almost immediately put to use. Many people underestimated the impact of models like BERT and MUM because our metrics for search quality are extremely strict. During that period, some of the largest improvements in search quality came precisely from BERT and MUM.

We built the Transformer and immediately applied it to search to improve language understanding, including understanding web content and user queries, while continuing to build stronger models. At the same time, we were pushing for productization internally, such as a team working on something called LaMDA.

Obviously, we were not the first to bring such products to market. But the issue isn't that we "just did research without moving to products." The reality is, you do the research and get huge returns as expected, but you can't invent every product based on it. That's normal.

I can even go a step further: we actually conceptualized a product like ChatGPT—that was LaMDA. You might remember, at the time, an engineer thought it had "consciousness." You could understand it as an early version of ChatGPT, used for internal conversations.

In a sense, we already had this product version in "another universe." Google was perhaps about nine months late in releasing something similar.

Actually, at Google I/O 2022, we launched the AI Test Kitchen, which was essentially LaMDA. But we put restrictions on it because, at the time, we didn't have a complete version tuned with RLHF. The version I saw was toxic in some aspects; it simply couldn't be released directly at that point.

Additionally, as a company that has long prioritized search quality, we have a higher threshold for "what product quality is acceptable to go online." But this doesn't mean we weren't thinking about how to launch it.

Consumer Internet Norm: Occasionally, a few key product moments emerge

But the iPhone won't be born in a garage

Sundar Pichai:

I also want to add: Even when OpenAI released ChatGPT, their collaboration with Microsoft was only a few months earlier. In hindsight, everything seems clear, but it wasn't that obvious at the time. They also had some luck in programming scenarios, such as seeing more obvious leap signals thanks to GitHub.

Perhaps we did miss some signals. In programming scenarios, you can see the "step-change improvement" in capabilities more clearly compared to pure language scenarios. For example, the improvement from GPT-2 to GPT-3, and then to GPT-4, was more perceptible in programming.

Returning to your question, I don't think this matter is just about "research not being converted into products"; it is the result of multiple factors working together.

I remember talking to some people involved in the ChatGPT project; they released it during Thanksgiving week, and at the time, it felt almost like a "quiet launch." It wasn't that kind of heavy-hitting release where "this will be our core product in the future." Later, its explosion was actually a surprise.

But from my perspective, if you are in the consumer internet, these "accidental moments" are bound to happen.

Back when Elad and I were at Google, there was Google Video Search, and then YouTube appeared. The result was that we acquired YouTube. Or take another example: if you were at Facebook, Instagram appeared, and Facebook eventually acquired it.

When these things happen, people don't view them dramatically because the end result is an acquisition.

But the cognition I've formed is: In the world of consumer internet, there will always be a few people building various prototypes and trying countless ideas. I'm not diminishing any achievements, just saying that these "suddenly emerging key product moments" will inevitably recur.

You won't see someone suddenly build a better iPhone in a garage, but the consumer internet does not work like hardware. You need to accept this and build this understanding.

Core Characteristic of Excellent Products: Speed

Host: When I look at the AI race in 2026, one thing is obvious: Google has always taken "speed" as a differentiated advantage. The earliest Google Search was very fast, even displaying the query time directly. Later came Gmail's fast search and Chrome's speed advantage. Now, as I use various AI products, Gemini running on TPU is still very fast.

I'm curious, is this a clear product strategy, or something more complex?

Sundar Pichai: I have always regarded "speed," or "latency," as one of the core characteristics of an excellent product. It usually also reflects whether the underlying technology of the product is solid.

Of course, there is another kind of "speed" that is equally important: the speed of product release and iteration. Both are critical.

But if we only talk about latency, it's not that simple. Because you are constantly adding capabilities, and the boundaries of capability are continuously advancing, you must balance capability and speed, which makes it complex.

For example, in the search team, many sub-teams now have strict latency budgets, precise to the millisecond. If you ship a feature that saves the system 3 milliseconds, you might be credited only 1.5 milliseconds of budget, with the rest returned to users as lower latency.

Depending on the task, some teams may have a 30-millisecond budget, others 10 milliseconds. You can use these budgets, but you must pass strict reviews. This is how highly we value this matter.
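The budget mechanism Pichai describes can be sketched as a simple ledger. The class name, the credit ratio, and all numbers below are illustrative assumptions for this sketch, not Google's actual system:

```python
# Illustrative sketch of a per-team latency budget ledger, loosely modeled on
# the scheme Pichai describes: a team that lands a 3 ms saving is credited
# only part of it, with the rest "returned to the user" as lower latency.
# All names and numbers here are hypothetical.

CREDIT_RATIO = 0.5  # fraction of a measured saving the team may re-spend (assumed)

class LatencyBudget:
    def __init__(self, team: str, budget_ms: float):
        self.team = team
        self.budget_ms = budget_ms  # remaining milliseconds the team may spend

    def record_saving(self, saved_ms: float) -> None:
        """Credit only part of a measured saving back to the team."""
        self.budget_ms += saved_ms * CREDIT_RATIO

    def spend(self, cost_ms: float) -> bool:
        """Approve a new feature only if it fits within the remaining budget."""
        if cost_ms > self.budget_ms:
            return False  # feature must be optimized before it can launch
        self.budget_ms -= cost_ms
        return True

budget = LatencyBudget("ranking", budget_ms=10.0)
budget.record_saving(3.0)   # team saved 3 ms -> credited 1.5 ms, budget is 11.5 ms
print(budget.spend(30.0))   # too expensive for the budget: False
print(budget.spend(5.0))    # fits: True, 6.5 ms remains
```

The asymmetry is the point of the design: every saving partly accrues to users, so the product gets faster over time even as teams keep shipping features.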

From a human perception perspective, differences can be detected within a range of a few hundred milliseconds.

If you look at the data, over the past five years, we have reduced search latency by about 30%. Meanwhile, features have continued to increase.

This is also why in Gemini, we attach great importance to the balance between "capability and speed." For example, the Flash model can achieve about 90% of the Pro model's capability but is faster and has lower deployment costs, plus the advantages brought by vertical integration.

Will Google Search still exist in ten years?
Pichai: It will become an Agent Manager in the future

Host: So how do you view the future of search? Nowadays, many people believe that "conversation" will become the new interaction method. Google has already introduced Gemini or AI results into search. But some are also discussing agents, suggesting that in the future, everyone will have a personal agent that directly completes tasks for them, rather than inputting queries.

What do you think? Is the future of search a distribution mechanism, a product itself, or just one of many ways humans interact with the world?

Sundar Pichai: I think a core characteristic of search is that every technological shift makes it more powerful. We need to constantly absorb these new capabilities and push the product boundaries forward.

For example, in the mobile era, products evolved rapidly. When you walk out of a New York subway station, you aren't looking for a webpage; you want to go somewhere—you need "how to get there." User expectations are always changing, and you must constantly adjust accordingly. Looking ahead, many search requests that were originally just about "getting information" will become agent-ized processes in the future. You aren't looking for an answer; you are completing a task, with multiple threads running simultaneously.

Host: So will search still exist in 10 years? Or will it turn into something else?

Sundar Pichai: It will continue to evolve. Search might become an "agent manager," where you complete various things.

To some extent, when I use some systems now, I am already letting a group of agents handle tasks for me. I can imagine search evolving into a similar form, helping you get things truly done.

I understand the core of your question is: If you view search as an input box of no more than one line that returns a bunch of sorted results, will this product form still exist?

But today, in search's AI mode, people are already doing deep research-type queries, which actually no longer fits the definition you mentioned. But users will adapt. I believe that in the future, people will execute long-cycle tasks, and these tasks might be asynchronous.

Life started as single-celled organisms and has now become complex life forms. You can understand this question as: Will that early paradigm disappear?

Essentially, the past "search" will become an agent, and the future interaction interface itself will be an agent. In another 10 years or more, forms like the search box might cease to exist. Device forms will change, and input/output methods will undergo fundamental changes.

But if you always think about things 10 years down the line, it's easy to get stuck. We are currently in a very special stage; you only need to look at the changes one year from now, and they are already steep and exciting enough.

In the past, you might need to plan products for five years later, but now, models will change drastically in just one year. Moving along this curve itself is a very interesting thing.

AI vs. Existing Products: Not a Zero-Sum Competition

Sundar Pichai: I think this is an era of expansion. Many people underestimate this, always thinking it's a zero-sum competition. But in my view, it is far from it. The value people can create is growing along a very exaggerated curve. Once you look at the problem this way, many questions will have different answers. For example, YouTube continued to develop well after the emergence of TikTok and Instagram; there are many such examples.

If you view it through a zero-sum lens, the situation seems difficult. But if you continue to innovate and constantly evolve the product, it won't become a zero-sum competition.

We are working on both Search and Gemini simultaneously; they will overlap in some areas and clearly diverge in others. I think having both paths is a good thing, and we should embrace this state.

One year ago, the market's misjudgment of Google:

The entire market will expand 10x, it's not a zero-sum game

Host: What impressed me deeply is that about a year ago, in the spring and summer of 2025, market sentiment towards Google was very pessimistic.

The mainstream view was: Search is done, the core business model is shaken, and the company will struggle. At that time, Google's stock price was around $150. Looking back now, that judgment seems a bit ridiculous. After all, Google has a complete layout across the entire technology stack, from applications and models to TPUs, as well as various business layouts like Waymo and YouTube.

What do you think investors misjudged at that time?

Sundar Pichai: The discussion at that time was actually highly concentrated on one point. But for me, at that moment, it was clear: the entire "Overton window" of discussion had changed, and the company itself was prepared for this change.

This "vertical integration" was not accidental. We have already reached the seventh generation of TPU. I remember announcing the TPU at Google I/O 2016 and starting to talk about "building AI data centers."

That was back in 2016. The company has been operating in an "AI-first" manner for a long time; our understanding of this transition is very deep. From the perspective of frontier large models, we did lag behind a bit at that time.

But we already possessed all the capabilities internally; the key was to execute well. What excites me is that, from a full-stack perspective, we have research teams, infrastructure teams, platform capabilities, and are continuously investing in multiple businesses.

Suddenly, you realize: there is a general-purpose technology that can accelerate all these businesses simultaneously. From Search to YouTube, to Cloud, and then to Waymo, all rely on the progress of this technology.

This is an extremely leveraged way of growth. I never viewed that moment as a zero-sum competition. I am more inclined to believe that everything will expand 10 times, and there will still be room for other participants. Looking back at history, Amazon developed very well after Google appeared, and so did Facebook.

We often underestimate the space for overall growth. Of course, the company needs to execute better; that is the key.

Responding to Gemini surpassing OpenAI:

It's normal for two or three labs to compete with some ahead and some behind

Host: Was there a specific moment that made the outside world realize "Google is actually fine"? For example, Gemini 3?

Sundar Pichai: I don't pay too much attention to specific timelines, but what really made people perceive the change might have been Gemini 2.5, especially entering the frontier in multimodal capabilities.

This cannot be separated from the efforts of the Google DeepMind team. Right from the start, we paid a higher cost to design Gemini as a native multimodal model.

Later, in some scenarios, this advantage began to show, such as cases like "Nano Banana," allowing people to intuitively feel the integration of capabilities.

But this field changes very quickly. Now, there are probably two or three labs competing intensely with each other. You might feel "we are doing very well" one month, and the next month discover "there are still some areas where we are left behind." The entire frontier will continue to change dynamically; this is inherently the state this field should be in.

Responding to doubts: Is Google's belief in AGI not strong enough?

Host: I've talked to some researchers (not at Google), and they have a consensus: they feel the difference between Google and the other two or three labs is that Google isn't as "AGI-pilled."

In other words, the belief that AGI is coming soon isn't as strong. How do you see this? Won't this affect your judgment of the future, and thus affect what you are building?

Sundar Pichai: We increased our capital expenditure from $30 billion to nearly $180 billion; this is real money. You wouldn't make such an investment if you didn't believe in this curve.

I think this is more of a semantic difference. Perhaps because we are a larger company with many products serving a large number of users at different levels, our expression style is different.

But if you look at the founding team, they are actually very "AGI-pilled." Some of my earliest conversations already reflected this. Saying that Google doesn't understand AGI, or that people like Demis Hassabis and Jeff Dean don't understand, is untenable. At one point, Demis, Jeff, Ilya Sutskever, and Dario Amodei were all in the same system.

Sometimes I feel this kind of statement is like asking: "Have you really been paying attention for the past 20 years?"

Of course, some differences do exist. For instance, young companies, pure research institutions, or headquarters located in San Francisco—these factors will bring stylistic differences. But at a fundamental level, regarding the understanding of the technology development curve and how to internalize this technology, there is no essential difference.

Inside the company, there is also a group of people who have always been at the forefront, continuously experimenting with agents, seeing how they acquire skills and complete tasks. If you look back three months ago and now, their capabilities have changed significantly.

The "AGI Moment" in the Eyes of Google's CEO

Host: We are actually experiencing this exponential change firsthand. On one hand, we can review Google's history; on the other, I saw someone post on Twitter saying that to understand what's happening in Silicon Valley now, you have to realize—every tech company executive is now a bit "AI-crazy," spending huge amounts of time writing code, talking to AI, and so on. I think this statement is quite interesting and not entirely a joke.

I'm curious, during this period, what were your moments of "feeling AGI"? Or, how "AI-obsessed" are you now?

Sundar Pichai: The first time I had that feeling of an "AGI moment" was in 2012, when Jeff Dean demonstrated the earliest version of Google Brain, which was the moment the neural network recognized a cat.

Later, Larry Page and I went to see the DARPA Grand Challenge, around 2014, watching autonomous vehicles running in it.

Then, Demis Hassabis showed early models that already possessed a certain kind of "imagination." There were many such moments, so the fact that technology is continuously progressing has always been very clear.

If we talk about this more "intuitive feeling" now, I think the closest is: when you are writing code, give it a complex task, and you don't even need to open an IDE, but rather in an agent-managed environment, watching it complete the task, that feeling is very strong.

You could call that moment "feeling AGI." Such moments do occur.

I recently did a small project, and halfway through, I even thought: "What programming language is it actually using?" This was a detail I asked about only after the whole system was running. This experience feels a bit like magic.

What is truly surprising is actually the slope of this curve. You are advancing on many different paradigms simultaneously, and it is obvious that progress will continue in the future.

CEOs also often "eat their own dog food":

Forcing oneself to use as a "power user"

Host: Speaking of this "intuitive feeling," I think there is a key question for tech companies: How does the CEO maintain a connection with product experience and real users?

Because tech products are too abstract, it's easy to manage only through team reports, PPTs, and data sheets.

For example, Tony Xu (Co-founder of DoorDash) would personally work as a food delivery rider to maintain perception of the product experience. Internally, we also do things like having a "walk the store" segment in weekly all-hands meetings, ordering through the product interface together, and complaining about "why is this pop-up here?" or "this feels weird," letting everyone truly use the product.

How do you do it at Google? Besides using products like Gmail every day, how do you ensure you are really close to the user experience?

Sundar Pichai: I use internal versions, truly "eating our own dog food." I specifically set aside time to concentrate on using these products; this is very important.

For example, two weeks ago, while stretching at the gym, I opened Gemini Live on my phone and chatted on a single topic for 30 consecutive minutes. You need to proactively do these things. Some experiences are good, some are frustrating, but you learn a lot. I force myself to use products as a "power user" to maintain this connection.

Additionally, X (formerly Twitter) is also very helpful because you can see the most direct user feedback. For example, someone says "Thank you for fixing the Google Calendar issue, it's great," and at the same time, you realize there are other issues that need fixing.

These raw comments are very valuable; I go to look at them directly. Another very helpful way is that I use internal tools to make queries. For example, asking in the internal version of Antigravity: "How is the feedback on this feature we just released? What are the top five worst points? What are the top five best points?"

Now AI agents help me organize this directly. This has indeed made my work more efficient. Previously, I had to spend a lot of time piecing this information together; now agents can help me complete this task.

Of course, there is also a new question here: How much time should I spend experiencing it personally, and how much time should I rely on these tools? I am also adapting to this process.

Comparing token costs with engineer salaries

is the wrong perspective

Host: You mentioned two points earlier: First, this is not a zero-sum competition; second, productivity will significantly improve. But looking back at past waves of technology, such as the internet, mobile internet, and SaaS, they took a long time to reflect in GDP. In this AI wave, we have already seen the pull on GDP from data center construction.

If we look 3 to 5 years into the future, do you think AI will make the US economy bigger? By how much?

Sundar Pichai: These investments must eventually yield returns. I remember about two and a half years ago, someone (maybe Sequoia Capital) wrote an article saying that the current investment scale and returns were mismatched.

Since then, the investment scale may have expanded 10 times. At some point, these two will definitely realign. One thing is very clear: supply is currently constrained. We see strong demand across all application scenarios.

I have no doubt that this is a huge market opportunity. And there are many underestimated aspects. For example, people often discuss software engineering budgets, comparing token costs with engineer salaries. But the reality is, excellent software engineers have always been in short supply. Once supply increases, the market itself could expand 10 times.

In other words, the scale of the software development market is far larger than what everyone imagines now; using "token vs. engineer" to measure is a wrong perspective. I think it will drive growth in many fields together.
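Pichai's objection can be made concrete with back-of-the-envelope arithmetic. Every figure below (token price, salary, task volume, the 10x multiplier) is an illustrative assumption, not a number from the interview:

```python
# Back-of-the-envelope sketch of why "token cost vs. engineer salary" misses
# the market-expansion point. All numbers are illustrative assumptions.

engineer_cost_per_year = 300_000      # fully loaded cost in USD (assumed)
tokens_per_task = 2_000_000           # tokens consumed by one coding task (assumed)
usd_per_million_tokens = 10.0         # blended inference price (assumed)
tasks_per_year = 500                  # tasks per engineer-equivalent per year (assumed)

ai_cost_per_year = tasks_per_year * tokens_per_task / 1e6 * usd_per_million_tokens
naive_saving = engineer_cost_per_year - ai_cost_per_year

# The naive comparison stops at the per-seat saving. Pichai's point: once
# supply is cheap and abundant, total demand for software expands.
market_today = 1.0                    # normalized size of today's market
market_after = 10.0                   # assumed 10x expansion once supply meets demand

print(f"AI cost per engineer-equivalent: ${ai_cost_per_year:,.0f}")
print(f"Naive per-seat saving:           ${naive_saving:,.0f}")
print(f"Work done grows {market_after / market_today:.0f}x, so total spend "
      f"can rise even as unit cost collapses")
```

Under these assumed numbers the per-seat comparison looks like a simple cost takeout, but multiplying the whole market by ten swamps it, which is exactly the scenario Pichai says the comparison ignores.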

Host: How big do you think this growth will be approximately?

Sundar Pichai: If we look back at the internet, its reflection in GDP data actually cannot fully capture the changes we truly felt. One could even say that without the internet, GDP growth might have been negative. There are also factors like "consumer surplus" that are hard to quantify. Looking forward, it is actually very difficult to judge precisely.

I think there are natural "damping mechanisms" in society. For example, the speed of computing power construction and the speed of model capability improvement are themselves two different curves, which has formed a constraint.

Another example is how technology diffuses in society. We can see this with Waymo. Even if autonomous driving is safer than humans, you must advance the landing pace very cautiously.

How technology is responsibly introduced into society is itself a limiting factor. These layered constraints will all affect the final result.

But what is certain is that the US economy is much larger than it was 10 years ago. If AI can increase the growth rate by even 0.5 percentage points, that is a huge increment. I expect development in this direction.
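The scale of "even 0.5 percentage points" is easy to put in dollar terms. The GDP figure below is an approximate assumption (roughly USD 29 trillion nominal), not a number from the interview:

```python
# Rough dollar scale of 0.5 percentage points of extra US growth.
# The GDP figure is an approximate assumption (~USD 29 trillion nominal).

us_gdp_usd = 29e12      # assumed nominal US GDP
extra_growth = 0.005    # 0.5 percentage points of additional annual growth

increment = us_gdp_usd * extra_growth
print(f"~${increment / 1e9:.0f}B of additional output per year")
```

Under that assumption the increment is on the order of $145 billion of extra output every year, which is why Pichai calls even half a point "a huge increment."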

Bottlenecks in AI Development: Supply Constrained

Currently, the main bottleneck is memory production capacity

Host: You just mentioned "supply constrained," which I think is a very key characteristic of 2026. You previously said capital expenditure would be between $175 billion and $185 billion, roughly $180 billion.

Interestingly, even if Google wanted to spend $400 billion, it couldn't spend it—because there isn't enough memory, not enough electricity, not enough of various components, and not even enough electricians. Can you systematically talk about these bottlenecks?

Sundar Pichai: Fundamentally, you still have to return to basic constraints like wafer production capacity. For example, wafer starts are a hard limit. In comparison, power and energy issues are easier to solve.

Approval processes and the regulatory environment will become another constraint, affecting the speed at which you advance projects. Even in places like Texas, Nevada, and Montana that encourage development, where there is a lot of land, the overall advancement speed is still limited.

I think we have made great progress, but for the US, this is still a very critical issue. You will see that China's construction speed is very fast; this is impressive.

We indeed need to learn to build faster. We even need to change our mindset, to think about "how to increase the construction speed of the physical world by 10 times."

However, I am also worried that resistance will appear here. Things won't be solved just by a few people deciding "we want to build faster," such as data center restriction policies.

So to summarize, the main bottlenecks include: wafer production capacity, approval capabilities, and overall execution speed. The government is actually working to improve these issues; everyone realizes we must do better.

Further down is key components in the supply chain, such as memory. In the short term, we are indeed constrained, but the entire industry will respond.

Host: So is memory the bottleneck you are most concerned about?

Sundar Pichai: Memory is indeed one of the most critical components at present.

Host: Then in the short term, will this be solved through price increases and capacity expansion?

Sundar Pichai: Leading memory manufacturers cannot significantly increase capacity in a short time. So there will be constraints in the short term, but they will gradually ease over time. At the same time, these limitations will also drive innovation; for example, we will increase system efficiency by 30 times. These things are happening simultaneously.

Memory will be like computing power: whoever has more has the advantage

But there are exceptions: Gemma 4 can even fit into a USB drive

Host: Will this lead to an oligopoly market structure? Because now it's very much like a game of "musical chairs" for computing power—whoever has the computing power leads.

If everyone can only obtain resources proportionally, then it actually limits the gap between each other. Is this judgment reasonable?

Sundar Pichai: I think this is a reasonable analytical framework. But there are also some factors that will break this judgment. For example, we just released Gemma 4, which is a very excellent open-source model.

Some Chinese models are also very strong, but outside of China, Gemma 4 is a very competitive open-source solution. Interestingly, the gap between frontier models and Gemma 4 seems large in terms of "capability," but not that far in terms of "time." Gemma 4 is based on the Gemini 3 architecture.

What's even more wonderful is that such a model is essentially a set of weight files that can fit on a USB drive.

Host: Indeed, that's crazy.

Sundar Pichai: Yes. You might run a data center for several months, and ultimately what comes out is a "flat file," just like a Word document; that is the model itself. This characteristic makes the whole problem very interesting and makes us rethink whether these frameworks hold.
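The "flat file on a USB drive" point can be made concrete with rough arithmetic: a model's on-disk size is approximately its parameter count times bytes per weight. The numbers below are purely illustrative, not official figures for Gemma 4 or any released model.

```python
def model_size_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Approximate on-disk size of a weights file in gigabytes."""
    return params_billions * 1e9 * bytes_per_weight / 1e9

# A hypothetical 27B-parameter model at 4-bit quantization (~0.5 bytes/weight):
print(model_size_gb(27, 0.5))  # 13.5 GB -> fits on an ordinary USB drive

# The same model at 16-bit precision is still just one larger flat file:
print(model_size_gb(27, 2.0))  # 54.0 GB
```

Months of data center time compress into a single artifact of this size, which is why the economics of distribution look so different from the economics of training.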

In the inference stage, your judgment just now is reasonable. But the entire industry is also trying to find ways to break through these constraints through capital incentives.

However, the reality is that memory supply in 2026 or 2027 cannot be solved immediately by capital alone. During this period, we might see greater differentiation between models.

But at the same time, wafer production capacity is increasing, and data center approvals are advancing, so these constraints are not as absolute as they appear. You must look at all elements together, including capital.

Host: The current situation is a bit like the "Strait of Hormuz"; even if oil prices are high, if 20 million barrels of supply are missing daily, demand of the same scale must be eliminated. Memory is similar; someone will always fail to get the resources they want.

Constraints will stimulate creativity

Sundar Pichai: Indeed. And there are other constraints, such as safety. These models are likely to "break" a large number of existing software systems. It might even be happening now.

Host: Do you mean all software? Systems like SSH have been attacked for many years.

Sundar Pichai: I refer to broader software platforms, such as the number of zero-day vulnerabilities. These system-level constraints cannot be ignored. I heard that the price of zero-day vulnerabilities on the black market is dropping because AI has increased supply; this is actually a very interesting market signal.

Host: That does sound quite reasonable.

Sundar Pichai: How technology diffuses in society will bring various impacts, and there may also be some "hidden constraints" and systemic shocks. But even so, I still believe the future space is huge. To some extent, these constraints are even beneficial. Constraints stimulate creativity, forcing the system into a "compression cycle" and improving efficiency.

At the same time, it will also force some important discussions that wouldn't have happened otherwise. For example, on safety issues, we obviously need more coordination, which hasn't been achieved yet.

In the future, there may be some kind of "critical moment" where these problems erupt all at once. These are unavoidable.

Those "outrageously long-term" projects Google is investing in:

Autonomous driving, quantum computing, robotics, real-world simulation, drug discovery

Host: Speaking of this, Google actually possesses a very strong portfolio of assets. You have both self-developed and invested ones. For example, you hold shares in SpaceX, have invested in Anthropic, and own the majority stake in Waymo.

Internally, there is also a large accumulation of technology, such as Transformer, TPU, quantum computing, etc. Besides these, are there any "hidden treasures" that the outside world underestimates but might have a huge impact in the future?

Sundar Pichai: We have always been doing one thing: continuously advancing those long-term projects that looked a bit "outrageous" when first released. For example, we are now in a very early stage, starting to think about "space data centers." You just mentioned "constraints stimulate creativity"; this is a typical example. If you look at it with a 20-year perspective, where these data centers should be built is itself an extremely complex issue.

These are the projects we are thinking about today; they are like Waymo in 2010. Quantum computing is also one of them; we are advancing very firmly, and I am very excited about this.

Host: In which fields do you think quantum computing will have the greatest impact? Nowadays, people mainly talk about molecular modeling and cryptography, as well as quantum-safe encryption.

But in molecular modeling, deep learning models are actually already very strong, such as what you did with AlphaFold. Do you think quantum computing will really bring changes?

Sundar Pichai: From an abstract level, the core value of quantum computing lies in better simulating nature. Since nature itself is a quantum system, using a quantum system to simulate it offers advantages. Of course, it is also possible that we will achieve similar effects on certain problems through classical computing, or through compression and abstraction.

But I intuitively believe that quantum computing will have an advantage here. For example, we still do not fully understand the Haber process in fertilizer production; there are many such complex systems.

I am more inclined to think that in fields like weather simulation and real-world simulation, quantum computing will have an advantage. But the law of technological development is: once a technology scales, human creativity will find applications on top of it.

I often give an example: smartphones + GPS ultimately gave birth to Uber. When people were making phones back then, no one would have predicted this result.

So I believe that if quantum computing becomes truly usable, it will bring a large number of applications.

Host: Just now you were talking about some of Google's "long-term projects."

Sundar Pichai: Yes. Google DeepMind is now deeply invested in the direction of robotics.

Actually, we were a bit "too early" in this field once, about 10 to 15 years ago; many ideas lacked a key element, and that element was AI. Now, with Gemini-based robot models, capabilities in spatial reasoning and other areas have reached SOTA levels.

We are also re-collaborating with companies like Boston Dynamics and Agility Robotics to advance progress in a more determined way. At the same time, there are also many very excellent startups externally that we are investing in.

For example, the space data center I just mentioned, and drone delivery through Wing. We are expanding the scale of Wing; in the not-too-distant future, about 40 million Americans will be able to use Wing's delivery service.

The characteristic of these projects is: long-term, continuous, compound-interest-style advancement. We are committed for the long haul.

There is also Isomorphic Labs; this direction is also very exciting. Its idea is to use models to optimize every link in drug discovery. Even if there are still long-cycle processes like Phase III clinical trials afterwards, you can reach that step with a higher probability of success. Compared to only focusing on molecular design, this method covering a more complete process is smarter.

Google's Capital Betting Method:

Make deep technology bets early

Host: I want to change the question. I am curious about how Google does "capital allocation." So-called good capital allocation is putting resources into the "highest value uses," which is essentially a judgment on opportunity cost.

In business school examples, like Boeing, you can choose to bid for a defense contract or develop a brand new civil aircraft, then compare IRR and choose the higher one.

But at Google, these projects differ too much. For example, you can give YouTube more budget to optimize recommendation algorithms, thereby increasing user duration and monetization; you can also invest in Waymo to make it commercialize faster; or invest in an AI technology that might only pay off in five years.

When the nature and return curves of these projects are completely different, how do you compare them and make decisions?

Sundar Pichai: This is a very good question, and it is more obvious now than ever, especially in TPU resource allocation. Even projects like Waymo need TPUs, which makes the problem more prominent.

By the way, I very much look forward to AI helping with this matter. As long as we can connect the data, models actually already possess the capability to participate in such decisions. But the current bottleneck is that the data hasn't fully flowed yet.

Historically, one of Google's advantages is: we often make decisions early in the cycle.

This is related to our "deep technology-oriented" gene.

In the very early stages, the scale of investment you make is actually not large, but once the direction is confirmed, you will persist for a long time, while ensuring that the underlying technology continues to advance.

For example, with quantum computing, we will set some key indicators, such as the error rate of logical qubits, stability thresholds, and whether goals can be reached at specific time points, using these to evaluate progress.

One point we have always persisted in is: making deep technology bets early. In continuous decision-making, I tend to use a method of "intuition + long-term value" to look at problems. You think about the Total Addressable Market (TAM) and potential value of a project 5 to 10 years down the line, push the growth assumptions to a very aggressive level, and then judge whether the investment is reasonable.
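The heuristic Pichai describes, projecting a market 5 to 10 years out under aggressive growth assumptions and then sanity-checking the investment against it, can be sketched as simple compounding arithmetic. Everything here (function names, market sizes, growth rates, capture share) is a hypothetical illustration, not a description of Google's actual process.

```python
def future_value(tam_today: float, growth_rate: float, years: int) -> float:
    """Project today's addressable market forward at a compound growth rate."""
    return tam_today * (1 + growth_rate) ** years

def looks_reasonable(investment: float, tam_today: float,
                     growth_rate: float, years: int,
                     capture_share: float) -> bool:
    """Crude sanity check: does the value you'd capture under aggressive
    growth assumptions exceed the cumulative investment?"""
    captured = future_value(tam_today, growth_rate, years) * capture_share
    return captured > investment

# Hypothetical: a $10B market growing 40%/yr for 8 years, capturing 5%,
# weighed against $5B of cumulative investment.
print(looks_reasonable(5e9, 10e9, 0.40, 8, 0.05))  # True
```

The point of the sketch is the shape of the reasoning, not the numbers: at deep-technology growth rates, even a small capture share of the projected market can dominate the early investment, which is why "bet early, persist long" can be rational.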

TPU is a good example; we have been continuously investing. Waymo is also an example; when the outside world started to become pessimistic two or three years ago and many people withdrew, we instead increased our investment.

The experience of this product is quite "magical." I now use Waymo for my commute to and from work whenever I can.

How does Waymo quantify underlying technology?

Host: Waymo actually reflects this question well. Google also cuts projects, such as closing some X Lab projects. But Waymo, from demonstration to true commercialization, experienced a very long time, and you never gave up.

What did you see back then? Was this a qualitative judgment or a quantitative one? Why give up Loon but persist with Waymo?

Sundar Pichai: The key lies in the quantifiable progress of the underlying technology. Taking Waymo as an example, we focus on the capabilities of the autonomous driving system itself, such as the improvement in safety and reliability. This is a long-term task; you need to continuously evaluate how safe the system is and whether it is making progress.

You need to look along that technology curve, set goals, predict progress, and constantly evaluate actual performance. I think the team has done an excellent job in this regard. There are indeed stages where progress slows down, but at such times you need to believe in the team's ability and trust that they can break through bottlenecks. My own experience is that if you can evaluate projects from a more underlying technical dimension, you can often make better judgments.

Autonomous Driving + Robotics: With end-to-end solutions, progress will indeed be faster

Host: I've heard a viewpoint about Waymo: the recent huge progress largely comes from shifting from the "hand-crafted rules + maps" method to end-to-end deep learning, which is the breakthrough brought by the Transformer wave. If Waymo had started 5 years ago instead of 15 years ago, would it have reached the same progress now?

Sundar Pichai: We mentioned robotics earlier; actually, you can view Waymo as a robotic system. Teams that entered the robotics field only in the past three years might indeed progress faster.

But Waymo is a highly complex system integration engineering project, somewhat like TSMC or SpaceX; you need to integrate at an extremely complex level. There is a lot of "implicit engineering capability" in Waymo, including how to build systems and how to advance these works; these all require time to accumulate.

Of course, I also believe the end-to-end method played a key role in this process. More importantly, Alphabet and Google have always persisted in investing in this team, letting it wait for the moment of technological explosion; this itself is a huge advantage.

Core Principle for Increasing Investment in Innovative Businesses: Evaluate Returns

Host: So how can this experience be migrated to other fields? For example, robotics; it now seems like it can be advanced faster. Will you bring hardware capabilities back in-house, or continue to rely on partners?

Sundar Pichai: We will maintain an open attitude. But from the experience of Waymo and TPU, if you really want to push the technology curve forward, especially in areas involving safety and regulation, you need to personally participate in the complete product feedback loop.

So I think, possessing first-party hardware capabilities will become very important.

Host: I have two more questions about capital allocation. In the past, Google has always maintained strong cash reserves and been relatively "conservative." But considering you have a large number of innovation opportunities and the core business continues to grow, from a return-on-investment perspective, should you be more aggressive, such as increasing leverage to put more funds into new projects or core businesses?

Sundar Pichai: This is a very good question. For example, if Waymo had reached this stage earlier, I would certainly have invested more capital earlier.

Essentially, you are evaluating returns. If you are very confident in the Return on Invested Capital (ROIC), then you should invest as much capital as possible.

But if you think part of the funds cannot temporarily find sufficiently high-return destinations, then you need to manage cautiously. This is also why we invest in other companies, such as Stripe, SpaceX, Anthropic, etc.

Our core principle has always been: be a responsible capital manager. And in the current stage of AI transformation, we do see more opportunities to deploy capital efficiently, so we are also increasing investment.

Looking back, I certainly wish we had increased investment in Waymo earlier, but at that time the technology maturity was not enough. For example, at a certain stage, from a safety perspective, it was not suitable to accelerate advancement.

Overall, I don't really think it was "lack of funds causing slow progress," but rather that the project itself has its natural growth rhythm.

How Google Allocates Computing Power to Engineers

Host: Last question. In the past, the R&D cost of tech companies was mainly "people," that is, engineers. But now, computing resources like TPU have become key investments. So inside Google, how do you do budget allocation? Do you allocate both manpower and computing power to projects simultaneously? Are they the same budget?

Sundar Pichai: Actually, we have always had computing power budgets, even in the era of traditional computing. But in the machine learning era, we use both TPU and GPU heavily, making computing power planning even more important.

We still attach great importance to manpower planning, but computing resources have now become extremely scarce. In some past stages, it was relatively loose, but now the constraints are very obvious. I personally spend at least an hour every week specifically looking at computing power allocation issues, and at a very detailed level.

I will see how much computing power resources each project and each team uses, and evaluate them. This is a very critical task at the current stage. In many cases, the scarcest resource is computing power, so we must ensure these resources are invested in the most valuable projects.
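One simple way to formalize "ensure scarce compute goes to the most valuable projects" is a greedy ranking by expected value per chip-hour. This is entirely a sketch under assumed inputs, not Google's actual allocation process; the project names and numbers are invented.

```python
def allocate(budget_hours: float, projects: list[dict]) -> dict[str, float]:
    """Grant requested chip-hours to projects in descending order of
    expected value per hour, until the shared budget runs out."""
    grants: dict[str, float] = {}
    ranked = sorted(projects, key=lambda p: p["value_per_hour"], reverse=True)
    for p in ranked:
        grant = min(p["requested"], budget_hours)
        grants[p["name"]] = grant
        budget_hours -= grant
        if budget_hours <= 0:
            break
    return grants

# Hypothetical weekly review: 1000 chip-hours across three requests.
demo = [
    {"name": "frontier-training", "requested": 600, "value_per_hour": 9.0},
    {"name": "ads-ranking", "requested": 300, "value_per_hour": 7.5},
    {"name": "internal-tools", "requested": 400, "value_per_hour": 2.0},
]
print(allocate(1000, demo))
# {'frontier-training': 600, 'ads-ranking': 300, 'internal-tools': 100}
```

The interesting part of the real problem is what this sketch hides: estimating "value per hour" across projects as different as Waymo, Search, and frontier training, which is exactly the judgment call the weekly review exists to make.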

Host: Then in the Google Cloud sector, you have to meet internal needs while also providing computing power externally; how is this allocation done?

Sundar Pichai: We do forward-looking planning. The cloud business team will make plans in advance, and we allocate resources based on these plans, while also taking internal needs into account.

In this process, you also sign long-term commitments with customers. Any commitment made to customers must be strictly fulfilled; these are contract-level obligations. Many problems are actually solved through advance planning.

In the planning process, everyone is in a resource-constrained environment. The cloud team might also feel that computing power is not enough, but through upfront planning, the whole thing can still operate.

Suggestion for Google One:

More popular direction: AI reads all document APIs

Host: Speaking of Google Cloud, I have saved up a product suggestion to make right now. A point that is done particularly well is GCP / MCP; you let AI directly operate cloud services in a programmatic way, which is very strong. You have opened up almost all capabilities, except for the most core permission management part.

In the past, a problem with Google Cloud was that there were too many functions; after users entered, they needed to create organizations, projects, and then find services, making the path quite complex. But now these don't matter anymore; you just need to say one sentence: "Help me add this function," and AI can handle it.

This actually amplifies Google Cloud's advantage—because its capability coverage is too broad.

We have similar problems at Stripe; functions are increasing, and users find it hard to navigate. But now the best "entry point" is actually an AI that has read all API documents; this experience is already very good.

The value of AI as an "orchestration layer" is beginning to show: whether for products or within enterprises. In the past, if a CEO wanted to integrate all data, it often meant doing a large-scale ERP project; now AI can directly integrate this data, which is a very good experience.

The more complex the product and the more capabilities it has, the more obvious this improvement becomes.

Sundar Pichai: I think we can do even better. But the direction you mentioned is correct; this is a huge opportunity.

Suggestion Two:

Let ordinary users also use long-duration AI

Host: Then I'll continue making product suggestions (laughs). Recently, products like OpenClaw have interested me; they are doing "stateful AI," which means AI that can run long-term and has continuous memory.

For example, automatically organizing news I care about and sending it to me every day; this kind of task requiring "continuity" is not yet well supported by mainstream AI products. Will this capability come?

Sundar Pichai: Directionally, this is the future. You hope users can run long-term tasks in a reliable and safe environment.

This involves issues of identity and permissions, but the overall direction is very clear; this is part of the agent-based future. Bringing this capability to ordinary users is a direction with great potential, and we are exploring it.

Host: I also agree very much. A company (made by the former Stripe CTO, later acquired by Meta) has already made an early version that allows users to build small applications with "persistence" themselves.

When users experience it for the first time, there is a strong sense of surprise. I feel that future consumer-grade products will essentially be "AI with programming capabilities," plus a suitable running environment, persistence capabilities, and cloud execution capabilities.

Now only a very small number of people (maybe 0.1%) build tools for themselves this way, but bringing it to the masses will be a very big opportunity.

Revealing the AI-ization Rhythm of Google Products

Next, it will possess deeper capabilities: Context Management

Host: I have another product suggestion. The search experience in Google Docs is obviously worse than in Gmail.

Email search is easy to use because keywords are more unique; but when searching documents, for example, if I want to find "2026 budget," these words are everywhere in the company, making it hard to find the specific one. Do you have this problem?

Sundar Pichai: I don't have it as strongly as you describe, but once you say it, I can completely resonate. My brain is already thinking about which team to send this conversation to (laughs). I know who to find to solve this problem.

I think we can make this experience better. As AI is deeply integrated into these products, you will see significant improvements in the next few months.

Actually, this is just the first stage, simply adding AI in. Next will be deeper capabilities, such as context management, caching, information integration, etc.; we still have a lot of room for improvement.

CEO's Core Task: Concentric Circle Diffusion

DeepMind's new workflow is promoted to other teams

Host: Many companies I contact, including some just founded, are already reconstructing their development processes, product processes, and even redefining the role of design teams. Is Google making similar adjustments?

Sundar Pichai: This process can be understood as "concentric circle diffusion." Some teams within the company have already undergone profound changes.

One of my core tasks is to diffuse this change to more teams, especially in 2026.

The reason it couldn't be fully rolled out early on is that these tools were not stable enough, a bit like a new world with great potential but not yet mature. But this year, I can clearly feel the curve accelerating.

Like Google DeepMind and some engineering teams, they have already greatly changed their working methods. They are using internal tools (called Jet Ski internally, Antigravity externally), essentially working in an "agent manager" environment.

The team already has a whole set of new workflows; it is a completely different development method. Last week we promoted this system to the Search team and are continuing to advance. In large companies, the difficulty of this kind of change lies in "change management." Small companies can switch quickly; large companies need time.

Four Problems Brought by the Overflow of Large Model Capabilities

Host: I'll add a few problems I've observed: AI capabilities are already very strong, but the degree to which enterprises truly use them is far from enough; there is an obvious situation of "capability overflow but insufficient utilization."

The problems I see can be roughly categorized into several types: First, engineers need time to truly learn "how to write high-quality prompts for AI." For the same task, the difference between a well-written prompt and a poorly written one is huge. Inside Stripe, there is another layer of "prompts specific to the Stripe system," such as knowing which tools to call. So there is both general prompt capability and company-specific prompt capability.

Second, the collaboration issues brought by AI-generated code are also obvious. Because the scope of changes is large and the code iteration speed is very fast, even being rewritten multiple times before officially going online, this makes multi-person collaboration even more difficult. Compared to when the code change rhythm was slower in the past, the current collaboration model hasn't fully caught up yet.

Looking beyond engineering, a bigger problem is data access. For example, if you want an agent to answer: "How many times a day does someone ask 'how is this deal progressing?' in companies around the world?" This information actually exists within the company and should be directly answerable by the agent. We can partially achieve this at Stripe, but when the company scale grows, we encounter permission system problems—who can access which data; this mechanism needs reconstruction.

Next is the issue of role definition. Divisions like engineering, product, and design are essentially products of the past era. As AI capabilities enhance, these roles may merge in certain scenarios.

So my overall judgment is: By 2026, model capabilities will be strong enough, but the actual usage level is still far from enough. How do you view this problem of "capabilities have arrived, but usage is insufficient"?

Google's Solution Approach: Point Breakthroughs

The difficulty lies in identity and permission control

Sundar Pichai: The problems you mentioned are actually the core issues that the Gemini enterprise team and the Antigravity team are solving. This is basically the product roadmap itself.

We are already using these tools internally and have indeed encountered the obstacles you mentioned, solving them step by step. These solutions will eventually become products released to the outside.

The current diffusion method is like this: During the use process, teams will find opportunities locally, such as Google's SRE team finding some processes that can be automated and starting to build workflows. This kind of "point breakthrough" is happening.

But what is more critical next is: How to systematize these capabilities? How to precipitate them into reusable skills? How to be called by models and used by the entire company?

Identity and permission control is a very complex problem; we are solving it. But these are indeed the core factors limiting the large-scale adoption of AI.

Additionally, our requirements for safety are very high, which adds another layer of complexity. Because once a mistake happens in a system of this scale, the cost is very high.

However, precisely because we are solving these "high-difficulty problems," once broken through, the capabilities brought will be more robust. Now can be seen as paying the "fixed cost," but once completed, you will see a huge leap in capabilities. Not just us; other companies in the industry are doing similar things.

2027 Will Be a Key Inflection Point for Non-Engineering Fields

Host: Google should do business re-forecasting several times a year, right? We do the same at Stripe; we set the budget annually, then do a re-forecast three times within the year. Essentially, this matter is: input the current business state (some in the brain, some in documents) into a function, and output a new forecast for the whole year.

Do you think the future will be completely done by AI for forecasting? The kind with no human involvement? When will Google see its first "fully agent-ized" forecast?

Sundar Pichai: I think in certain fields, 2027 will be a key inflection point. Even for the personnel doing this work now, their workflows will gradually shift to be AI-centric. In the short term, traditional methods might still be used for verification, but the switch will gradually complete.

I expect relatively obvious changes in 2027; many processes will undergo substantial transformations.

Host: That is to say, the engineering field is the earliest adopter, but processes outside of engineering will start to change significantly around 2027?

Sundar Pichai: Yes. Additionally, you mentioned robotics or Waymo earlier; those types of companies have another advantage: they can be AI-native from the very beginning. Startups can directly build this capability through recruitment and organizational design. For large companies like us, we need to undergo retraining and organizational transformation, which is itself a challenge.

This is a major advantage for young companies, and we need to proactively promote this transformation.

Host: Last question. We talked a lot about cases inside Google that grew from "small projects," such as Transformer. So now, inside Google, are there any projects you think are very small but have great potential?

Sundar Pichai: For example, the direction of "space data centers"; initially, it was a very small team advancing to the first milestone with a very small budget. I think even for very big ideas, they should start from a small scale.

Another example is, I just heard someone talk yesterday about post-training optimization they are doing; it is a very specific, very detailed improvement. But my feeling after listening was that once this optimization goes online, it will bring obvious capability improvements.

This is the characteristic of this era: many seemingly small improvements will produce a large impact. I cannot expand on specific details right now, but it will definitely be released in the future.

Host: Sounds like one is space data centers, and the other is a new machine learning technology path. Very interesting answers.

Sundar Pichai: Thank you, happy to chat about these.

Reference Link:

https://www.youtube.com/watch?v=bTA8sjgvA4c
