DeepMind's Operating Model Revealed, Hinting It Never Lost to OpenAI: With 20% of Employee Time Devoted to Exploration, a Conservative Giant Becomes an 'Experiment Maniac'


Author | Gao Yunyi

Many people know that Transformer was invented by Google. Yet, ChatGPT was not built by Google. Over the past few years, this has almost become Silicon Valley's biggest "regretful footnote."

However, if you truly step into today's Google DeepMind, you will discover that Google is not "slow," but is playing a much larger game.

Recently, Google DeepMind Chief Operating Officer Lila Ibrahim and Google Senior Vice President of Research, Technology, and Society James Manyika systematically deconstructed the changes happening within the company for the first time in a podcast:

  • How does DeepMind actually operate?

  • Why can Gemini iterate a new generation every 6 months?

  • How is Google advancing simultaneously in fields like quantum computing, materials science, weather forecasting, and space computing?

  • And how are AI-native products truly landing, rather than staying in the lab?

If OpenAI is a startup sprinting at high speed, then today's Google is more like a restarted "Modern Bell Labs".

DeepMind's operating model has two core methodologies:

  • Provide direction, not answers.

Set grand research agendas without prescribing paths; researchers enjoy a high degree of freedom.

  • Extensive interdisciplinary research.

Bioethicists, neuroscientists, and computer scientists working at the same table is a daily occurrence at Google.

Behind this mechanism lies another key variable: Demis Hassabis, the soul of Google DeepMind. He possesses precise judgment of timing, capable of setting direction from the top down while allowing innovation to emerge from the bottom up.

For instance, Demis Hassabis judged that by 2026 Gemini would be mature enough to fully absorb DeepMind's accumulated work in learning science. Such judgments determine where resources are concentrated and when products enter an explosion phase.

James Manyika mentioned that the biggest change in the past three years has been the merger of Google Brain and DeepMind, establishing a central AI engine around Gemini. Under this model:

  • Gemini is the underlying infrastructure for the entire company.

  • A major iteration is completed every 5 to 6 months.

  • Once a model is released, it immediately enters core products like Search, Workspace, and the Gemini App.

Meanwhile, Google's lab culture is returning, and on a larger scale than before.

James Manyika revealed that the lab is currently advancing approximately 30 projects simultaneously.

Google has an innovation mechanism famous in Silicon Valley: employees can dedicate 20% of their time to exploration outside their main projects, which continuously spawns great products that feed back into Google. Examples include NotebookLM, which can digest your materials, and Flow, an AI filmmaking tool.

If you only look at generative AI, you underestimate Google, because DeepMind is simultaneously advancing biological research, education systems, materials science, weather forecasting, quantum computing, space computing plans, and more. In many of these fields, Google has already achieved milestone results.

From breakthroughs in quantum error correction, to a flood prediction system covering 150 countries, to Project Suncatcher, which aims to send TPUs into space for training, these bets demonstrate a rare long-term perspective.

From "cautious release" to "learning through release," Google has completed a shift in rhythm. The company is redefining what "long-termism" means. Now that it has truly begun to accelerate, people may realize its chessboard is larger than anyone imagined.

Below are the highlights from the podcast. For more on Google's latest progress, see the full episode linked at the end of this article.

Google DeepMind's Operating Model: Bell Labs and the Apollo Program

Host: DeepMind CEO Demis Hassabis previously described DeepMind on our show as a modern version of Bell Labs. Lila, what does this mean in practice? Can you walk us through the research model? Is it a lab model or a corporate operation?

Lila Ibrahim: I want to start with our mission, "Build AI responsibly to benefit humanity," because everything is based on this.

We set extremely ambitious research agendas, clarifying only the broad direction without prescribing specific methods. Our approach draws inspiration from the golden age of Bell Labs, from the Apollo Program, and even from Pixar. The core is to gather top talent and create an environment that enables their success and lets them explore freely.

First, we define grand research directions, telling teams which fields to focus on without dictating how they work.

Second, due to the extremely broad scope of research, we must build interdisciplinary teams. We aim to cultivate a culture where bioethicists, computer scientists, and neuroscientists can work side by side; we believe this is the key to generating breakthroughs and creating value. This approach has already yielded many extraordinary results. We also dare to explore and know how to judge timing. Demis Hassabis is excellent at pacing: knowing when to invest in exploration, set grand goals, and assess progress; and having the courage to decisively stop or double down.

A great example: over the past few years, we have been cultivating the field of learning science, researching how humans learn and how to improve learning methods. This year, Demis Hassabis judged that Gemini was mature enough for us to fully inject our accumulated learning-science work into it, one of our key directions, thereby enhancing Gemini's capabilities for learners. Google DeepMind does have unique strengths in judging timing.

Host: Let's review the process again. As you just said, Demis Hassabis judged that Gemini is ready to undertake learning science-related capabilities, so DeepMind began to advance. What is the approximate ratio of top-down to bottom-up work at Google DeepMind? OpenAI once described its model as a bunch of startups inside a big company. Is Google a similar model, or more top-down?

Lila Ibrahim: Because our mission is so grand, we need to find the core directions where AI can help humans unravel the mysteries of the universe and address major challenges. This scope is broad enough that we can conduct meteorological research to improve weather forecasts, or work on AlphaFold (an AI program developed by DeepMind that accurately predicts the 3D structures of proteins, regarded as a revolution in structural biology); such protein structure prediction helps us understand diseases and develop therapies. We can also continuously optimize generative AI to improve people's lives.

We adopt a very broad portfolio, while leaving room for researchers to explore. This is what I said at the beginning: we need to find the right talent, people who are mission-driven, share our values, are willing to explore, pursue great impact, and can achieve scale by leveraging the Google platform. Demis Hassabis excels at this kind of thinking; he has been immersed in this field for a long time. DeepMind has existed for 16 years; this is almost his life's mission. At the same time, our team is full of creative people who like interdisciplinary collaboration and hope to change the world; they also propose ideas and approaches from the bottom up. So it is a combination of both: part is top-down, led by Demis Hassabis, and part is bottom-up exploration by the team.

Host: This organizational structure places high demands on management and talent. Let's widen the view to the entire tech industry. There was a time when many tech companies gave top talent great freedom to explore directions with no short-term results in sight. Then, as the AI race suddenly arrived, many companies tied researchers doing long-term projects more tightly to products, and long-term research was almost required to produce immediate product value. Did this change happen inside DeepMind as well?

Lila Ibrahim: I have been at Google for about eight years, and we have indeed gone through an evolution. But the reason Google DeepMind retains many employees for the long term is precisely that our portfolio is broad enough. Some people want to keep doing deep research on frontier AI, or exploration in scientific directions; we have space to support that pure exploration. At the same time, we can also ship generative-AI progress into products, such as the series of breakthroughs Gemini achieved last year.

Host: Let me ask further. Google's internal transformation is described as: no longer letting each product department formulate its own AI roadmap, but having a central engine within the company, an AI department, responsible for building AI capabilities and then empowering various product departments. Can you introduce this process?

Lila Ibrahim: This is also one of the most exciting changes of the past few years, namely the merger of Google Brain and DeepMind, bringing together Google's best AI teams and research forces and allowing us to cover a broader range of fields. As you said, our positioning is an AI innovation engine. But I wouldn't say we "distribute" technology to other Google teams; rather, we collaborate closely with product departments and users to understand real needs, so models fit their scenarios from the very beginning, advancing in a collaborative and responsible manner. By the time the technology lands in Google products, it has already undergone extensive testing and can be optimized for specific scenarios. This has brought good results; for example, after we released Gemini 3, we could immediately open it up to a large number of developers and users.

Host: One last question, and then I'll hand it over to James. Our show has an observation: Sundar Pichai worked at McKinsey, and now Google's restructuring, centralization, and re-coordination of teams looks very much like a McKinsey-style approach. Is this true?

James Manyika: I also worked at McKinsey myself, so perhaps I can respond to this organizational question. Google's current landscape is special: on one hand, there is the Gemini project, which is the foundation of all capabilities, building large-scale models like Gemini, Gemini 2.5, Gemini 3, and so on. Three years ago, we integrated the Google Brain and DeepMind teams to launch the Gemini project. Today, this project supports products across the entire company; you can see Gemini in Search, Google Workspace, NotebookLM, and all other products. It is the underlying foundation, which is why Google DeepMind and the Gemini project have become the core engine.

Besides this, the company also does a large amount of deep scientific research, focusing on the most fundamental problems and opening up numerous avenues for research and innovation. We also have many other ambitious projects, such as Genie, which builds world models, and specialized work for Waymo to enhance autonomous-driving model capabilities. So it is not strictly top-down; rather, everything is built around the Gemini project to ensure rapid iteration. We now have a new generation of Gemini roughly every 6 months, and it immediately lands across all products without delay. The latest version of Gemini, once launched, appears everywhere from Search to the Gemini App. This is the core change of the past three years.

The Return of Google Labs and the Landing of AI-Native Products

Host: Let's talk about the lab. People who used Google products early on remember that Google once had an extremely experimental era; later, the lab disappeared for a time. Although experiments never completely stopped, after the lab was restarted, we began to see Google launch a large number of experimental projects; we haven't seen such a scene in a long time. How big a role does the lab play? Why has the lab returned?

James Manyika: The lab is very interesting. Three years ago, driven by Sundar Pichai, we restarted the lab. We were at the inflection point of the AI explosion; we wanted to explore, experiment, and build completely AI-native products. The lab's idea is: take the top research results from Google DeepMind, Google Research, and the rest of the company, and focus on building experimental AI-native products.

The most familiar one is probably NotebookLM (an AI-native research and learning tool from Google Labs, built on the Gemini model; its core idea is letting you "feed" your own materials to the AI, so the AI works from your exclusive content). Its origin is interesting. Initially it was called Tailwind, with only four or five people working on it. The idea was to build an AI-native research tool that works from the user's own content: you can import materials, books, papers, drafts, anything you want, into a notebook and then interact with it. The idea was partly inspired by the writer Steven Johnson, who keeps decades of notes and manuscript drafts; he wanted a product where he could put all his materials in and then ask interactively: What was I thinking in 1997? What did that draft say? NotebookLM eventually became exactly that kind of research tool; grounded in the user's own content, it generates summaries or drafts with citations attached, which is its core function. If it cites your content, it marks the source, and you can click to jump back to the original text, which is very practical.
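To make the mechanism concrete: the grounded-citation behavior described above, answers assembled only from user-supplied sources with markers that link back to the original passage, maps onto a simple prompting-plus-parsing pattern. Below is a minimal, hypothetical Python sketch of that pattern; the `complete` callable is a stand-in for any chat-completion API, and none of this is Google's actual implementation.

```python
# Minimal, hypothetical sketch of source-grounded answering with
# citations, in the style NotebookLM popularized. A `complete` callable
# (any chat-completion API) would consume the prompt built below;
# none of this is Google code.
import re
from dataclasses import dataclass

@dataclass
class Source:
    source_id: int
    title: str
    text: str  # user-supplied material: paper, notes, transcript...

def build_grounded_prompt(sources: list[Source], question: str) -> str:
    """Pack numbered sources into the prompt and demand cited answers."""
    blocks = "\n\n".join(f"[{s.source_id}] {s.title}\n{s.text}" for s in sources)
    return (
        "Answer ONLY from the numbered sources below. After every claim, "
        "append its source marker, e.g. [2]. If the sources do not cover "
        "the question, say so instead of guessing.\n\n"
        f"{blocks}\n\nQuestion: {question}"
    )

def cited_spans(answer: str) -> list[tuple[str, int]]:
    """Split an answer into (claim, source_id) pairs so a UI can turn
    each [n] marker into a link back to the original passage."""
    return [(claim.strip(), int(sid))
            for claim, sid in re.findall(r"([^.\[\]]+)\s*\[(\d+)\]", answer)]
```

A front end would then render each (claim, source_id) pair as a link into the source viewer, which is the click-to-jump-back behavior James describes.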

Later we thought: sometimes I don't just want to read materials, I want to listen to them. So we added an AI Audio Overview feature; the effect is like a podcast, with two hosts discussing and interpreting the material. The idea originally came from teams like Jeff Dean's; they had to read a large number of computer science papers every day and wanted to listen to paper summaries during their commute, to filter what deserved a close read. Moreover, people learn better through conversation and discussion; this is the value of seminars. So the Audio Overview feature was born, and the product truly exploded because of it. Every time I do an AI demo, I build a notebook on the spot and play the podcast; people seeing it for the first time are always shocked. Many audience members ask me: "Did you train it with your voice?" Because it sounds very much like me. I always say: No, it's just that it always opens with "Let's break this down"; almost all podcasts start that way.

NotebookLM also has a great usage pattern: you can import content in many formats, such as papers, YouTube videos, and local files. I once used it to process papers from over 100 countries in different languages; after importing them all, I could interact across languages directly. It now also supports generating Video Overviews, complete with charts and slides. This is what happens in the lab: turning top results from DeepMind and Google Research into excellent AI-native products.

Another example is Flow (an AI filmmaking tool from Google Labs, driven by DeepMind's Veo, Imagen, and Gemini models, built for creatives and capable of turning text and images into coherent, high-quality video clips and complete scenes). Let me tell a small story: the first and last mountain I ever climbed was Cotopaxi in Ecuador. I wanted to make a video of it, but there were moments I didn't film because I was focused on climbing. For example, my water bottle fell out of my backpack, rolled down the glacier, and disappeared into the dark. I wanted to recreate that moment in animation, so I used Google's video-generation tool Flow, prompting it to generate a documentary-style animation to insert into the video. In the past, I would have had to hire an animator. Flow is a magical product born in the lab.

At the time, lab head Josh Woodward, Demis Hassabis, and a few of us gathered to discuss: if we integrated our existing tools, what practical things could we make? The initial version was relatively rough; later we brought in real filmmakers for feedback. One of the lab's defining traits is deep cooperation with creators, letting them help us polish the tools. That is how Flow was born. You can prompt video generation shot by shot while keeping the shots coherent, which is where the name "Flow" comes from. The first version wasn't easy to use; filmmakers told us they needed to create shot by shot, splice, and keep everything coherent, so we optimized accordingly.

The lab is advancing about 30 experimental projects simultaneously; you can see them on the Google Labs website.

Host: I have a request: open these up more widely. Many projects look very interesting, but every time I try one, I hit a waitlist.

James Manyika: We will try our best. For example, Pomelli (an AI marketing tool for traditional small and medium-sized enterprises, jointly developed by Google Labs and DeepMind) is built not for tech startups but for traditional SMEs, helping them quickly build creative online display pages. And AIR Studio (a no-code/low-code AI prototyping platform) is aimed at developers. We hope to build top-tier AI tools for all kinds of creators: developers, artists, filmmakers, and musicians.

20% Time Used for Innovation

Host: There are two products I particularly want to try, which might become the next NotebookLM. One is CC (a personal AI assistant and productivity agent based on Gemini, something like a "super version of Notion AI plus a personal schedule manager"), an experimental productivity agent inside Google. The other is Disco (a generative browser based on Gemini 3, with GenTabs as its core capability), which can generate web applications from a bunch of links. For example, if you are planning weekend activities and have a bunch of web pages open, it can automatically generate a corresponding application, such as a custom map marking the activity locations; you select a date, and it highlights what is available that day.
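As an aside, the date-filter behavior the host describes boils down to a small data transformation once pages are parsed into structured records. Here is a hypothetical sketch; the extraction step is hard-coded purely for illustration, since Disco's real GenTabs pipeline is not public.

```python
# Hypothetical sketch of Disco's "pick a date, see what's open" behavior.
# Real GenTabs extraction is model-driven and not public; the extractor
# here is hard-coded purely for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class Activity:
    name: str
    location: tuple[float, float]  # (lat, lon) for a map layer
    open_dates: set[date]          # days the activity is available

def extract_activities(urls: list[str]) -> list[Activity]:
    """Stand-in for the model step that reads each open tab and emits
    structured records; a real version would call an LLM on the pages."""
    return [
        Activity("Farmers market", (37.78, -122.41),
                 {date(2026, 2, 7)}),
        Activity("Museum late night", (37.80, -122.40),
                 {date(2026, 2, 6), date(2026, 2, 7)}),
    ]

def available_on(activities: list[Activity], day: date) -> list[Activity]:
    """The generated app's core logic: highlight what's open that day."""
    return [a for a in activities if day in a.open_dates]

if __name__ == "__main__":
    acts = extract_activities(["https://example.com/events"])
    print([a.name for a in available_on(acts, date(2026, 2, 7))])
```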

I want to ask you both: In the past, Google had the so-called "20% time" mechanism, where employees could use 20% of their working time for projects outside their main duties; many blockbuster products like Gmail came from this. Who made these experimental projects? Has the 20% time returned in some form? How are so many interesting experiments advanced within the company?

James Manyika: I can answer first. This mechanism actually still exists. Back to the lab, about 80% of projects come from the lab team, and the other 20% come from 20% time projects.

Let me give an example from education, a direction Lila and I both value highly. An employee at Google Research, whose main job has nothing to do with education, proposed an idea: can we let people learn in ways that suit them? Current AI tools can already support diverse learning methods. This project eventually became Learn Your Way (an AI personalized-learning experiment from Google Labs, built on LearnLM; its core is turning static textbooks and materials into learning experiences adapted to grade level, tailored to interests, and featuring multimodal interaction); you can find this experimental product in Google Labs. It was not made by the lab team; it was an idea from an employee in another department. We continue to receive excellent ideas from across the company.

Another example is Co-Scientist (a multi-agent scientific research collaboration system built by Google Research based on Gemini 2.0, positioned as a virtual research partner for human scientists; its core is simulating the complete scientific research process of "hypothesis generation — debate — verification — iteration" to help researchers accelerate discovery and break through thinking limitations), coming from DeepMind and Google Research. It is a tool to help scientists make scientific discoveries; it will be placed in the lab for testing and iteration later, but it was not built inside the lab. The mechanism for generating ideas from all company employees remains very active and has brought much exciting innovation.

Lila Ibrahim: DeepMind researchers also get the opportunity to build experimental products. This is part of our culture: giving everyone space to explore and insisting on an interdisciplinary approach, not limited to researchers, which is very exciting. We gather different perspectives to solve real challenges. Sometimes it is even about using AI tools to improve our own efficiency: letting the legal team review research papers and get feedback faster, building more automated red-team testing for the responsibility team, and interpreting ancient documents.

We have a project initiated independently by a researcher: not only focusing on today's intelligence, but also digging into forgotten historical knowledge. He led a project that can not only identify the age of clay tablets but also complete missing content and translate them. This is Project ANEKS (an AI research project at Google DeepMind focused on ancient documents). As James said, Google lacks neither smart, curious people nor a culture that supports such exploration.

Host: Let me explain why I pay so much attention to this point. In the last century, the average lifespan of S&P 500 companies (a stock index of 500 top listed companies in the US) was 67 years; now it is only 15 years. With the arrival of the AI era, changes will be faster; the ability to source ideas, experiment, and launch new projects is crucial for a company's long-term survival. So I am very concerned about how Google operates internally.

Lila Ibrahim: I used to work in venture capital; I once thought VC was the most amazing place because you got to meet entrepreneurs with bold ideas. But my feeling at Google is that innovation is part of the daily culture, happening in every department. DeepMind and other Google departments just express it a bit differently; the entire company supports innovation.

James Manyika: Let me add one more point. Google's research culture is very unique. Going back to the Bell Labs you mentioned at the beginning, whether it is DeepMind or Google Research, we adhere to one concept: from research to reality. Many research breakthroughs are transformed into real-world impact very quickly. AlphaFold is a great example; it is a Nobel Prize-level breakthrough, and now over 190 countries and 3.5 million researchers worldwide are using it. There are also breakthroughs in the field of weather forecasting; they are now put into actual use. Our flood warning system already covers 150 countries and 2 billion people. Transforming scientific breakthroughs into social impact is a very unique point of ours.

Host: There is a question I must ask, otherwise the audience will ask why I didn't. For many years, the outside world's impression of Google was that it "dared not release products." The most typical example: the Transformer was invented at Google, while ChatGPT was the first mainstream application built on it. When I interviewed Sam Altman at the end of the year, he said something that attracted much attention at the time: if Google had taken us seriously early on, they would have crushed us long ago; now they are a powerful competitor. Is "releasing products" becoming more important inside Google? Is the ambition to push experiments to the public stronger?

James Manyika: I think yes, and this is a natural evolution. Google has always generated a large number of research breakthroughs, and there has always been a healthy tension within us: is the product ready? We cannot always judge perfectly, but I think this tension is a good thing; it reflects boldness coexisting with responsibility. At the same time, we have realized that for many experiments and innovations, only by letting people use and experience them can we learn. This goes back to the scientific method. We do a lot of red-team testing on products, but real usage, even malicious usage, teaches us more. This is the evolution: release useful products and learn from the release. We now often say "continuous delivery"; Gemini models iterate a new generation roughly every 5 to 6 months, which is the change you see.

AI and Education: Boost or Hidden Danger?

Host: AI and education are directions you are both very concerned about and have invested heavily in. One of your recent studies shows that 85% of students over 18 are using AI; I guess the remaining 15% are not telling the truth; 81% of teachers say they are using AI, far higher than the global public AI usage rate of 66%. AI is having a real impact on education. Starting from your perspective: Is this overall positive for education? There are also many critical voices, such as students using AI to cheat, and teachers grading homework generated by cheating. What is the actual situation?

Lila Ibrahim: First, as James said earlier, this is a very important field. Our approach to it is consistent with other fields: we must think boldly about how AI can change how people learn and unleash human potential, while remaining responsible, identifying risks, and investing resources to reduce them. In our survey, we also found that about 80% of adult learners believe AI helps with learning; it can deliver information in a suitable form at the moment it is needed. One direction we focus on is making AI not just give answers, but guide you to break down problems step by step. All of this is built on the scientific method.
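At the interface level, the "guide, don't answer" behavior Lila describes can be approximated with a system prompt and a turn policy. Below is a minimal hypothetical sketch; the `complete` callable is a placeholder for any chat API, and the prompt is illustrative, not LearnLM's actual configuration.

```python
# Minimal sketch of a "guided learning" turn policy: the tutor is told
# to decompose the problem and withhold the final answer. The `complete`
# callable is a placeholder, not LearnLM's real API or prompt.

GUIDED_TUTOR_PROMPT = """You are a tutor. Never state the final answer
outright. Instead:
1. Ask what the student already knows about the problem.
2. Break the problem into one small step at a time.
3. After each student reply, confirm or gently correct it, then pose
   the next step as a question.
4. Only confirm the full solution once the student has produced it."""

def tutor_turn(history: list[dict], student_msg: str, complete) -> str:
    """Run one tutoring turn. `complete` stands in for any chat API that
    accepts a list of {"role": ..., "content": ...} messages."""
    messages = ([{"role": "system", "content": GUIDED_TUTOR_PROMPT}]
                + history
                + [{"role": "user", "content": student_msg}])
    reply = complete(messages)
    history.append({"role": "user", "content": student_msg})
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    fake = lambda msgs: "What do you already know about common denominators?"
    print(tutor_turn([], "How do I add 1/3 and 1/4?", complete=fake))
```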

Three years ago, we decided to treat learning as a first-class scientific problem: How do humans learn? Google has relevant experience and expertise internally, and a large number of researchers worldwide work on this. We cooperated very carefully with education experts and global educators to launch LearnLM.

This year, we fully injected this capability into Gemini and launched guided-learning features in the Gemini App to help users break down problems step by step, teaching you how to learn and how to analyze. I am also a parent of teenagers, and I often run "A/B tests" at home.

Host: You should let one use AI and one not, to see who ends up better.

Lila Ibrahim: I will add that to the next round of experiments. One of my daughters has dyslexia; the existing education system is not built for her. But I found that when she integrated AI into her learning, whether breaking down math problems or helping her organize scattered thoughts into fluent text, she felt an unprecedented sense of confidence. I also have a sister with a physical disability; the education system was not designed for her either. Looking across the world, too many students are left behind because they lack suitable tools.

Our vision is to give every student a personalized tutor and every teacher a teaching assistant. AI is a productivity tool; it can change how teachers and students interact. We are not saying AI is magic; the teacher is the core, but AI can free teachers to return to real human-to-human interaction. We have already seen good progress in teacher-productivity tools. I recently visited Northern Ireland, where local teachers and the government ran a pilot; their sticky notes were filled with what they had gained: on average, each teacher saves 10 hours per week, using the reclaimed time to be with their families and to design lesson plans for the 30-plus students with different needs in their classes. This is very inspiring. But we are still at an early stage; we must recognize that this matters enormously, affecting a person's entire life. Helping people learn, opening up opportunities, and feeding what we learn back into research is crucial.

James Manyika: Let me add a point. We found that education, like other sectors of society, cannot simply graft new technology onto existing processes; workflows must be redesigned. Take learning as an example: everyone is very worried about cheating. In a world where AI is everywhere, perhaps we should no longer examine and assess in the traditional way. Some school districts found that when students use guided learning, they are truly learning, and their mastery improves; but if it is just to rush through homework overnight, they won't use it seriously. So these districts ran an experiment: more frequent weekly tests. Students might groan at the news of more exams, but the result was that with more tests, students voluntarily spent longer in guided learning to prepare, and learning outcomes actually improved. This is an example of why we need to reimagine the learning process rather than force technology into existing structures. Through conversations with teachers, schools, and districts, we have gathered many interesting experimental findings. We are still very early, but concerns about issues like cognitive offloading are real, and we must take them seriously.

Host: I want to stay on this point. As with many technologies, especially AI, the worry is: motivated people will use it well and their abilities will grow enormously, while those who use it badly or not at all will fall further behind. The New York Times recently reported that not only students but also teachers are using ChatGPT, and some students are unhappy about it. Students at Northeastern University found spelling errors in professors' slides and extra limbs in images, telltale traces of AI generation. How do you view this problem, which may exacerbate social stratification?

Lila Ibrahim: This reminds me of when computers were first introduced into classrooms and universities. We can learn a lot from that history. On one hand, we can proactively do some things; on the other, we are gathering leaders from all sides to discuss responses at the system level. We bring administrators together to discuss establishing frameworks for responsible technology use within their institutions.

The current situation is a bit chaotic; everyone is doing their own thing, and we need to build consensus: AI is not going away, and fair access and literacy are crucial. Some students use AI to get ahead, while others dare not use it for fear of being seen as cheating; this creates divergence, and we have also observed gender differences. What we can do is bring leaders together to discuss how to open a new chapter: how to establish guardrails while maximizing benefits and reducing risks. At the end of last year, James, I, and several colleagues held an event to share best practices, exchanging what works and what doesn't; our researchers also participated. We also provide practical training for teachers on using the tools responsibly. This is about unleashing productivity and potential rather than replacement. And incentive design must keep up; that is beyond doubt.

Frontier Tech Progress: Quantum Computing, Materials Science, Weather Forecasting, Space Program

Host: May I ask James: What is the current status of quantum computing? Its development speed is faster than many people expected.

James Manyika: We have a top-tier quantum AI team doing groundbreaking work. Overall, the progress of quantum computing is faster than public perception. The ultimate goal of quantum computing is to build fully fault-tolerant quantum computers; there are many routes. The mainstream direction is superconducting qubits, which is what our team is working on; many teams worldwide are researching this path. The complexity is high, but it is considered the most promising direction. In addition, there are multiple technical routes such as neutral atoms.

On concrete progress: there has been huge progress in the underlying chips. For example, our Willow chip achieved a major milestone a year and a half ago. It completed a benchmark called RCS (random circuit sampling): a computation that would take a top classical supercomputer on the order of 10 septillion years, Willow finished in under five minutes. It can also correct errors in a groundbreaking way.

Another core obstacle quantum computing has always faced is scalable error correction: how to keep reducing error rates while adding qubits and scaling up. This was a real breakthrough, and the reason we won a Breakthrough of the Year award: it was the first demonstration of below-threshold error correction, meaning that as the system scales, the logical error rate decreases, which is exactly the result we want.
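For readers who want the quantitative meaning of "below threshold": in surface-code error correction, each increase of the code distance d by 2 should divide the logical error rate by a suppression factor Λ greater than 1 (Willow's reported Λ was a bit above 2). A back-of-the-envelope sketch with illustrative numbers:

```python
# Back-of-the-envelope: what "below threshold" means quantitatively.
# In surface codes, growing the code distance d by 2 should divide the
# logical error rate by a suppression factor lam (Λ). Willow's reported
# Λ was a bit above 2; all numbers below are illustrative only.

def logical_error_rates(eps_at_d3: float, lam: float,
                        distances=(3, 5, 7, 9, 11)) -> dict[int, float]:
    """Project the per-cycle logical error rate at each code distance,
    dividing by lam for every step d -> d + 2."""
    rates, eps = {}, eps_at_d3
    for d in distances:
        rates[d] = eps
        eps /= lam
    return rates

if __name__ == "__main__":
    for d, eps in logical_error_rates(eps_at_d3=3e-3, lam=2.14).items():
        print(f"d={d:2d}  logical error/cycle ~ {eps:.2e}")
```

Above threshold the same scaling would multiply, not divide, the error rate, which is why crossing it matters so much.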

Another major breakthrough came at the end of last year. Previously, all the benchmark tests, including the one I just mentioned, were benchmarks only, with no practical use. But last year we achieved useful computation for the first time, namely Quantum Echoes; the results made the cover of Nature. It completed a useful calculation, studying the spin dynamics of molecules, which cannot be done by other means. We also collaborated with a team at Berkeley, which verified the results in the lab through nuclear magnetic resonance experiments. This is the first case of practical quantum computing.

Taken together, quantum computing is progressing much faster than the "decades away" timeline people assumed. Within about five years, we will start to see practical applications of quantum computing, which is very exciting.

Host: Materials science is a relatively neglected field in AI research; AI can discover new materials through predictive techniques. Lila, can you introduce the current progress?

Lila Ibrahim: This goes back to our core idea: what fundamental problems can AI help us solve, deepening our understanding of the world and thereby opening doors for entire fields? AlphaFold is one of them. The AlphaGeometry you mentioned (an AI system developed by DeepMind that can solve difficult Olympiad geometry proofs at the level of an International Mathematical Olympiad gold medalist) and our materials-science project are all very exciting. We have expanded the roughly 40,000 known stable crystals to over 400,000, which are now being tested in labs and research. What does this mean? Imagine better electric-vehicle batteries, or superconductors for supercomputers. Many breakthroughs depend on new materials. We are still in the early stages, but we believe this is a very promising direction that could change how we live and work.
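A note on how "stable" is decided in this kind of screening: the conventional criterion is the energy above the convex hull of known compounds, where a candidate at or very near zero is a stability candidate. The sketch below illustrates that filter with made-up entries; real pipelines validate the energies with DFT calculations.

```python
# Sketch of the standard stability screen in computational materials
# discovery: keep candidates whose energy above the convex hull is
# (near) zero. Entries below are made up purely for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    formula: str
    e_above_hull_ev_per_atom: float  # from a DFT calc or an ML predictor

def stable_candidates(cands: list[Candidate],
                      tol: float = 0.0) -> list[Candidate]:
    """A structure on the hull (e_hull == 0) is thermodynamically stable;
    a small positive tolerance admits near-hull metastable phases."""
    return [c for c in cands if c.e_above_hull_ev_per_atom <= tol]

if __name__ == "__main__":
    pool = [
        Candidate("Li7P3S11", 0.000),   # on the hull -> stable
        Candidate("NaYCl4",   0.012),   # near-hull metastable
        Candidate("K2MnO5",   0.310),   # well above hull -> discard
    ]
    print([c.formula for c in stable_candidates(pool, tol=0.025)])
```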

Host: After discovering new materials, what will it bring? For example, materials as thin as T-shirts but with warmth comparable to winter clothing?

Lila Ibrahim: Exactly. Everything around you can be reimagined through new materials. Take batteries and electric vehicles: how to make the body lighter, the range longer, the charging faster, breaking through existing physical limits. All of this can be achieved through breakthroughs in basic materials.

Host: Next is weather forecasting; Google is pushing AI meteorology in many directions.

James Manyika: We have a very large weather effort, advanced jointly by DeepMind and Google Research. Weather forecasting has many dimensions: ordinary forecasts, what the weather will be tomorrow or next week. GraphCast (a global medium-range weather forecasting AI model based on graph neural networks, launched by Google DeepMind in 2023 and a milestone in the field) comes from DeepMind and is currently a top model in the industry. We also predict other weather events: monsoons, hurricanes, floods, and other extreme weather.

Let me give an example that affects life safety: the field has long known that if flood warnings can be issued more than 6 days in advance, lives can be saved; the UN estimates this could cut disaster losses in half. It has always been a hard problem. Two and a half years ago, our team built a model to predict river floods and piloted it successfully in Bangladesh. Today, our flood prediction covers 150 countries and 2 billion people. This is a typical case of moving from breakthrough innovation to real social value. We also collaborate with the National Hurricane Center; we can predict 50 different paths of a hurricane 15 days in advance, and we successfully tracked Hurricane Melissa. This kind of information matters enormously for disaster response and can also be used for everyday scenarios like flight scheduling.
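"50 different paths" is ensemble forecasting: run many perturbed simulations and read risk off the spread of the members. The toy sketch below shows the downstream use, estimating the probability that a storm passes near a location; simple random-walk tracks stand in for real physics-based or learned model members.

```python
# Toy illustration of ensemble track forecasting: simulate 50 perturbed
# tracks, then report the fraction of members passing near a location.
# Random walks stand in for real physics-based or learned model members.
import math
import random

def simulate_track(start, days=15, drift=(0.7, -1.5), noise=0.35):
    """One ensemble member: daily (lat, lon) positions with a mean
    drift plus random perturbations."""
    lat, lon = start
    track = [(lat, lon)]
    for _ in range(days):
        lat += drift[0] + random.gauss(0, noise)
        lon += drift[1] + random.gauss(0, noise)
        track.append((lat, lon))
    return track

def strike_probability(start, target, members=50, radius_deg=1.5):
    """Fraction of ensemble members passing within radius_deg of target."""
    hits = sum(
        any(math.dist(p, target) <= radius_deg for p in simulate_track(start))
        for _ in range(members)
    )
    return hits / members

if __name__ == "__main__":
    # A storm near 15N 55W; how likely is it to pass near 25N 77W?
    print(strike_probability(start=(15.0, -55.0), target=(25.0, -77.0)))
```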

Host: The last project: What is Project Suncatcher (Google's "Space AI Data Center" plan, deploying a constellation of solar satellites in Earth orbit, equipped with TPUs, using infinite solar energy in space for AI computing)?

James Manyika: This is a classic Google-style crazy idea. We asked: how do we train AI systems today, and in 100 years, given the compute and energy needed to train models, how will it be done? In 100 years we will surely train in space; after all, the sun's energy is a million billion times that of Earth, and it is uninterrupted 24 hours a day. Why not start moving toward that future now? Project Suncatcher is that kind of Google-style moonshot.

We have laid out several key milestones and plan to send TPUs, our dedicated AI chips, into space for training. We really are going to send chips into space; the first milestone is to complete several training missions in space by 2027. This is Project Suncatcher, moving step by step toward that future. Some people associate it with the Dyson Sphere (a sci-fi-scale megastructure concept proposed by physicist Freeman Dyson in 1960; the core idea is to enclose a star in a huge structure and capture almost all of its energy, the signature energy solution of a Type II civilization), harnessing the energy of the solar system or even the galaxy. A former Google employee once suggested: if we are heading toward AGI, Earth might end up covered with data centers; but if we put the data centers in space, Earth can be left for human life. Stay tuned; our next milestone is in 2027, when we hope to complete training in space.
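The orbital-energy case is easy to sanity-check with rough numbers: above the atmosphere a panel sees roughly the solar constant (~1361 W/m²) and, in the right orbit, near-continuous sunlight, while a good terrestrial site delivers perhaps a 20% capacity factor on a ~1000 W/m² peak. A back-of-the-envelope comparison, with all figures approximate:

```python
# Back-of-the-envelope: annual energy per square meter of solar panel,
# space vs. a good terrestrial site. All figures are rough approximations.
HOURS_PER_YEAR = 8760

SOLAR_CONSTANT = 1361          # W/m^2 above the atmosphere
GROUND_PEAK = 1000             # W/m^2 standard terrestrial test condition
GROUND_CAPACITY_FACTOR = 0.20  # night, clouds, atmosphere, sun angle

space_kwh = SOLAR_CONSTANT * HOURS_PER_YEAR / 1000   # near-continuous sun
ground_kwh = GROUND_PEAK * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"space : {space_kwh:,.0f} kWh per m^2 per year")
print(f"ground: {ground_kwh:,.0f} kWh per m^2 per year")
print(f"ratio : ~{space_kwh / ground_kwh:.0f}x")  # roughly 7x with these inputs
```

That rough factor, before accounting for orbital eclipses or power transmission, is the kind of arithmetic behind the moonshot.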

Reference Link:

https://www.youtube.com/watch?v=MkZRak7lVcA

Disclaimer: This article was compiled by AI Frontline and does not represent the platform's views. Reproduction without permission is prohibited.

