Achieving AGI Requires Passing Two Major Thresholds | Sam Altman's Latest Dialogue Transcript


On March 12, OpenAI CEO Sam Altman held a roundtable discussion at the BlackRock US Infrastructure Summit in Washington, D.C. The dialogue covered the turning point at which AI begins generating significant economic utility, quantitative thresholds for achieving AGI and the 2028 singularity prediction, the evolution of computing power into a utility, construction details of the hyperscale Stargate infrastructure project, the strategy behind OpenAI's self-developed inference chips, and the current state of global AI competition.

Sam Altman pointed out that models have already crossed the threshold of generating significant economic utility, shifting from a capability overhang to an explosion of real work output. He proposed two clear quantitative thresholds for AGI: first, by the end of 2028, the cognitive capacity housed inside data centers could exceed the combined cognitive capacity of the humans outside them; second, the moment when decision-makers of large organizations (such as major nations or multinational corporations) become unable to do their work without heavy reliance on AI. He believes the future management paradigm will undergo a complete transformation, with humanity's role shifting from directly handling tasks to overseeing a team of AI agents that operate around the clock and possess full situational awareness.

Sam Altman believes that future core competitiveness will no longer lie in algorithmic secrets, but in large-scale industrial integration capability, deep utilization of training data, and the total volume of infrastructure. He stresses that intelligence must become as cheap as electricity, ‘too cheap to meter’, and OpenAI is building gigawatt-scale super data centers through the Stargate project while developing low-cost, application-specific inference chips for Agents, aiming to achieve a 1,000-fold cost reduction through joint optimization of algorithms and engineering. He also predicts that we may see a productivity explosion and continual rises in quality of life even as GDP keeps falling, a long-term deflationary phenomenon.

Regarding the global AI competitive landscape, he argues that the United States’ current weakest point is its reliance on global supply chains and the lag in economic adoption of AI. He warns that if the U.S. cannot rapidly diffuse AI across its economy, it will lose its advantage.

01 Where Do We Stand in Today’s AI World?

Where do we stand in the current AI evolution? Have enterprises truly understood AI and reimagined their businesses around it?

Sam Altman: At some point in the past few months we indeed crossed a threshold, entering the phase where models generate significant economic utility. Perhaps this shift occurred earlier, but before we truly figured out how to use these models, the industry sat on a huge capability overhang. We need not only to keep raising the models' intelligence, but also to wire up the underlying interface infrastructure so they become simple and easy to use.

Now, the work capability demonstrated by these models has astonished everyone. This change is most evident in programming, but it is also occurring at astonishing speed in scientific research and many knowledge‑work domains. People often exclaim, ‘Goodness, I thought this would take several more years, yet it’s already here.’ My own work has shifted as well—from directly handling technical or legal matters to managing a team of AI agents that carry out those tasks.

This is merely the beginning; we are currently on the steepest part of the growth curve. At present you might trust an AI software engineer to finish a task of a few hours, but soon that horizon will stretch to days, even weeks. Shortly thereafter the paradigm will shift again. AI will deeply penetrate your life and company, proactively thinking, operating around the clock, and mastering all the contextual information it needs, just as you would trust a senior employee to handle affairs.

(On corporate enablement) Some companies get it, others don’t. It is certain that the mindset of the new generation of start‑ups is completely different from before. In the past, start‑ups came to us asking how many employees they needed to hire; now they usually don’t want to hire many people, believing that would slow them down. Their current focus is on how much compute they can obtain—asking whether they can reserve capacity, sign cloud‑service agreements, or acquire a certain number of Tokens. This shift in mindset is slower in large corporations, but some enterprises have already begun to act. A clear signal is that some engineering and product teams are discussing increasing this year’s output by two‑ to three‑fold, something that would have been unimaginable before.

02 Redefining AGI and the 2028 Singularity

When will AGI arrive? And as CEO, how much do you depend on AI and Agents in your daily work?

At this point, the definition of AGI is crucial. Some think we have already achieved it, some believe it is just around the corner, others say it may still be a year away. In short, the term AGI is rapidly losing its specific practical meaning. Two thresholds are interesting. First, when will the cognitive capacity housed inside data centers exceed the combined cognitive capacity of the humans outside them? Although the margin of error is large and I could be mistaken, this could happen by the end of 2028. That would be an extraordinary global transformation.

The second threshold is when the CEO of a large corporation, the head of a major nation, or a Nobel laureate will find themselves unable to conduct their work without heavily relying on AI. This does not imply that AI will become CEO or president; human roles remain irreplaceable, requiring people to bear decision‑making responsibility, exercise human judgment, and provide the various understandings needed to run important organizations. However, in my role, a substantial portion of my actual work will inevitably depend on AI, because no manager can personally converse with every employee and client, attend every meeting, and be an expert in every field. Consequently, future management will increasingly involve supervising a group of AI, providing direction, and deciding how to trust their outputs. When you realize that running a large organization without heavy reliance on AI is essentially impossible, that marks another interesting threshold. Its arrival may take some time, but it will not be long.

(On personal work dependence) This dependence is rising extremely fast. When I have a new idea about a business model, strategic shift, or product plan, the first thing I do before discussing it with anyone is to query our tools. As they gain more context, this will become the next major breakthrough. When AI can approximate the full context of the company—including internal documents, communication logs, code, customer data, etc.—the quality of its answers and depth of its reasoning will keep improving.

03 Intelligence Will Become as Cheap as Electricity: Too Cheap to Meter

OpenAI has completed an unprecedented $110 billion financing round and brought in multiple strategic partners. How will these massive funds be used? How do you view the relationship between compute and revenue?

This field presents many difficulties, but the hardest part is that infrastructure is simply too expensive. You need not only massive investment but also long‑term forecasting and commitments far in advance. I have never seen an industry like this. History does have many capital‑intensive sectors, yet looking ahead a few years, if the growth curve stays as it is and demand keeps surging, we must resort to some unconventional measures.

Many of OpenAI's actions look strange to outsiders. We pour huge sums into infrastructure before generating any revenue, and we experiment with new business models such as advertising, even when they don't appear to be the most profitable choices. We do this because we firmly believe intelligence will enter an age of abundance. One of the most important goals is to make intelligence as cheap as the old energy-industry slogan promised: ‘too cheap to meter.’ We want intelligence to permeate the world so that anyone can use it in any scenario. We hope this becomes something future generations take for granted, assuming intelligence is everywhere and that everyone can access enough brilliant assistants in any field. This core principle guides many of our seemingly unconventional actions. The most crucial point is that we must escape the capacity-constrained trap; otherwise we will remain stuck there.

(On compute and revenue) Fundamentally, our business, and that of all model providers, is essentially selling Tokens. These Tokens may come from models of different sizes, at varying prices, involve different levels of reasoning, and incur different costs. Some AI may run continuously in the background serving you, while others operate only when called on demand. For extremely valuable problems, we might even devote tens of millions, hundreds of millions, or even billions of dollars of compute to solving them.
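
(Illustrative sketch) A minimal model of the token-based billing described above; the tier names and per-token rates below are hypothetical placeholders, not OpenAI's actual pricing:

```python
# Sketch of token-based billing: tokens in, tokens out, priced per model tier.
# Tier names and rates are hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass

@dataclass
class ModelRate:
    input_per_1m: float   # USD per 1M input (prompt) tokens
    output_per_1m: float  # USD per 1M output (completion) tokens

# Hypothetical tiers: a small, cheap model vs. a large reasoning model.
RATES = {
    "small-fast": ModelRate(input_per_1m=0.15, output_per_1m=0.60),
    "large-reasoning": ModelRate(input_per_1m=5.00, output_per_1m=20.00),
}

def token_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request under the hypothetical rates above."""
    r = RATES[model]
    return (input_tokens * r.input_per_1m
            + output_tokens * r.output_per_1m) / 1_000_000

# A background agent running all day consumes far more tokens than a
# one-off query, which is why capacity planning matters so much.
print(token_cost("small-fast", 2_000, 500))             # brief on-demand query
print(token_cost("large-reasoning", 500_000, 100_000))  # long agent session
```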

We anticipate that intelligence will become a utility like electricity or tap water, with users paying for what they consume. This demand is growing exponentially. If our compute falls short, we either cannot supply—causing prices to spike and turning intelligence into a privilege of the rich—or society must make complex allocation decisions, such as allocating limited compute to one project instead of another. Looking over the history of innovation, the best solution has always been to supply the market with ample product.

04 Stargate Progress

As a massive infrastructure initiative, what is the Stargate project's current progress in the United States and globally? What have been the biggest surprises and challenges during construction and expansion?

To be honest, this is one of the coolest parts of my job—visiting those super data centers under construction and operation. The scale of the gigawatt‑level campuses is hard to put into words. You might look at photos and think they’re big, but only when you’re on the ground, moving between buildings, seeing ten thousand technicians of various trades busy at work, and observing an interior that resembles a spaceship, do you truly feel awestruck.

We are currently training the next-generation model at the first site in Abilene, hoping to pull far ahead of the rest of the field. I visited the site many times during construction, and once you truly internalize that scale and complexity, the moment when an OpenAI researcher types a command, hits enter, and tens of thousands of GPUs fire up at once to begin a single enormous computation feels fantastic.


(Construction challenges) The expected challenges have come one after another, along with unknowns. For example, Abilene once experienced an extreme weather event far beyond any contingency plan, halting the project for a while. There are also supply-chain challenges. At such a massive scale, any link can fail. While progress is often smooth, building such a complex system around the clock inevitably brings surprises. The most delightful aspect is that, in a very short time, so many different organizations have been able to work together tightly under pressure, ultimately fighting as a single team.

05 Model‑Efficiency Revolution Amid an Energy Crisis

Given the global concern over electricity shortages, are you optimistic that humanity can solve this problem, and do you expect technological breakthroughs in model efficiency to alleviate energy pressure?

In the long run I am very optimistic. I have no doubt that humanity can find ways to generate electricity at scale, and AI itself will help. We have a mix of natural gas, solar, nuclear fission, nuclear fusion, and other technologies; I am confident about future electricity supply capacity.

Given how rapidly demand is growing, I actually also hope to see a miracle in model efficiency: a dramatic boost in the model's performance per watt that buys time for infrastructure construction. Indeed, our record in this area is astonishing. For example, our first reasoning model, o1, was created about 16 months ago, and we have now integrated its reasoning capability into the latest model. To solve the same problem, the cost has dropped a full 1,000-fold from that initial version to today's. Achieving a thousand-fold cost reduction in such a short span is simply unbelievable.
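
(Worked example) Taking the figures above at face value, a 1,000-fold cost drop over roughly 16 months implies a steady compounding rate of improvement:

$$ r^{16} = 1000 \;\Rightarrow\; r = 1000^{1/16} \approx 1.54, \qquad t_{1/2} = \frac{\ln 2}{\ln r} \approx 1.6 \text{ months} $$

That is, the cost of solving a fixed problem fell by a factor of about 1.54 each month on average, halving roughly every seven weeks.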

This indicates two things: first, we are still early in this paradigm, with huge room for improvement in model development, training, and efficient operation. Some of our past and present practices are actually rather clumsy; future ones will be smarter. Second, human creativity under constraints continually brings surprises—not just in the model itself, but also from the joint efforts of kernel engineers, power engineers, and data‑center designers.

06 Developing High-Efficiency Inference Chips for Agent Workloads in an Energy-Constrained Future

OpenAI was previously a major customer of the chip giants and had contracts with cloud providers, yet it now chooses to develop its own chips. What is the rationale? Could you explain the difference between inference and training chips, and share the current R&D progress?

The chip we are developing is inference‑only. The logic is that, to tackle future challenges of all kinds, we need a chip suited to specific scenarios. It need not be the fastest, but it must have the lowest cost and the highest performance per watt—crucial for meeting the massive future demand for AI Agents. It is a bet backed by clear judgment. Although it is a functionally limited chip, in an energy‑constrained world its role will be pivotal.

(On the difference between inference and training) AI workloads mainly split into two stages. First is the training phase, where massive numbers of GPUs compute over huge data sets for weeks or months. You can liken it to human education: a person might need 22 years, from infancy learning how the world works through university study, to deeply master physics. After training, if you ask the model to solve a physics problem, that process is called inference. Inference is highly efficient, just as it is for humans. When people marvel at AI model efficiency, they usually compare humanity's 22-year training process with an adult solving a problem in one second. If we only compare the problem-solving step, today's models may already exceed human energy efficiency. However, training does require enormous compute, and its output is essentially a file storing vast numbers of parameters that you can query to get a response.
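
(Illustrative sketch) A toy NumPy version of the two stages, assuming nothing about OpenAI's actual systems; real training runs on tens of thousands of GPUs for months, but the shape of the work is the same: an expensive optimization loop whose output is a file of numbers, then cheap per-query inference.

```python
# Toy illustration of training vs. inference with a tiny linear model.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "world data": inputs X and targets y = X @ w_true + noise.
w_true = rng.normal(size=3)
X = rng.normal(size=(10_000, 3))
y = X @ w_true + 0.01 * rng.normal(size=10_000)

# --- Training: many passes over lots of data (the expensive phase) ---
w = np.zeros(3)
lr = 0.1
for step in range(500):                   # repeated gradient updates
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= lr * grad

# The result of training is just an array of numbers ("a file of
# weights") that can be saved and queried later.
np.save("weights.npy", w)

# --- Inference: one cheap forward pass per query ---
w_loaded = np.load("weights.npy")
query = rng.normal(size=3)
answer = query @ w_loaded                 # a single dot product
print(answer)
```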

(On chip progress) Yes. The first chip samples should come back in a few months. If everything goes well, we expect large-scale deployment by the end of this year. Current progress looks very promising.

07 Why Skilled Tradespeople Are Key to the U.S. AI Infrastructure Build

You just announced a new partnership with the North American Building Trades Unions to expand skill‑training pathways in construction. What are the specifics of the agreement and the action plan?

We have discussed this before, and the need for AI infrastructure has been talked about again and again. The world needs tangible physical infrastructure: power plants, transmission lines, data-center halls, cooling equipment, and of course the racks, GPUs, and all the components that go inside them. I hope everyone can at some point visit these hyperscale data centers, because they are indeed highly complex. When you ask ChatGPT a question and receive an answer, it's hard to intuitively sense the enormous scale required to make that happen.

People often say we are limited in many areas: sometimes by the number of turbines, sometimes by transformers, by semiconductor fabs, or even by the pace of data-center construction. All of these share a common trait: they are immensely complex physical infrastructures that require large numbers of skilled tradespeople to build. Whenever supply-chain bottlenecks appear, whenever I discuss with someone how to accelerate the process, the answer is always that we need more skilled tradespeople to construct the infrastructure on which we all depend. I see excellent prospects for these roles. More importantly, these jobs will lay the foundation for the next generation of U.S. infrastructure and economic prosperity, and we are delighted to work together to push this forward.

08 The Core of Global AI Competition Will Return to Industrial Capacity and Infrastructure Volume

In the global technology competition, what is the current situation for the United States, and how should it act to maintain and extend its lead?

I'll first share a broad thinking framework, then answer specifically. I believe the discovery of deep learning is closer to discovering an element or a fundamental property of physics than to mastering a secret technology. This means that eventually, and perhaps soon, the core idea behind building powerful models will be simplified and widely known. Just as we understand the basic laws of physics, we will come to grasp AI's core operation from first principles. We felt this deeply with the scaling laws OpenAI published roughly seven years ago. The resources poured into a model and the intelligence it ultimately exhibits are linked by an extraordinarily precise and elegant correlation; at the time, this principle-level necessity felt a bit chilling, but it was perfectly clear. Although we have since uncovered many details and will discover more, just like other scientific frontiers, it will become simpler and clearer over time. Ultimately, this formulation will be understood as a scientific principle rather than a long-held trade secret.
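
(For reference) The neural scaling laws published by OpenAI researchers (Kaplan et al., 2020) give one concrete form of the correlation described here: a power law relating test loss $L$ to training compute $C$,

$$ L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, $$

with $C_c$ and $\alpha_C$ fitted empirically (the compute exponent is on the order of 0.05 in that paper). On a log-log plot this is a straight line: multiply the compute, and the loss falls by a predictable factor.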

(On the competitive situation) As of now, the United States leads on the world's most advanced frontier models. In running inference on cheaper, older-generation models, other leading economies are strong. In infrastructure, the U.S. is currently ahead, but other regions are catching up very quickly. In industrialization and productization, the U.S. leads in closed-source while other competitors excel in open-source. On balance, I think the United States can retain its lead.

09 India’s AI Market

You recently visited India and sounded excited about how India is tackling AI challenges and opportunities. How does that differ from your experience when interacting with customers in the United States?

After talking with Indian start-ups, I was deeply impressed. They apply this technology exceptionally well; when I first arrived, a briefing showed that Codex usage in India had grown tenfold in just a few months. I initially thought the data must be wrong, but it is real.

In conversations with these start‑ups, I noticed they exhibit even stronger momentum than their U.S. counterparts. Some will say the world has changed, and while everyone talks about one‑person start‑ups, they are attempting to build a “zero‑person start‑up.” They aim to drive the entire company with a single prompt, letting AI write software, handle customer support and legal matters, then go on vacation.

Large Indian companies display astonishing ambition and speed. They ask, ‘How much capacity can we buy? For how long can we reserve it? Can we negotiate right now?’ Before reaching an agreement, they often don't even want you to leave the room. That steadfast belief that AI will reshape India's business landscape is truly impressive. The overall trend is the same, but they seem to go further and move faster.

10 Three Major Bottlenecks in the U.S. AI Race

Returning to the global AI race, where do you see the United States’ weakest link, and what must we do to secure our lead?

Three points come to mind. First is global supply-chain dependence and the build-out of U.S. infrastructure; although this is old news, I must stress that it genuinely worries me. If our infrastructure build lags and we cannot catch up, or if globalization collapses in some way that prevents us from independently building AI infrastructure, that would be a huge vulnerability. Regarding our current global standing, I'm not dissatisfied, but I'm not satisfied either.

Second is the speed of economic adoption. If we do not diffuse AI across the economy as quickly as other nations, we will lose the inherent advantage of being an economic powerhouse. This concerns how fast enterprises, scientists, and even the government adopt the technology. On the bright side, this is a rare opportunity in many decades to rejuvenate the economy. If handled well, it will not be a weakness but could become our greatest competitive edge.

(On current resistance) As of now, it's unclear whether we are on the ideal trajectory; I don't deny the progress so far, but I feel we could move faster. There is also considerable pushback: AI's reputation in the U.S. is not stellar, data centers are blamed for raising electricity prices, nearly every company announcing layoffs blames AI regardless of the truth, and the tug-of-war between government and corporations continues.

Third is the global diffusion of the technology. Will the future AI stack (chips, models, applications, and so on) be built primarily on the U.S. system, or will some of our policies push it toward the opposing camp?

11 If AI Becomes the Cornerstone of a Productivity Explosion, How Would You Measure It?

If AI becomes the foundation of a productivity explosion, how would you measure that productivity? You previously said the measurement might need to change—please explain the logic.

That’s right, certainly—but I also think the way we measure productivity will have to change. I envision a world where productivity explodes, quality of life keeps rising, and the vast majority of things we care about keep improving, yet by today’s metrics GDP would keep falling. This resembles a long‑term deflationary process. I don’t know what life feels like in a permanently deflationary world, nor do I know how the link between GDP and quality of life would shift in a world where intellectual capital is concentrated more in data centers than outside. But perhaps we will soon find answers. In the coming years, there will surely be extensive debate over what the proper metric should be.
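
(Worked example) A toy calculation with hypothetical numbers, not from the dialogue, shows how this can happen. A service's measured contribution to GDP is price times quantity; if AI cuts the price 100-fold while consumption rises 10-fold, measured spending still shrinks:

$$ p' q' = \frac{p}{100} \cdot 10\,q = \frac{p\,q}{10} $$

Spending on the service falls by 90% even though people consume ten times more of it, which is precisely the pattern of rising welfare alongside falling measured GDP.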

I feel this rethinking is already underway. If a simple consensus answer existed, we would have settled on it by now; at present nobody knows exactly what to do. The norms our society has relied on for survival seem to be coming under question all at once. A few weeks ago I saw a sentence online that struck me deeply: for hundreds, even thousands of years we have been learning how to build societies that manage scarcity, and now that we need to shift rapidly to managing abundance, past experience is almost useless.


