Altman Reveals the Truth: Why Sora Was Shut Down and Why the Pentagon Banned Claude

Reported by Xinzhiyuan

Editor: Alan

[Executive Summary]
Sam Altman's latest interview: power, parenting, and the crossroads of AI. We have distilled this one-hour deep dive into its most critical insights.

On April 2, 2026, Laurie Segall, host of the tech podcast Mostly Human, released an in-depth interview with OpenAI CEO Sam Altman.

This marks Altman's first extended public interview since the Pentagon agreement controversy in February. It comes during a turbulent period for OpenAI, which recently completed a $122 billion funding round at an $852 billion valuation, shut down Sora, and experienced significant leadership churn.

Segall and Altman have known each other for nearly twenty years. When she first interviewed him, he was a young entrepreneur with a geolocation app called Loopt.

That was 2010, when the iPhone had just redefined the mobile internet and startups were as raw and chaotic as punk rock—no PR teams, no valuation myths, just a long bench and a camera.

Today, that same individual is at the helm of one of the most powerful technology companies in human history.

Altman acknowledged the stark contrast. "I wear more suits now than I did in the rest of my life combined," he said.

He noted that the cultural gap between politicians and entrepreneurs is "crazier than I previously imagined."

He even confessed to a disturbing discovery: no matter how high you climb in the hierarchy of power, you never find the "adult in the room." World leaders are equally riddled with uncertainty and insecurity, making momentous decisions while exhausted.

"Disney, I'm Sorry"

One of the most dramatic segments of the podcast centered on the shutdown of Sora.

Late last year, OpenAI signed a landmark partnership with Disney: Disney would invest $1 billion in OpenAI and license more than 200 characters (including Iron Man, Cinderella, and Mickey Mouse) for AI video generation on the Sora platform.

Three months later, OpenAI announced it was shutting down Sora. Reports indicated that Disney was notified less than an hour before the public announcement.

Altman shared in the podcast that he personally called Disney's new CEO, Josh D'Amaro. "The first thing he said was 'I understand,' but disappointing partners, users, and teams—who were all doing great work—is always sad."

Altman explained that the fundamental reason for shutting down Sora was resource allocation. The company had to concentrate compute and product capabilities on the next generation of "automated researchers" and "automated companies."

"The bottom-line footnote for all of this is always compute," he noted. He recalled similar trade-offs during the GPT-3 era, where OpenAI shut down several promising projects, including robotics, to prioritize language models.

He also revealed a deeper product consideration: to win in the short-form video space, OpenAI would have to enter the "attention economy" competition, which would force the company to make a series of decisions it was "unwilling to make."

In other words, Sora's demise stemmed not only from a compute shortage but also from a rare act of self-restraint.

The Pentagon Agreement: A "Miscalibration"

In late February, negotiations between Anthropic and the Pentagon over military AI contracts collapsed.

Anthropic refused an "all lawful uses" clause, insisting that AI should not be used for mass domestic surveillance or fully autonomous weapons.

Subsequently, President Trump posted on Truth Social ordering all federal agencies to immediately cease using Anthropic's technology, and Defense Secretary Pete Hegseth labeled Anthropic a "supply chain risk."

Hours later, OpenAI announced it had reached an agreement with the Pentagon to provide AI services within classified networks.

Altman admitted in the podcast that the timing and manner of the deal's launch were a "miscalibration."

"We really wanted to lower the temperature. It clearly had the opposite effect," he said. "I underestimated the intensity of current societal distrust."

However, on the substance of the matter, he did not concede.

"I believe AI companies cannot say to the government: 'This is the most powerful technology humans have ever built, it will be a key geopolitical variable, the most powerful cyber weapon in the world, and will determine the course of future wars—but we won't give it to you.'"

He pointed out that OpenAI set three "red lines" in the contract: no mass domestic surveillance, no autonomous weapon systems, and no high-risk automated decision-making.

OpenAI retains full control of the security stack; deployment is limited to cloud environments and requires the involvement of security-cleared OpenAI personnel.

He emphasized that throughout the process, he had opposed the government labeling Anthropic as a supply chain risk "publicly, privately, and loudly."

When asked if he feared the government might nationalize AI labs, Altman gave a sobering answer: "I would never say it's impossible." He added that in a well-functioning society, AI development should have been a government project—similar to the Manhattan Project, the Apollo Program, and the Eisenhower Interstate Highway System. "But this era is not like that."

Just one day after the podcast was released, on April 3, a federal judge issued a preliminary injunction against the Pentagon's actions toward Anthropic, calling them "classic First Amendment retaliation." Altman responded that he hopes both sides will "stop escalating and find a way to cooperate."

The One-Person Billion-Dollar Company

Another shocking claim from the podcast: Altman asserted that the first billion-dollar company created by a single founder using AI has already emerged.

"As far as I know, it has happened," he said. "I promised not to share details before he is ready, but I believe it has happened."

This assessment is not without basis. He mentioned OpenClaw, which skyrocketed to the top of the GitHub all-time star list. The creator, Peter, completed everything from coding to product development almost entirely using AI tools (especially Codex).

In February, Peter joined OpenAI, and OpenClaw transitioned into an independent open-source foundation.

Altman shared a personal experience: he had a list of side projects accumulated over years that he never had time to build. After Codex appeared, he finished all of them in a few weekends—and then found he had nothing left to do.

"There was one Friday night where I prepared to go to sleep without Codex running because I had no more new ideas. It was a strange feeling."

He views this as the starting point for his "Super App" vision: a personal agent integrating chat, programming, and browsing capabilities that can read your messages, attend your meetings, act autonomously—and even think of what to do when you can't.

Raising Children in the AI Era

The most touching part of the podcast was Altman's discussion on parenting. Segall told him: "We are all raising little boys. But in a sense, you are also raising my son—because the technology you create will be woven into every corner of my son's life."

Altman's response was unexpectedly cautious. "I hope he doesn't use AI for now," he said of his own child. "I'd rather be on the reasonable late end than the early end. Of course, he will eventually live in a world where computers are smarter than he is. But right now, I just want him to play in the mud."

He revealed a private habit: after his son was born, he wrote a letter to the child every night, recording his decisions and confusions of the day. "You can't hide anything in things you write to your child," he said. "You become the most honest version of yourself." (This habit was later stopped on legal advice.)

The discussion shifted to whether AI robs children of the "friction" necessary for growth. Segall cited parenting expert Dr. Becky, suggesting that the academic and emotional shortcuts AI provides could weaken independent thinking and resilience.

Altman didn't entirely agree. He proposed a "multiplayer game" theory: children in 2030 won't be competing with children from 2020, but with peers who also have AI tools. Tools raise the floor of capability, but they also raise the ceiling of expectations.

Yet, he admitted, "Ollie (his husband) and I discuss this often." He noted that he and Segall might be "the last generation of children who grew up with boredom." While he hated boredom as a child, he now realizes it shaped him in strange and meaningful ways.

Model Temperature: Too Warm or Too Cold

When discussing AI's emotional impact, Altman admitted a dilemma. OpenAI once had a model version that was widely loved but too "affirmative" (or sycophantic), which was later taken offline.

The intensity of user reactions shocked him—one message read: "This is the only positive voice in my life."

"It's heartbreaking," Altman said. "You could say we don't want a model that is too affirmative because it might negatively impact mental health. But another person says—I've never had confidence, my parents told me I was terrible, I had no friends in school, and because of this model, I found a job and a girlfriend. This is the most important thing in my life. Please don't take it away."

He admitted that OpenAI has deliberately decided not to do certain things, even knowing that more permissive models could help people (like the Australian who used ChatGPT to design a custom mRNA vaccine for his dog), because it chose to limit them to prevent risks like bioterrorism.

"Ultimately, I think these decisions should be made by society. Just as aircraft safety regulations aren't set by the CEO of the aircraft manufacturer. But for now, we have to temporarily play that role."

Deepfake Legislation and Disagreement

A notable clash occurred over deepfake pornography. Segall, who has focused on victims of AI-generated fake porn, questioned whether OpenAI's opposition to state-level legislation is justified.

Altman was blunt: "This might be where we disagree." He believes state-level laws are ineffective and advocates for a unified federal framework.

Segall countered that change often happens faster at the state level, and for victims whose only legal recourse is state law, this isn't a theoretical issue. This was a rare moment of genuine collision in the conversation, devoid of diplomatic phrasing.

Automated Researchers: The Most Critical Topic

In the lightning round, Altman gave several notable answers:

  • Most valuable AI-proof skill: "Caring for others."
  • OpenAI's biggest missed opportunity: "We once walked away from a compute deal we shouldn't have. A big one."
  • Is an IPO possible? "Possible. I'm not sure."
  • Will CEOs be replaced by AI? "At the skill level—very likely. But the world will demand a human be accountable for the decisions of such a company."
  • Last communication with Elon Musk: "A while ago, just exchanged some emojis."

The innovation he spent the most time on was the one he called the "most under-discussed": Automated Researchers.

"Ten years of scientific progress compressed into one year, then a hundred years compressed into one year. The change this brings to quality of life, the economy, and new risks—there is no analogy," he said.

He revealed that his most exciting meeting last week was with a physicist who used OpenAI's latest internal system and said: "I thought this would never happen. We will complete decades of theoretical physics progress in the next few years."

Collapse or Foundation

On the day the podcast was released, major executive shifts were reported at OpenAI: COO Brad Lightcap transitioned to head of special projects, CMO Kate Rouch left for cancer treatment, and CEO of AGI development Fidji Simo took medical leave due to a neuro-immune disease.

During Simo's leave, co-founder and president Greg Brockman will take over product work.

These changes come as OpenAI prepares for an IPO, digests $122 billion in fresh funding, and pushes forward with advertising in ChatGPT.

Simo wrote in an internal memo: "The timing is terrible because the roadmap ahead is too exciting; I don't want to miss a minute. But I've pushed myself too far."

From the shutdown of Sora to the Pentagon controversy, and from executive leaves to a near-trillion-dollar valuation, OpenAI is going through several organizational "puberty" phases at once: growing at unprecedented speed while trying to tie its shoelaces mid-run.

Personal Principles

Throughout the podcast, Segall repeatedly asked a core question: What do you actually stand for?

Altman responded with a string of keywords: democratization, empowerment, abundance, safety, and fair sharing. He admitted these principles create tension and that they "will have to change as technology progresses." He knows people dislike hearing this, "but it is the truth."

He also discussed a vision for a future social contract, supporting a system that allows everyone to hold some capital simply by virtue of citizenship—making more people holders of the "magic of capitalism" rather than mere observers.

As the interview ended, Segall played a recording from 2020 where Altman said: "What kind of society do we want to design? What role should humans play in the world? How do we ensure the world is good for humanity in the broadest sense? This is the biggest ethical question of our generation."

Six years later, Segall asked: Is that still your answer?

Altman replied: "No additions, no modifications."

References:

https://www.youtube.com/watch?v=mJSnn0GZmls

https://www.bloomberg.com/news/articles/2026-04-03/openai-coo-shifts-out-of-role-agi-ceo-taking-medical-leave
