Yesterday, on the Big Technology Podcast, OpenAI President Greg Brockman personally revealed a major model the company has been researching for two years: the Spud large model.
This model, kept under wraps by OpenAI for two years, is reportedly the core direction the company is betting on, having concentrated its computing power there after recently shutting down Sora.
"Spud," which translates literally to "potato" in Chinese, reminds this editor of OpenAI's previous codename "Strawberry," which was later named the o1 series.
So, will Spud be the next o1?
Spud: OpenAI's Secret Weapon
While "Spud" sounds like a humble and unpretentious name, according to The Information media and Greg Brockman's own disclosure, Spud is a new pre-training base.
Greg Brockman stated explicitly: "I view Spud as a new foundation, a new pre-training... I would say this model probably aggregates two years of our research results."
Regarding the model's progress, OpenAI CEO Sam Altman announced in an internal memo that the new-generation AI model, codenamed "Spud," has completed pre-training. He described the model as "extremely powerful," said he expects tangible results within the coming weeks, and noted it could "significantly accelerate the global economy."
To pave the way for Spud, OpenAI recently shut down its video generation model Sora, freeing up valuable computing resources for Spud and other priority projects. OpenAI also plans to build a desktop-level "AI Super App" with Spud as its underlying architecture.
From the information gathered so far, Spud does not appear to be a minor upgrade.
The Smell of Large Models: Describing Functionality with Senses
After discussing Spud's origins, let's talk about its capabilities. According to Greg Brockman:
Unlocking New Capabilities: It can now do things it previously couldn't. Those frustrating moments when the AI "didn't quite get it" and needed multiple explanations are disappearing.
Longer Time Horizons: AI is no longer just good at short tasks lasting a few minutes; it can autonomously handle complex, open-ended, multi-step long-term problems.
New Pre-training Foundation: Both Greg Brockman and Sam Altman have repeatedly emphasized that Spud's goal is far more than just making chatbots more usable; it is a new foundation aimed at accelerating the entire economy.
But what truly piqued this editor's curiosity was a new term used by Greg Brockman: "Big Model Smell."
What is "Big Model Smell"? Why are sensory words being applied to large models?
Greg's original description was roughly: "There is something called 'Big Model Smell'... When these models are actually smarter and more capable, they align with you more, and you can feel it."
In layman's terms, this is not an objectively quantifiable metric but a subjective feeling:
The model feels smooth to use; it understands what the user means without repeated prompt adjustments.
Netizens: The First Model That Truly "Thinks"!
As a mysterious large model, many netizens have expressed their expectations for Spud:
"It is the first model that truly 'thinks'."
"The taste of large models"—I like this framework. When the model stops resisting and starts thinking with you, you feel it immediately."
However, some netizens are puzzled: Is Spud GPT-5.5 or GPT-6?
This editor believes Spud is more likely a GPT-5.5 or similar iterative version, serving as a stepping stone for OpenAI towards GPT-6.
What do the experts in the comments section think? Will Spud be GPT-5.5, or will it leap directly into the GPT-6 era? What surprises will the new feature "Big Model Smell" bring?
Welcome to share your thoughts in the comments section!