OpenAI Codex Product Lead: Programming as We Know It Is Over


The biggest bottleneck to AGI is not compute or model architecture but an unexpected human weakness: our typing speed, laziness, and lack of imagination. Traditional IDEs will be marginalized, and reviewing AI plans and automating code review will become the new core skills. SaaS is not dead, but user relationships and systems of record are the new moat.

Latest interview with OpenAI Codex Product Lead Alexander Embiricos

A fundamental shift has occurred in OpenAI's internal development process. Embiricos reveals that OpenAI engineers now barely open editors anymore, with the workflow having moved entirely from AI pair programming to task delegation. He also discloses for the first time a seemingly contradictory business strategy: by establishing open standards like agents.md and providing its most advanced models to direct competitors, OpenAI is playing a larger game about the distribution of intelligence.


Here are some points from the interview you might find interesting:

1. Will AI Automate Programming? An Answer About the Etymology of 'Computer'

When asked whether he agrees with Musk's assertion that 'programming will be one of the first professions to be automated at scale', Embiricos gave an affirmative answer but with a historical footnote.

"I completely agree that programming is one of the first areas where large language models excel, but what does it mean for programming to be automated?"

To explain his point, he invited listeners into a thought experiment about the early evolution of programming languages. "When we moved from writing assembly to higher-level languages, did we say programming had been automated? No," he pointed out. "We were simply able to write much more code, and as a result the demand for code actually surged, requiring more software engineers."

He further traced the origin of the word "computer". At Bletchley Park, where the German Enigma cipher was broken, the original "computers" were humans doing calculation work: manually punching holes, operating machines, and working through tables. Similarly, the earliest spreadsheet software was inspired by an office full of employees arranged like a grid, each person completing their own calculation and passing the result to the next.


"All of those specific tasks have been automated," Embiricos summarized. "But every time such automation occurs, demand for the final output grows explosively. As a result, you actually need more people doing this kind of work, even though the specific tasks have changed."

Based on this, he made a counterintuitive prediction: 'I think we will have more engineers in five years, not fewer.'

2. Compression of the Talent Stack: The Future of Engineers, Designers, and (Maybe Unnecessary) PMs

Embiricos believes that future changes are not only about the increase or decrease in the number of engineers, but more about the fusion and definition of work roles.

"We sometimes change the meaning of words, just like the word 'Computer' now refers to a machine," he said. "Now we have the term software engineer, and I firmly believe there will be more 'builders' in the future."

A core trend he observed is the 'compression of the talent stack'. 'In the past, you had many clearly divided roles, such as back-end engineers and front-end engineers. But now, at least in our Codex team, this is much less common, and everyone's work has become more full-stack.'

This compression means future engineers may be "super individuals" combining design, product thinking, and coding ability. "When you say engineer, you might be thinking of a more versatile person than ever before," Embiricos explained.

As for his own role as a PM, he half-jokingly expressed a pessimistic view. "I tend to think the PM role is intentionally undefined; your goal is to adapt to whatever the team or business needs," he admitted. "A great product manager can help the team take a step back, foresee future risks, and be the team's best cheerleader and quality gatekeeper. But everything I just described, a strong engineering leader or a product-savvy designer can do just as well."

3. The Real Bottleneck to AGI: Human Laziness and Imagination

The conversation turned to a grander topic: the bottleneck of AGI. Embiricos offered a provocative claim: "Human typing speed and validation work are the key bottlenecks to AGI, not model compute or architecture."

To prove this, he conducted a simple live interaction. 'How many times do you use AI today?' he asked. When given an answer of over 30 times, he immediately followed up: 'Assuming it costs you nothing, how many times do you think AI could help you per day?'

The answer is obvious: almost infinitely many times, AI can run on everything around the clock. 'I hear engineers inside and outside OpenAI saying, "I always have Codex running; if it's not working during a meeting, I'm wasting time."' Embiricos shared. 'This is very cool, but it also requires a lot of effort to manage these agents.'

This is the core of the problem. 'Even myself, I know I should use AI for everything, but I'm too lazy to input so many prompts, and my creativity is limited; I can't think of all the ways AI could help me.' he said. 'I think AI should help us thousands of times a day, but we shouldn't expect most people to have to spend so much effort learning how to use this tool to benefit from AGI. It should be effortless.'

He believes that in the ideal future, using AI will no longer require racking your brain to come up with perfect prompts; AI should proactively connect to your context and provide help when appropriate.

4. Individual Empowerment vs Enterprise Automation

Embiricos emphasized that the best path to achieving AI for all is to build tools for individuals, not top-down enterprise automation. This view immediately sparked discussion about the practical issues of enterprise implementation, especially data security, permission management, and reliance on on-site implementation engineers.

"If you try to achieve it in one step, implementing some grand workflow automation system, then you inevitably have to deal with all these security and compliance obstacles." he acknowledged. "You need on-site implementation engineers to connect all data systems. But my observation is that when we do these things top-down, we ultimately greatly underestimate the potential of AI to help this company."

He advocated for a dual-track strategy. "You can carry out enterprise-level deployment while also putting AI into the hands of those who are actually working on the front lines." He illustrated with an example: "Imagine you are a customer service agent, and AI is automating a significant part of your work, but you've never heard of ChatGPT and are not allowed to use it. In this case, you have no intuition about this thing. Conversely, if while using the company's automation tool, you personally have been using ChatGPT at work, you will have a deeper understanding of how it works, you will feel more in control, better able to guide the direction of automation, rather than feeling it is like a completely uncontrollable alien object."

He further pointed out that all enterprise software will eventually converge on the individual user's interface. "Regardless, everything will ultimately fall to an interface where an agent running on your local computer can interact, such as your browser or file system." He then revealed a key piece of information: "This is why OpenAI is building our own browser, 'Atlas'. Through end-to-end strict control, we can provide a secure, agent-based browsing experience for enterprises, which is a way to access various systems in an agent manner before on-site implementation engineers have completed full deployment."

5. Three Stages of Agent Development

So, how to cross this human bottleneck? Embiricos proposed a three-stage development path.

Stage 1: Mastery of Software Engineering. "First, let agents excel in software engineering and coding, as LLMs are particularly good at this." This is the stage currently underway, laying a solid foundation for agents.

Stage 2: Empowering Explorers. "Next, we recognize that to make an agent more general, using the computer is its core capability, and coding is the best way for an agent to use a computer," he explained. This means opening the flexible tools built for engineers to any non-coder willing to explore and tinker. "We have already seen people use the Codex app to handle all kinds of non-coding tasks." The point of this stage is not to over-package the product for specific workflows, but to provide an open tool and let early users creatively discover its uses.

Stage 3: Seamless Productization. "Finally, once we see which uses are effective, we can build the highly productized experience you mentioned." In this stage, AI capabilities will be packaged into products that are ready-to-use for specific scenarios, benefiting ordinary users without any learning cost.

"I believe we will complete the entire journey of stages 1, 2, and 3 at an extremely fast pace in the coming months." he predicted.

6. How GPT-5.2 Codex Changed Everything for OpenAI

Embiricos detailed a step-change transformation in OpenAI's internal workflow, catalyzed by the release of GPT-5.2 Codex.

"Before GPT-5.2 Codex, the AI programming features we used were mainly like Tab autocomplete or 'pair programming' with the model." he recalled. "In that mode, you still need to sit in front of the computer, hands on the keyboard, and AI just helps you with some small tasks."

However, starting with GPT-5.2 Codex last December, everything changed. "We suddenly switched to a completely new mode: I can delegate the entire task to it. I work out a plan with it, confirm the specification it will execute, and then I let it do its thing."

This shift in how people work is disruptive. "This is a completely different way of working. So we built the Codex app, released last week, to create a user experience that is more ergonomic for task delegation."

He provided some internal data: "Most people I know basically no longer open editors. The vast majority of code is written by AI. Humans now might just define interfaces between modules or work out plans together with the AI, but the code itself is no longer written by humans."

7. The End of the IDE?

Since engineers no longer write code by hand, the position of traditional integrated development environments (IDEs) has also become precarious.

"If by IDE you mean a very powerful editor, then we specifically did not build editing functionality into the Codex app because we want its usage to be very clear." Embiricos explained. The new work focus is no longer editing code, but managing and reviewing.

He emphasized the following two skills that are becoming increasingly important:

Plan Review: "In delegation mode, the specification or plan of what you want to do becomes more important than ever," he explained. Codex now has a prominent 'Plan Mode', in which the agent first proposes a detailed execution plan and asks the user questions to confirm key decisions, just like a new employee submitting a design document to the team before starting work.

Automated Code Review: To address the problem of AI producing large volumes of low-quality code (so-called 'AI slop'), OpenAI adopted an internal convention: have Codex review its own generated code. "Codex does this very well; we specifically trained the model to be good at code review, and it provides feedback with an extremely high signal-to-noise ratio." Today, at OpenAI, almost all code is automatically reviewed by Codex upon submission.
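A "review on submission" convention like the one described can be approximated in any repository with a CI hook. The sketch below is a hypothetical GitHub Actions workflow; the `codex exec` invocation and the prompt wording are assumptions for illustration, not a documented OpenAI recipe:

```yaml
# Hypothetical CI sketch: ask a coding agent to review every pull request.
# The `codex exec` call and prompt are illustrative assumptions.
name: agent-code-review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the agent can diff against the base branch
      - name: Agent review of the diff
        run: |
          codex exec "Review the diff between origin/${{ github.base_ref }} and HEAD. \
          Flag bugs, security issues, and low-quality 'slop'; keep feedback high signal-to-noise." \
          > review.md
      - uses: actions/upload-artifact@v4
        with:
          name: agent-review
          path: review.md
```

The point of the sketch is the shape of the loop, not the exact command: the agent reads the change set, writes structured feedback, and that feedback gates or annotates the submission.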

"An interesting phenomenon," he added, "is that people sometimes have Codex review code generated by another model, to see how strong our model is. After seeing the result, they usually conclude, 'Well, I should just use Codex to write the code too.'" That was a dig at Claude Code.

8. Building an Open Moat: The Paradox of agents.md and Open Competition

In an era where AI tools are emerging endlessly and users can easily switch, how to build a 'moat' for the product? Embiricos admitted that OpenAI has adopted a counterintuitive strategy: actively embracing openness.

He explained that current coding tasks are mostly closed or snippet-based, meaning users can easily switch between different tools. However, the real stickiness will come from the future. "When agents start interacting with other systems like Sentry, Google Docs, etc., they will become stickier. Because having an enterprise trust and authorize an agent to connect to these systems is a sticky decision."

Nonetheless, OpenAI's core strategy remains open. "Our Codex core framework is open source, and we continuously work to make it easier for users to switch to other tools." He gave an example: OpenAI created a file convention called agents.md for storing instructions for agents. "We didn't call it codex.md because we want all agents to use it. Now, except for one old acquaintance (referring to Claude), almost all agents have adopted it."
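agents.md is simply a plain Markdown file of instructions checked into a repository, which any agent can read before working there. A minimal illustrative example (the contents are invented for illustration, not taken from any real project):

```markdown
# AGENTS.md — instructions for any coding agent working in this repo
# (example contents invented for illustration)

## Setup
- Install dependencies with `npm ci`, not `npm install`.

## Conventions
- TypeScript strict mode is on; do not add `any` without a justifying comment.
- Run `npm test` before proposing a change as done.

## Boundaries
- Never edit files under `vendor/`; they are generated.
```

Because it is just Markdown with no tool-specific syntax, any agent vendor can adopt the convention, which is exactly the openness Embiricos describes.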

This strategy is hard to understand for traditional venture capital logic. "This is too hard for me to understand as a venture capitalist." the host interjected.

"I completely understand," Embiricos responded. "OpenAI is a very interesting and unusual workplace. From our perspective, our job is 'Distribution of Intelligence'."

He explained the logic behind this seemingly self-sabotaging behavior: "We put all our energy into training these models, and then we provide the models to our competitors. Because we are playing a very long-term game; if competition becomes more intense, we can also learn from it, which is actually helpful to us."

Nonetheless, he remains confident: "We have huge distribution advantages (ChatGPT), model capability advantages, and priority access to new models. We are fighting for victory, but we are also playing a longer-term game."

9. Who is the Gravity Center of Work?

Embiricos believes that the future market landscape will not be a hundred flowers blooming, but dominated by a few giants. He shared a profound story about the 'gravity center' from his personal experience working at Dropbox.

"I worked at Dropbox, and before Slack's rise we debated whether users should comment directly on documents inside Dropbox or discuss them in Slack," he recalled. "On paper, commenting on a specific timestamp in a video or next to a specific paragraph in a document is clearly more efficient. But what we saw was that Slack became the absolute gravity center of people's communication. No one wants to comment in documents; I just want to Slack you. Even though it's less efficient, that huge gravity pulled all interactions toward Slack."

He predicts a similar story will play out with AI agents. "I don't think a company will want 12 different agents that employees have to figure out which one to talk to. They will never develop usage habits that way," he asserted. "In the future there will be one super assistant you can talk to about anything. It will become the gravity center of work; people will share best practices around it and hold hackathons for it. Ultimately, only a few such platforms will remain in the market."

10. Advice for Investors: SaaS Is Not Dead, But Relationships and Records Are the New Moat

Regarding the narrative that 'SaaS is dead, the model layer will eat everything', Embiricos presented his own framework.

"My question is: Does this SaaS company have a relationship with end users? If the answer is yes, I don't think it will disappear." he further added, "Or, does this SaaS company control an extremely important record system? Then it probably won't disappear either."

He believes that in both cases, the value of SaaS companies is even more important than before. But he also issued a warning: "On the other hand, if this SaaS company is just a glue layer, neither owning user relationships nor controlling core record systems, then I would be more concerned about it."

He also pointed out that the requirements for founders have changed in the AI era. "There was a period when you could get investment as long as you made a good product, ignoring customers, market, or distribution strategies. I think that was an abnormal period. Now, building a good product is relatively easy; you must return to the basics and invest in founders who truly think about distribution, have deep domain knowledge, and know what to build for specific customers."

11. Quick Q&A

What is the biggest idea you changed your mind about in the past 12 months?

"I used to think we would interact with AI through multimodality (video, audio). I was completely wrong. I realized that having agents operate computers via code is the correct path right now. This completely changed my thinking on how to bring AI to ordinary people."

What was the hardest product decision in Codex?

"The most painful decision was to cancel unlimited usage for Codex Cloud. We knew the longer we waited, the harder it would be to take back, but at the time we were focused on other more marketable products. When we finally set reasonable usage limits, although only a very small number of users opposed it, their negative voices spread widely on social media. The harsh lesson I learned is: you cannot let anything be free unlimited for too long."

Looking back five years from now, which practices in engineering or product will seem ridiculous?

"First, manually editing code. Second, perhaps more controversial, manually managing deployment and monitoring. I think many startups will be born on a brand new, completely AI-managed tech stack. In the future, the way you start a company might be to first hire an agent, let it build, and then you add co-founders into this platform that collaborates with agents."

Who should provide the guardrails for agents?

"Both will exist. We are investing huge effort in building guardrails; for example, we are the only company that cares about OS-level sandboxes for coding agents, and we are open-sourcing our solution built for Windows. But the solutions we provide will not be one-size-fits-all; there will certainly be third parties providing customized guardrails for specific enterprise needs."

Reference: https://www.youtube.com/watch?v=S1rQngjpUdI


AINews · AI News Aggregation Platform
© 2026 AINews. All rights reserved.