Highlights:
This isn't just about "using AI to assist with coding" – it's a quantum leap. We're moving from augmenting individual productivity to achieving true end-to-end construction and delivery.
Many developers get stuck in the transition from first encountering this new technology to becoming truly efficient. They endlessly "super-optimize" their environments, creating an illusion of increased productivity without actually achieving it.
What truly needs optimization is the entire codebase, making it more suitable for collaboration and continuous evolution. The goal is to optimize the code foundation so that agents can perform at their best within it.
Approach it with a playful mindset. If you have even a shred of hands-on ability, start building what you've always wanted to create. If there's a project lingering in your mind, just go play with it.
"Builders Unscripted" is OpenAI's official talk show spotlighting top developers. In the premiere episode, released February 25, 2026, Peter Steinberger discussed OpenClaw, his journey through the open-source world, and how he uses Codex to build.
Romain: Peter, welcome to OpenAI.
Peter: Thanks for having me.
Romain: We've known each other online for years, and I'm genuinely thrilled to finally have more time together here in person.
Peter: Likewise. Your office is really beautiful.
Community Frenzy: From Online Buzz to a Global Offline Phenomenon
Romain: Thank you. It's been absolutely hectic these past few weeks. We actually thought about recording a video a month ago, which would have required a proper introduction; now, hardly any preamble is needed. It's rare for an open-source project to make it into The Wall Street Journal, and that's an achievement worth congratulating. How does it feel right now?
Peter: I feel a bit sheepish; I'm somewhat overwhelmed by attention from all sides. But honestly, when I started tinkering with AI at the beginning of this year, my goal was to inspire more people to get involved. Reaching this point feels like some kind of "ultimate form" of that, so inside, I'm quite proud.
Romain: It has indeed been an exciting period. I've been in San Francisco all week attending events like the Codex Hackathon and also hosting an event specifically centered around OpenClaw.
Peter: Actually, this whole thing was driven by the community. Someone suggested holding an offline meetup, so we created a Discord channel just to discuss it. I didn't expect nearly a thousand people to show up. The creativity was shocking – the design, the atmosphere, the colors, and the sheer variety of ideas and projects. You could feel so many people pouring their passion into it.
Romain: That was the moment I truly realized we had created something "magical." A few weeks ago, this project didn't even exist; now, thousands are using it, supporting it, and even gathering in person in San Francisco. That transformation itself is incredible.
Peter: Next week in Vienna, over 300 people have already registered. The local tech scene there is nowhere near as large as a mature, active hub like San Francisco, yet it still draws this much enthusiasm, which is truly astonishing.
Romain: It has now gone global, becoming a phenomenon.
Peter: Yes, what's amazing is that it reaches different continents and cultures.
Romain: Indeed. So, how has the overall interaction with the community been these past few days? You've spent a lot of time engaging with members and talking deeply with maintainers who joined later. How was the experience this past week?
Peter: It's been a very special experience. Many people love the project, but quite a few immediately expect to see a mature, perfect "final version." For me, for a long time, it was more like a personal sandbox. All year, I've been constantly amazed by the possibilities AI presents. For developers, it's truly "being born at the right time."
The Golden Age for Developers: AI Reconstructing Development and Identity
Romain: What do you find most interesting about being a builder at this moment? It is indeed a very special time – the entire toolchain is changing, the definition of a "developer" is being reshaped, and almost anyone can build anything.
Peter: Every time I started playing with this new technology, I felt a dopamine surge. I was experimenting with Claude Code; whenever it got something right – maybe only 30% or 40% of the time – it was shocking to me. I suddenly realized, "I can actually build anything now." Usually, software development is time-consuming and complex, and software itself remains hard. But now, the development speed is so much faster.
Romain: I agree. If we look back a few years, around 2011 or 2012, when I first encountered your work, it was during your development of PSPDFKit. From the outside, that journey looked fascinating – like every developer's dream: encountering a problem, providing an excellent solution, building a company around it, scaling it, and eventually selling it. But I believe that journey couldn't have been as easy as it looked on the surface.
Peter: I mean, I didn't wake up one day and decide to build a PDF framework – that was almost at the bottom of my interest list. Things just happened naturally, like a strange butterfly effect: from my days developing at Nokia, to friends having needs, to my US visa being delayed for too long. That series of chance events ultimately led to my decision to start a company.
Romain: What I find interesting is that after establishing that company, you seemed to take a break for a while. What factors prompted you to return to this field?
Peter: Ultimately, I was completely exhausted. Working at high intensity for 13 consecutive years is tough; running a company is hard, and being a founder is even harder. Since it was my first startup, I didn't really know how to mitigate these pressures, so for a while, I was nearly burnt out and needed to relax.
Even so, I kept an eye on tech news. I saw GPT Engineer (or the early version called ChatGPT) and thought it was cool, but it didn't immediately excite me. You have to experience new technology firsthand; you can't truly feel its power just by reading. Although the technology at the time didn't immediately resonate with me or make me start building, when I was ready and felt, "Okay, I want to create something again," I no longer wanted to build projects in the traditional tech space because I had done that for so long, and the world had moved forward a bit.
At that time, many things needed to be rebuilt, and I happened to have the motivation and opportunity to return. However, when you are already an expert in one field and switch to another, the difficulty is beyond "hard"; it's more like pain. You possess extensive knowledge of building projects, but without the aid of agentic engineering, migrating those capabilities still requires learning a lot. I thought, "Might as well look into this AI thing." The moment that truly shocked me was when I took a half-finished project to try it out – a project that had been shelved long ago due to exhaustion before completion.
Romain: For us developers, it's often like this: everyone loves coming up with new ideas and starting new projects, but finishing them is the hardest part.
Peter: I see this often – finishing a project is really hard, and sometimes they fail. But I wanted to push this project forward and also rewrite it. So I consolidated the entire project into a massive Markdown file, about 1.5MB, dragged all the files into Gemini Studio 2.5 at the time, and had it generate a spec. I tidied up the few lines it generated, dragged them back into Claude Code, kicked off the build, and did other things on my main screen while the side screen ran for hours.
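Peter describes this consolidation step only loosely (he dragged the files in by hand), but the idea is easy to sketch. The following is a hypothetical helper, not his actual tooling; the file extensions and skipped directories are illustrative choices:

```python
import os

# Built dynamically so this listing contains no literal triple-backtick fences.
FENCE = "`" * 3

def consolidate_repo(root: str, extensions=(".py", ".md", ".ts")) -> str:
    """Concatenate matching files under `root` into one Markdown document,
    with a fenced section per file, ready to paste into a long-context
    model for spec generation."""
    parts = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip directories that add bulk but no signal.
        dirnames[:] = [d for d in dirnames if d not in {".git", "node_modules"}]
        for name in sorted(filenames):
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    body = f.read()
                rel = os.path.relpath(path, root)
                parts.append(f"## {rel}\n\n{FENCE}\n{body}\n{FENCE}\n")
    return "\n".join(parts)
```

The per-file headings matter: they let the model attribute each snippet to a path when it writes the spec.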
Things were much harder back then. Once, it told me it was "100% ready for production" – probably an advanced version like Claude 3.5 Opus. I tried it, and it crashed immediately. So I integrated Playwright – one of the few MCPs I actually knew how to use – and had it build the login flow, verifying its work along the way (for example, by testing the Twitter login). An hour later, it actually succeeded and showed me some results.
Romain: Many people see OpenClaw as your overnight success, but I think the most interesting part is that it's actually the accumulation of many projects you've worked on over the past 9 to 10 months. Looking at your GitHub profile, you've built over 40 projects.
Peter: About half of those projects were directly applied to this one.
Romain: I think you integrated many of them into OpenClaw. Can you talk about this journey and how these ideas and projects eventually converged into OpenClaw?
Peter: I wish I could say there was a unified plan from the start, but most of it was exploratory. I wanted certain features or tools that didn't exist, so I "created" them – or rather, I prompted them into existence. Why? Because I wanted to build it, step by step. I wanted my agent to do things for me, but I didn't have a complete overall vision yet. Interestingly, things eventually came full circle. For instance, I wanted it to check my WhatsApp, so I went ahead and implemented it, even registering the relevant domain.
I made a prototype but thought, "Well, big companies and labs will do this sooner or later, so I'll wait." So I focused on other things and ran many experiments. The goal during that period was mostly for fun and to inspire others. By November, I had made several versions, but none were ideal. I wondered, "Why hasn't any lab made these things yet? What are they busy with?" So I did it myself, creating the first version, which later became OpenClaw. It was already the fifth version, but I still wasn't fully impressed. It felt – well, cool, it only took about an hour to scaffold the first prototype. You just prompt, and various things "generate."
What truly impressed me was during a weekend trip to Marrakech. I found myself using it much more frequently because it was so convenient. You know, the local network wasn't stable, but WhatsApp works everywhere. This convenience impressed me. I used it for many things, like translating images, finding restaurants, and even checking data on my computer. It felt super cool. I showed it to friends and even had it send messages for me; they all wanted this feature. I thought – you guys shouldn't be using it; you don't understand its power yet.
Romain: That's the clearest signal of product-market fit. If even your friends want what you're making, though you never designed it for them, that proves its value. Previously, tools like this were mostly reserved for technical peers.
Peter: What made me truly feel its value was during frequent use. Once, I tried sending a voice message, and I suddenly realized – this shouldn't be possible.
Shock Moment: AI Agents Autonomously Solving Unanticipated Complex Problems
Romain: Tell me more about this story; I remember we chatted about it a few days ago.
Peter: It's really fascinating; it showed me the powerful problem-solving capabilities of these models. We develop these tools for agentic engineering, but the core skill is actually more abstract: if you want to be a good programmer, you must first be an excellent problem solver. And this capability can be mapped to any field.
I sent this voice message, and a typing indicator appeared on the screen. I was very curious about what would happen next – I hadn't written that part of the code, so theoretically, it shouldn't work. But the model replied directly. I was shocked: how did it do that? There was no code for that; it shouldn't have been able to run. Here's how the model handled it. It said, "You sent me a message, but it's just a file without an extension." So it checked the file header and discovered it was an audio encoding format. Then it converted the file on my computer using ffmpeg. It wanted to transcribe the audio to text, but Whisper wasn't installed on my machine. So it searched around, found an OpenAI API key, used curl to send the file to OpenAI, and got the text back – just like that, I had my result.
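The first move the agent made – identifying an extension-less file by its header bytes – can be sketched like this. This is a minimal illustration under my own assumptions (a handful of common magic numbers), not the agent's actual method, and the subsequent ffmpeg conversion and transcription API call are omitted:

```python
def sniff_audio_format(header: bytes) -> str:
    """Guess an audio container from the first bytes of a file,
    the way the agent identified the extension-less voice note."""
    if header.startswith(b"OggS"):
        return "ogg"  # Ogg container (voice notes are often Opus-in-Ogg)
    if header.startswith(b"RIFF") and header[8:12] == b"WAVE":
        return "wav"  # RIFF container carrying WAVE audio
    if header.startswith(b"ID3") or header[:2] in (b"\xff\xfb", b"\xff\xf3"):
        return "mp3"  # ID3 tag or MPEG audio frame sync
    if header[4:8] == b"ftyp":
        return "m4a"  # ISO base media file (MP4/M4A family)
    return "unknown"
```

Once the container is known, the rest of the chain Peter describes is just shelling out: ffmpeg to convert, then an HTTP POST of the audio to a transcription endpoint.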
Romain: That is indeed incredible. This is exactly the power demonstrated after giving these agents tools and full computer access. They can combine resources on their own, design solution paths, and figure out how to solve problems even if not a single line of code was written for such specific scenarios – this feeling is both shocking and somewhat amusing.
Peter: When I told this story, someone exclaimed: "Oh my god, it actually used my key, that's crazy!" But it's not like that. I put the OpenAI API key in an environment variable specifically for this purpose. If a script is supposed to access the OpenAI key, and my bot runs in the same environment, then of course it can access that key – I put it there for it to use. This isn't bad; it's exactly the effect I wanted. That was a highlight moment for me. Afterwards, I kept showing it to friends or pulling it into small group chats to test – though frankly, this thing was designed for one-on-one communication. If you do put it into a group chat, it's best to try it with people you trust.
Romain: Someone you truly trust.
Peter: Because it wasn't designed to "just be thrown into a public scenario and automatically run correctly." Essentially, it's a personal assistant, built around personal use cases.
Romain: When I set it up, I was also quite curious – this configuration method is a bit strange, but I really wanted to see where it would end up. Later, there were indeed a few "aha moments": the more access permissions you give it, the richer the tools and skills provided, the more amazing the capabilities it demonstrates. To some extent, it's like endowing it with "virtual skills," where capabilities amplify as resources are opened up. When you ask it to build a website or app for an event, it's doing more than just generating code. It will call your OpenAI API key, integrate AI functions directly, automatically deploy, and even generate a shareable link. This isn't just "using AI to assist with coding"; it's a leap – from augmenting individual productivity to true end-to-end construction and delivery.
Peter: Throughout November and December, I was almost completely immersed in this. Although I did some other projects, most of my time was invested here. But on Twitter, people didn't seem to truly understand it; the feedback I got was quite cold, and the reaction wasn't strong. Yet, every time I demonstrated it to friends, they wanted to use it immediately. I kept saying, "No, no, it's not ready yet." Later, I thought, why not do something crazy to let people truly see how cool it is.
So I built a Discord server and put my bot directly into it with almost no security measures – at that time, there wasn't even sandboxing; everything was very early. I was basically developing in a completely open environment, essentially using OpenClaw to build and debug OpenClaw itself. At one point the model would say, "Do you see this tool?" I'd reply, "No, I see nothing." It would say, "Then check your own source code," and guide me to look elsewhere. All this happened in the open; everyone saw how it debugged and corrected itself. It was at that moment that people began to truly understand its significance.
Romain: When you put it into Discord, what specific access permissions did you give it? For instance, did you let it read all your tweets? To what extent did it grasp your information?
Peter: Not all tweets – there were too many – but it did include a lot of my memory data. I was actually watching it closely, because prompt injection is still not fully solved. But the performance of the new generation of models is genuinely excellent. I have a "canary": my custom SOUL.md, which defines my values – how I want the model to operate, how to think, and what matters to me. People were very interested in this; even strangers came in, trying to paste in large amounts of code via prompt injection. But the model responded directly: "I'm not looking at this." It was basically mocking them. Even so, I still didn't have much confidence. After the first night attracted a lot of attention, I turned it off and went to sleep – slept for ten hours, then continued after waking up.
That day on Discord, there were about 800 messages, and my agent was replying to every one. I panicked completely and turned it off again. Later, I carefully checked each message and slowly calmed down, because it hadn't performed any malicious operations and couldn't be tricked into leaking my private data. It's not that prompt injection is impossible; it's just not as easy to pull off as people imagine.
Romain: Right? Overall, it was actually operating as expected.
Peter: Yes, my biggest mistake was disabling it while forgetting that I had a launch daemon set up. What does a launch daemon mainly do? If the service crashes or the process is terminated, it automatically restarts it, because you want the service to be reliable – Apple designed it to keep services stable. I didn't account for this, so I "killed" it, it restarted itself within five seconds, and I went to sleep. Now I've learned my lesson and added sandboxing. When the model saw the sandbox, it was very proud, calling it a "castle." I put it into a small container, but these models are really very creative.
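The restart behavior Peter ran into is launchd's `KeepAlive` option. A minimal job definition might look like the following sketch; the label and program path are hypothetical, and only the `KeepAlive` key is the point:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical label and binary path -->
    <key>Label</key>
    <string>com.example.mybot</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/mybot</string>
    </array>
    <!-- launchd respawns the process whenever it exits -->
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

With `KeepAlive` set, killing the process only triggers a respawn; the job has to be unloaded (for example with `launchctl bootout`) to actually stay down, which is exactly the lesson here.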
For example, the first time, I made an Alpine Docker container with almost nothing inside. I said to Malte, "Hey, can you help me check this website?" It said, "There's not even curl here; I can't do anything." I told it, "Use your creativity." It actually built a curl substitute over a raw TCP socket: it wrote the client in C, compiled it, and the rudimentary curl it produced could actually reach the website – it worked perfectly, absolutely crazy. The resourcefulness and creativity of these models are truly incredible.
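The improvised curl is easier to appreciate with a sketch. Here is the same idea in Python rather than the C the agent reportedly wrote – plain HTTP over a raw TCP socket, with no TLS, redirects, or chunked decoding, all of which a real replacement would need:

```python
import socket

def tiny_curl(host: str, path: str = "/", port: int = 80) -> bytes:
    """A bare-bones HTTP GET over a raw TCP socket: roughly the trick the
    agent improvised inside a container that lacked curl."""
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"   # ask the server to close so recv() hits EOF
        "\r\n"
    ).encode("ascii")
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        chunks = []
        while chunk := sock.recv(4096):  # read until the peer closes
            chunks.append(chunk)
    return b"".join(chunks)
```

Everything curl does ultimately reduces to bytes on a socket like this, which is why a model with a compiler and that insight can bootstrap its way onto the network.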
Romain: Of course, there were some challenges. Many people examine projects from a security perspective, expecting them to be very perfect and robust from day one. But at the time, it was just releasing an open-source project, still in the early exploration stage.
Peter: Whenever someone asks me, "Can you help me contact your CEO, HR, or other team members?" I can't help but laugh. It's just me coding in a "cave." But this precisely highlights that cognitive dissonance – from the outside, it looks like the work of a mature company; in reality, without these models and agents, a single person could never achieve such scale and complexity. Now there are maintainers joining and PRs coming in. But essentially, this project was initially completed by me alone – and a year ago, this was almost impossible. At that time, there simply weren't models with such capabilities allowing one person to build a system of this scale and complexity. So from a traditional perspective, it shouldn't even be considered "something one person can complete."
Romain: Let's talk about this topic. I think many developers are curious – how is Peter's productivity achieved? I checked your GitHub again this morning; in the past year, you accumulated over 90,000 contributions in more than 120 projects. What's more interesting is that the GitHub activity graph was almost blank, light green at the beginning of the year, but by autumn, especially in October and November, it suddenly turned dark green. What happened during that period?
Peter: Every generation of models is improving. But the change isn't just that agents are becoming stronger; the "intelligence ceiling" of the models themselves is also rising. At the same time, my understanding of how to harness them and my own workflow are constantly optimizing. Some people still write code in the old way, thinking that method won't change; when they try using AI, they call this approach "vibe coding."
In my opinion, this term itself carries a bit of a derogatory connotation. They try AI without realizing it's actually a skill. Like picking up a guitar, you can't play well on the first day. So, because the experience wasn't good, they conclude: "No, this doesn't work." If you approach it with a more playful, exploratory mindset, you must be willing to learn. Now I have an intuition about what effects different prompts will produce and roughly how long they will take. If the process becomes abnormally long, I reflect – did I write the prompt wrong? Is the architecture incorrect? Is the train of thought problematic? This is like writing code. When you are coding, you also have a feeling: is a function naturally integrating into the overall architecture, or is it "fighting the system," feeling awkward everywhere. This kind of judgment takes time to accumulate.
Escaping the "Agentic Trap": Keep it Simple, Focus on the Problem Itself
Romain: So, if someone wants to reach a similar efficiency level as yours, what is your current coding setup? You mentioned earlier that many people make their development environments too complex.
Peter: Actually, I did this myself too; I call this situation the "agentic trap." In the process from first encountering this new technology to becoming truly efficient, many people get stuck here – endlessly "super-optimizing" their environment. But this optimization often doesn't truly improve productivity; it just creates an illusion that "I am more efficient." It looks busy and high-end, but the actual output isn't necessarily greater.
I wrote a blog post that was quite controversial at the time. I simply said you have to treat it like a conversation. Working with the model is communication – this isn't exactly traditional pair programming, but another form of it, essentially a continuous dialogue. I basically just tell it what I want. I always ask the model one question: "Do you have any questions?" Because by default, it will try to solve the problem directly and make various assumptions on its own. And these default assumptions aren't always optimal – remember, its training data contains a lot of code, including much that is early or even obsolete. By asking back and letting it clarify the problem first, you often get better results.
"Do you have any questions?" is actually a very key question. Models usually start in a "blank state"; they don't accumulate context gradually like we do. For every new session, it's "I know nothing about this codebase" for them. They can only search and locate relevant snippets based on the current conversation and then try to solve that specific problem you raised. But they usually can't see the complete picture. To do this well, the complete picture must first form in your own mind, and you also need to slightly guide the model, telling it to look here, then look there. Codex is stronger in this aspect; it's better at doing an overall browse first before diving into details. I use a very basic method. I don't even use worktrees; I simply do 1 to 10 checkouts.
Keeping it simple allows me to focus more on the real problem itself. I basically don't mess with complex branching strategies but focus on different problem modules. Ideally, when the project scale is slightly larger, this method is even easier – you can advance in parallel on different parts that don't conflict with each other without them "fighting."
Romain: You used Codex extensively while building OpenClaw. Besides that, in what other ways has it changed how you work?
Peter: I've tried many tools. But among all current tools, I trust Codex the most – it has a very high success rate in building the results I want, and situations where it "runs directly" are becoming more common. Many people don't realize that GPT-5.2 brought another qualitative leap, almost a "quantum-level" jump. That feeling of "it really just works normally" is very obvious. To this day, I'm still surprised by the stability it has achieved.
Romain: This is really great – being able to directly build various things is itself incredible.
Peter: Yes, I think everyone should really try it themselves.
Romain: You mentioned earlier that you now even release code that you haven't read line by line. How did this practice change?
Peter: Most code is actually quite "boring." It's just converting one data structure into another, then presenting the result to the user or passing it to the next system. So for the vast majority of code the model generates, I roughly know what's going on. I just need to glance at the output stream to confirm that the generated content broadly matches the mental model in my head – what it "should" look like – and that's basically enough. I previously led a team with quite a few software engineers, and that also meant accepting a fact: the code they eventually wrote was never going to match exactly the ideal way I imagined it.
What truly needs optimization is the entire codebase, making it more suitable for collaboration and continuous evolution. It's the same now – what needs optimizing is the code foundation so that the agent can perform at its best within it. And this doesn't necessarily equate to the way that is "most comfortable for humans to write." This also means accepting that the generated code might not be exactly my ideal style. You can indeed guide the model towards a certain style via prompts, but many problems themselves have multiple structural approaches; most of the time, there isn't a single correct implementation. If performance issues really arise later, optimize that part then. The key is to get the system running first, then refine it finely when needed.
Romain: The view on "code value" mentioned earlier is actually changing how I see open source. Take OpenClaw as an example: there are currently about two thousand PRs open. In the past, without AI, every PR needed careful reading because the code itself was the core value. But now, I'm more inclined to read one as a "prompt request" rather than just a pull request. What truly matters is often not the specific implementation, but the idea, intent, and direction behind the PR. The code can be rewritten, refactored, or even regenerated by the model.
Peter: Sometimes processing a PR takes longer than doing it myself. Because I often trust the model to be "non-malicious" more than an external contributor I've never heard of or spoken to, I have to review such PRs more carefully. But when I see a PR and start the review, the first thing I ask the model is: "Do you understand the intent of this PR?" Because what I really care about is not the code itself, but what problem this person is actually trying to solve. Many times, it's more like an issue plus a large-scale attempt at a solution. Firstly, many people still don't know how to truly delegate to the agent.
Secondly, they often only provide a very local solution because they don't have the full picture of the entire system in their minds. The difficulty lies in how this small new function embeds into the entire larger system? Or this small fix – it is indeed just a small fix – is it really correct? Might the problem actually lean more towards a certain module, or even be a more systematic, architectural issue? If you just converse with the model, it's actually very good at handling this situation. When I say "Okay, implement this now," the model will start building. But before that, I will ask it: What is the intent of this change? Is this the optimal solution?
Sometimes it will say yes, but more often it will say no. Then we will start exploring together what the more appropriate fix is. Is this an architectural issue? For example, if this is a message processing issue, does it really only affect WhatsApp, or could it also affect Signal? If so, shouldn't we solve it in a more general way rather than just making a local patch? Is this a new feature? Do we really need this new feature? Sometimes, such discussions last ten to fifteen minutes. I usually use voice because the feeling is really like communicating with a very smart colleague.
Romain: Inputting tokens via voice is actually easier than typing.
Peter: After I confirm the direction is fine, I trigger a slash command – for example, LPR – which will clarify the entire process: create a branch, complete all modifications, and then merge the PR. I hope to build a community, so even if the whole process takes longer than writing it from scratch myself, I will still retain the original author's credit. Because I cherish everyone's ability to participate.
Romain: Looking to the future, with more and more contributors participating in this project, where will OpenClaw go next? Also, do you see yourself as a kind of "pioneer" – providing a paradigm for what a "personal AI agent" should look like, so that when billions of people might use similar systems in the future, there is a direction to refer to?
Peter: Yes, I hope to find a balance between the two: on one hand, it should be simple enough for even my mother to install; on the other hand, it must remain interesting and hackable, which is itself difficult. Most open-source projects are used by downloading a package and installing it directly. But for a long time, my default installation method was git clone, build, run. In this way, the source code is directly on the local disk, the agent "sits" within the source code, and is aware of this source code.
If there is anything you don't like, you just need to give the agent a prompt, and it will modify itself – in a sense, it is truly "self-modifying software." Precisely because of this, many people who never submitted PRs to me before are now submitting PRs. This is also why I prefer to call it a "prompt request" – the key is not just the code itself, but the understanding of "how to build software capable of long-term evolution." At the same time, the entire security industry is watching it. This is interesting but also somewhat frustrating because some new things are indeed ignored. For example, the web server I made was originally written just for debugging, and later the interface was made better looking. Its original design premise was to be accessed only in a local network, that is, a trusted network environment. But because I also hope it can become a kind of "hacker paradise," I did provide an option to modify the access method yourself. After all, some people's deployment environments are very special, such as using certain specific tools or forwarding via reverse proxy.
So there was a reason I didn't lock it down to a strict mode initially. But then someone exposed it directly on the open internet, even though I had repeatedly emphasized in the documentation, "do not do this," because that was never its design intention. Security researchers then pointed out that it has no login restrictions, nor the security mechanisms necessary for running a service on the public internet. The problem is, that was never the usage scenario it was designed for. But because it is configurable, it was immediately categorized as a CVSS 10 issue. So there is some tension here. I've since brought in a security expert to make security a core focus, because I've realized I can't stop people from using it in unexpected ways. What matters now is supporting these different use cases while trying to keep users from hurting themselves.
Romain: This is exactly the charm of open source – people can embrace it and come up with ideas I hadn't even thought of.
Peter: Yes, this is both its charm and its craziness.
Romain: Stepping slightly outside the topic of OpenClaw. This week, I chatted with many developers about your upcoming participation in the Codex Hackathon, and they are all curious: How does Peter come up with so many good ideas? Where do these creativities come from? I don't know if there is a clear answer, or if this is more just out of personal curiosity, constantly exploring by following one's own interests.
Peter: It's more like a realization: many things have become very easy now. Even if there is already an open-source project that solves 70% of my problem, I will choose to do it myself – and this was almost impossible a year ago. The current state is that I just need to give a prompt, put it on the second screen, let Codex run, and it starts working.
Romain: We both come from Europe. When I left San Francisco and returned to Europe, I believe you have the same feeling: many developers and engineers haven't truly started using Codex and agentic tools. For them, what is your advice? Should they rethink their working methods and workflows when getting started?
Peter: My first piece of advice is always: approach it with a playful mindset. If you have even a little hands-on ability, go build what you've always wanted to build; if there's a project lingering in your mind, just go play with it. I remember Nvidia's CEO said something similar: "In the short term, you won't be replaced by AI, but by people who know how to use AI."
Romain: People who use it better.
Peter: But if your identity is "I want to create things, I want to solve problems" – if you have high autonomy and you're smart enough – then demand for you will be higher than ever before.
Romain: Now is the perfect time for creators to embrace these tools, guide their curiosity, and truly put any idea into practice, just as you have done with these wonderful projects and OpenClaw.
Peter: I think within a year, this will explode completely.
Romain: Yes, 2026 will be very interesting. I think this is a great way to end. Thank you very much, Peter, for taking the time for this interview. It was wonderful spending time with you. We all love your work at OpenAI and are very willing to support developers like you; frankly, you are a true inspiration to the entire developer community. Thanks again, and we can't wait to see what you do next.
Original Video: Builders Unscripted: Ep. 1 - Peter Steinberger, Creator of OpenClaw
https://www.youtube.com/watch?v=9jgcT0Fqt7U
Compilation: Lureen Zheng