Claude Code and Cursor May Face a Major Rewrite, But OpenCode Could Be the Exception


Compiled by Yuqi and Tina

Development tools of the previous era were meticulously polished step by step: stable in behavior, restrained in interaction, with any issues that arose mostly within expectation. Today, however, products like Claude Code and Codex have made "using AI to write itself" the default path. While AI has indeed accelerated coding, it hasn't automatically solved the problem of maintaining complex software well over the long term.

Claude Code is a typical example. Anthropic's tool was built almost from scratch, yet the team has long adhered to a highly aggressive internal approach: they emphasize that "100% of Claude Code's code must be written by Claude Code itself." Internal engineers and researchers rely on Claude Code for all tasks, from large-scale code refactoring and squash commits to various trivial coding work.

The problem is that when the underlying model is itself non-deterministic, and the product code carrying these capabilities is piled up rapidly under such a methodology, the system easily falls into a vicious cycle. Over the past year or two, Claude Code has been expanding its capabilities rapidly, its interaction logic has grown increasingly complex, and the product itself has become more and more unstable: frequent crashes, strange errors, more and more bugs, and slower performance.

Things have even developed to a rather absurd point: instead of systematically fixing these performance issues, Anthropic acquired the team behind its core dependency, the Bun runtime, and pinned its hopes on them. In other words, it bought an entire runtime team just to keep its CLI tool from occasionally consuming 2 GB of memory.

Cursor's situation is a different kind of complexity. It started with an extremely large, extremely complex codebase it didn't shape from scratch: a direct fork of VS Code. That starting point meant that from day one the team was fighting an uphill battle. You're not building a product on a blank sheet of paper but making incremental improvements to a massive engineering system; you must not only keep developing your own differentiated capabilities but also maintain the fork long-term and keep it synchronized with necessary upstream updates as much as possible. Anyone who has worked on large-scale engineering knows how painful this is, and as time passes the fork only drifts further from upstream and maintenance costs only grow.

Looking at these phenomena together, an increasingly clear trend emerges: AI programming tools will likely experience a "wave of large-scale rewrites."

This is because their codebases were driven into an increasingly unrecoverable state during early rapid iteration. Continuing to stack features on top only makes the system more fragile; the real solution often lies in admitting that the old foundation is out of control and building a new one from scratch.

But this doesn't mean all teams will reach this point.

OpenCode provides an interesting contrast. It's also a tool built during the AI programming wave, but the team adopted a completely different strategy: they emphasize codebase consistency and constraints more than ever, striving to ensure no file deviates from established conventions; at the same time, they extensively adopt tools and frameworks with stronger constraints and clearer design philosophies, and practice Domain-Driven Design more firmly.

They believe that with large models participating in development, the consequences of a codebase becoming "dirty" are amplified. Large language models cannot distinguish "old patterns" from "new patterns"; they treat old idioms as correct examples and keep generating code that doesn't conform to current conventions. The negative impact of an unclean codebase is therefore even more severe than in the past.

Thus, a somewhat counterintuitive result has emerged: their codebase is cleaner than ever before, "possibly even the highest quality code we've ever written," as Dax Raad, one of OpenCode's founders, said in a podcast.

At the same time, he hasn't given up on "hand-writing code" itself. "When I'm designing new features or complex architectures, writing code itself is the thinking process. I'm not good at writing long, detailed specifications; instead, I like to write type definitions, try function combinations, adjust file structures, and understand problems through writing code. This is how most programmers have worked for a long time. I don't see any reason to give up this approach—writing code is how I think."

Additionally, he slightly criticized Claude Code from a code quality perspective: "Claude Code created a prototype with extremely high product-market fit; it would succeed even with an imperfect experience. But this doesn't mean everyone must sacrifice quality to achieve that speed."

The following is a transcript of that podcast, lightly edited by InfoQ.

The Origins of OpenCode

Host: Since last December, OpenCode's development has been very rapid. Could you take us back to how OpenCode was born?

Dax: Our company has been doing open source for many years. We've always been building tools for developers and have witnessed the rise and fall of different companies in this field, so we've accumulated a lot of experience in building open-source products.

Our previous project was SST. Although far from OpenCode's current scale, it was quite popular, and it gave us complete hands-on experience: how to start an open-source project, how to make it successful, how to run it day to day, and what the advantages and disadvantages of open source are. You could say we had been working deeply in this field for a long time.

Around February 2025, we became profitable. At that time, the company had only three people. After achieving profitability, we began to reflect on what to do next: continue deepening our existing products or explore new directions? AI is obviously an important trend of this era; completely ignoring it would be very irrational.

So we started trying out some ideas, exploring what AI could do for developers and what it could do on a broader level. We tried several directions, but none truly formed a product. Some ideas were very helpful to us personally, but we couldn't polish them into mature products.

Around that time, we started using Claude Code. Actually, before that, we had already seen many AI programming tools, like Cursor, which was already quite popular. But no one on our team really stuck with them. We tried too, but felt like we were giving up things we originally liked without gaining enough benefits, so we didn't persist.

But Claude Code was the first time we felt, "This is the right workflow." Before that, what we had been doing was copying code into ChatGPT, then copying it out, back and forth repeatedly. We kept wondering, why can't these things connect directly to the file system? Why be so manual?

Claude Code is a smart integration in this regard, connecting these processes together. So we thought, if this is the first tool that truly made us "stick with it," then this matter might be very important.

Next, we started thinking: what if we applied our open-source experience to this? There was an obvious gap at the time—there wasn't yet an open-source coding agent. So we wondered, could we make an open-source coding agent that supports multiple models? We knew competition between these models would exist long-term and become increasingly fierce.

This entry point was a very natural extension for us.

Host: What does your daily development workflow look like now? How much has it changed? After all, you're both a developer and someone who builds developer tools, which is a very special perspective.

Dax: Our team members are all Vim users; almost all our work is done in the terminal, and we greatly enjoy the Vim editing experience. Migrating to Cursor would be very costly for us because, although we could still edit code, the text editing experience, in our view, became worse, and the benefits gained weren't enough to compensate for this loss.

The reason Claude Code stuck for us is that we could keep using our original editors while doing AI-related work in a separate space, with the two not interfering with each other. That is very important to us.

I think Cursor is more of a transitional product; it tries to bring you directly from traditional editors to the new paradigm of AI programming. This certainly has some benefits, but for me and many people, it's in a somewhat awkward middle state—I actually just want to use the editor to write code, not have various AI features popping up everywhere.

When using Cursor, I feel suggestions are everywhere, plus a bunch of new UI panels, which makes me very uncomfortable. I prefer to treat the agent as a "clumsy colleague sitting next to me": I occasionally glance at what it's doing, give some feedback, then let it continue, while I can do other things; work can be separated.

So Claude Code's biggest advantage is actually that it provides an independent space outside the editor. When we were building OpenCode, we were continuing in this direction: in this "independent space," how richly and well can we make the interaction with the agent?

My workflow now is still: use Neovim to edit code, use the Agent to handle tasks that need an Agent. We are indeed using Agents more and more, spending relatively less time in the editor, but I'm far from completely giving up hand-writing code. I still use the editor extensively and write code manually.

No Longer Writing Code?

Host: Now many top developers claim they no longer write any code from scratch; many people hearing this understand it as "programming is dead." What's your take?

Dax: I'm confused by this statement. If you ask me what proportion of my code is hand-written, I actually find it hard to answer. I switch constantly between different tools; it's hard to quantify.

If someone says they almost never use an editor and work entirely within these agent tools, whether OpenCode, Codex, or other similar tools, I'd be very surprised. Because these tools are actually not suitable for reading code. So do they do no code review at all? Or push code to GitHub and then look at it?

Additionally, when I'm designing new features or doing something more complex, writing code itself is part of my thinking. If it's just adding a button or making a simple change, then of course you can just prompt it, maybe not even looking much at the generated code because it's probably similar to the surrounding code.

But when I'm doing something entirely new, or designing a system, I need to write code to figure out exactly how to do it. I find it hard to sit there and write a long, detailed Spec, then let AI implement it. I'm more accustomed to writing type definitions, trying different function combinations, adjusting file structures, and understanding problems through these processes. This is actually how most programmers have always worked.

So I don't see any reason for me to stop doing this, because this is how I figure things out.

So when someone says "I don't write code at all anymore," I'm a bit skeptical. I think there's a psychological factor here: everyone feels a huge change is happening and worries about being left behind, so they tend to convince themselves "I'm already at the forefront."

Plus, there's now a narrative saying this change will eliminate many people, leaving only a few. So everyone has a tendency to amplify certain local successes into "everything can be done this way now," which is a bit exaggerated.

So it's hard to judge the real situation from these statements because a lot of emotions and psychological factors are mixed in.

Host: I think this point is very apt, because I don't even think this is "intentional marketing." For example, Boris, one of the early authors of Claude Code, said he almost never writes code from scratch anymore, yet he also recently explained why Anthropic is still hiring developers, which indicates humans are still heavily involved. So it's indeed very confusing.

Dax: I agree; this isn't out of malice but the result of excitement intertwined with anxiety, which makes it hard for people to describe the real situation accurately. Similar phenomena were common whenever new technologies or frameworks emerged in the past: people often claimed they had "completely changed the way of working." An effective test is to look directly at the output. In many cases there's no truly shipped product, only attempts; even when products exist, the quality isn't necessarily better, and might even be worse. The same applies to current AI programming practice: some claim to "rely completely on AI to write code," but the output quality isn't ideal, which to some extent reflects the current true level.

Host: OpenCode and Claude Code seem to be in direct competition. What's your view? Especially after Anthropic restricted subscription usage, has your perspective changed?

Dax: I don't think the world is zero-sum; most systems allow for win-win situations for multiple parties. But in the business realm, competition is real. Business is more like a sports match; everyone competes for different visions of the world. It might not be a complete victory for one side, but competition does exist. However, "positioning" is more important. Even if products seem similar, positioning can be completely different.

OpenCode's success comes more from positioning than just product quality. We judge that competition between models will continue, including closed-source and open-source models. Prices will drop, and competition will become fiercer. So we chose: make a tool not bound to a single model so we can benefit from model competition. Secondly, we want to occupy the position of "the number one open-source coding agent." Historical experience shows that most development tools eventually go open source; databases, compilers, and editors are all like this.

Claude Code follows a vertical integration route, which differs from our positioning. From a positioning perspective, we're not necessarily in direct competition. But at the values level, we do have divergences and hope to prove our values will bring better results.

Host: As a user who has used both OpenCode and Claude Code, I would definitely say OpenCode's experience is very good. To summarize: open source, free switching between different models, no lock-in, and first-mover advantage.

Dax: These are indeed core directions, not just slogans; they show up in many concrete details. For example, why insist on open source? Because open source means more people will try it in different environments. OpenCode was designed from the start to adapt to all kinds of environments; even on a heavily restricted enterprise laptop that can only use Amazon Bedrock, it works normally. The benefit of open source is that although we can't reproduce every environment internally, the community can: others test in real environments, file issues, and even submit fixes, covering all sorts of long-tail scenarios. Even if the product were only half as good as it is now, we might still achieve similar success, because success comes more from positioning than from product quality alone.

Host: OpenAI has taken a different route at least in terms of cooperation with you. What exactly is your relationship? Why would OpenAI adopt a different approach?

Dax: This actually goes back to our positioning. If we are the open-source option, we have the opportunity to become a "standard" that others build on or embed into their systems. So before cooperating with OpenAI, we were already talking with GitHub, GitLab, JetBrains, and others, hoping they would recommend OpenCode as a way to use their large-model services, because we invest more in this area and have better user feedback. After persuading some of them, I went to OpenAI, pointed out that industry support already existed, and asked whether they were willing to join.

The reason for choosing OpenAI is that they compete with Anthropic, and Anthropic has higher mindshare in the coding field. For OpenAI, supporting us has PR benefits and can attract more users to Codex. The timing of my contact with them also coincided with Anthropic blocking OpenCode from Claude Max subscriptions, so they saw an opportunity for a counter-move.

As to whether OpenAI truly agrees with this model or is motivated by short-term competition, I'm not sure. But we're good at reading incentives and exerting influence at key moments to create situations that benefit us, our users, and the open-source community. Essentially, it's about understanding incentive mechanisms and steering the game toward better outcomes.

Host: Recently, there have been many acquisition cases in the industry. Will OpenCode be next?

Dax: We spent many years looking for a truly huge market, and now we've found it. There are 30 to 50 million developers globally, and our product can in theory serve every one of them. Such opportunities are rare, so giving it up easily is a difficult decision. We have indeed received many acquisition offers but haven't pursued any of them seriously; we'd only consider one at a very high price, and there are indeed some outlandish acquisition prices in the AI field.

Once, I mentioned in our team group that a company wanted to acquire us; everyone completely ignored it and continued discussing the product. I reminded them again, and someone said, "Tell them to add a zero before coming back." The team really wants to take things to the end, not cash out quickly.

Of course, a few years later, if growth stalls, my attitude might change. A company can grow big because founders maintain motivation for many years. Many acquisitions happen because founder motivation declines or the road ahead is too long; currently, we want to go to the very end.

Host: AI makes us faster, but does it accumulate more technical debt? Has this trade-off fundamentally changed?

Dax: This trade-off has always existed. Many times, people use "made trade-offs for speed" to explain quality issues. But looking back, most problems weren't intentional trade-offs but lack of experience.

When I do something for the first time, 95% of the problems come from lack of experience, not conscious trade-offs. The next time I do it, I can do better in the same amount of time.

AI is the same; it raises everyone's capability ceiling but shouldn't become an excuse for laziness. We should still reflect and improve, not think there are no problems just because "it runs."

Some people say: "The code is terrible, but we wrote it fast." Actually, more experienced people can write better code at the same speed; this is essentially still a capability issue.

Host: Then from a product and user perspective, positioning and speed might be more important than quality in the short term. For example, was Claude Code's rapid release reasonable at the time? Should they have done things differently later?

Dax: I think everyone moves forward as fast as possible and makes different trade-offs based on experience. In Claude Code's case, they made a prototype with extremely high product-market fit; it would succeed even with an imperfect experience. This situation is common, but this doesn't mean "everyone must sacrifice quality to achieve that speed."

We built OpenCode in about the same amount of time, also constructing terminal frameworks, Zig implementations, React and SolidJS bindings, compilation to Bun binaries, and so on. The reason we could deliver higher quality at a similar speed is that this is familiar territory for us. Of course, there must be people who can do better than we did. In this industry there will always be people ten times worse than you, and always people ten times better.

Developers' Choices

Host: When a large amount of code is generated by AI, how do you balance improving efficiency and ensuring quality? For example, in code review, would you submit directly without reading the code?

Dax: A somewhat counterintuitive phenomenon is that I think our codebase is cleaner now than ever before, possibly even the highest quality batch of code we've ever written. The reason is that the negative impact of an unclean codebase is now more severe than in the past.

In the past, a typical codebase lifecycle looked like this: we set a pattern early on; a few months later we found better practices, so we told the team to use the new approach going forward, but old code wasn't refactored immediately. Over time the codebase accumulated layers of legacy styles. That was acceptable before, but not anymore, because large language models can't distinguish "old patterns" from "new patterns": they treat old idioms as correct examples and keep generating code that doesn't conform to current conventions.

Therefore, we value clear and strict enforcement of unified patterns more than ever, ensuring no file in the codebase deviates from the conventions. To some extent we care more about code quality now because we've "hired" a group of diligent but uncomprehending LLMs: they have extremely strong memory but cannot judge on their own which pattern is better. We extensively adopt tools and frameworks with strong constraints and clear design philosophies, and practice Domain-Driven Design more firmly.
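The idea of a single enforced pattern can be sketched in TypeScript. This is a hypothetical illustration, not OpenCode's actual code: if every fallible function in a codebase returns one canonical `Result` type, then any model that imitates the surrounding code reproduces the current convention rather than some legacy error-handling style.

```typescript
// Hypothetical sketch (not OpenCode's real code): one canonical Result
// pattern for the whole codebase, so a model imitating nearby code
// copies the current convention rather than a legacy one.
type Ok<T> = { ok: true; value: T };
type Err = { ok: false; error: string };
type Result<T> = Ok<T> | Err;

const ok = <T>(value: T): Result<T> => ({ ok: true, value });
const err = (error: string): Err => ({ ok: false, error });

// Convention: every fallible function returns Result<T> and never throws.
function parsePort(input: string): Result<number> {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return err(`invalid port: ${input}`);
  }
  return ok(n);
}
```

In practice, a convention like this would also be backed by lint rules and code review, so no file drifts back to an older style for a model to imitate.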

As for whether to read all code, my approach is based on risk assessment. In modules where patterns are mature and stable, I have a strong sense of expectation for the output, so I usually just do a quick check; in areas where the structure isn't yet stable, I will more carefully confirm line by line. The team largely adopts a similar strategy.

Host: Some people might be shocked to hear you don't review every line of code. But looking back, even in large tech companies, not every line of code is carefully read. Often, as long as trust is established with the developer, the review is also relatively fast. To some extent, you seem to be saying that a certain kind of "trust" can also be established with LLMs.

Dax: I'm still relatively conservative. Even in large companies, at least one person truly understands the code—the one who wrote it. But if AI generates code and no one understands it, that would be unsettling.

I prefer to judge based on a "sense of risk." For example, the last time I reviewed lightly was when implementing a new dialog box for the terminal interface. I tested it thoroughly from a user perspective and confirmed it worked. Since the underlying components for building dialogs are very mature, I judged the risk to be low. There might have been flaws in the implementation details (which we did clean up later), but there were no major issues in the short term. Even so, I would fix such things as soon as possible, because non-standard code can pollute subsequent model generations.

This is essentially the same as before: you can appropriately "cut corners," but remember to come back and fix them.

Host: Nowadays, many people think the joy of programming is weakened, and developers have become "prompt factories." What's your view? Has AI made you lose interest in programming?

Dax: For me, the answer is no. But I might be in the minority because I run my own company and can choose my work direction autonomously. AI tools allow me to explore new ideas faster and invest time in more creative parts rather than repetitive labor.

But I understand why many people feel frustrated: if you're just assigned tasks, input prompts, wait for results, and don't have more challenging work, it's indeed easy to feel bored. In fact, there's already a lot of repetitive work in programming; now it's just taken over by Agents.

The truly interesting parts—system design, direction judgment, problem definition—are still human-led and often don't happen frequently. Maybe once a month, not every day.

Host: I personally feel AI has actually increased the fun. It lets me focus on higher-level abstractions without getting tangled in syntax details. But I also worry, if we rely too much on tools, will skills degrade?

Dax: This concern is real. I was very good at mental arithmetic as a child, but that skill has clearly atrophied. Similarly, relying on AI long-term might cause certain coding abilities to atrophy. In practice the impact may be limited, just as it was when calculators replaced mental arithmetic, but the gap might show up when you run into complex problems.

The problem is, it's like "the genie is out of the bottle." As long as tools can make people spend less effort, people will keep using them. The key is: is the saved energy used to do more valuable things, or just to scroll TikTok?

I've experienced both states; sometimes I'm very engaged, sometimes I let AI work while I zone out. If this phenomenon happens simultaneously to millions of programmers, how much long-term productivity actually improves is hard to judge. Especially if just working a job one doesn't care much about, it's hard for people to proactively put in extra effort.

Host: Then, does liking to write code become a disadvantage instead? For example, someone might be overly obsessed with technical details while neglecting more important abilities?

Dax: This isn't a new problem; in the past, there were also developers obsessed with technical details while neglecting product and business judgment. Excellent people often know how to balance: when to dig deep into technology and when to focus on product direction.

In my view, programming ability can bring a nice career position, but truly breaking through the ceiling often comes from a second specialty. If you're both an excellent programmer and deeply understand a certain industry (such as finance, healthcare, or energy), then you're in an extremely scarce position. Programmers can enter almost all industries; this is a huge advantage. If you can accumulate deep understanding in your field, you can discover structural opportunities that others overlook.

Host: You've rejected multiple acquisitions and high-salary invitations. Why not choose a more stable path?

Dax: When I was young and saw Snapchat reject Facebook's multi-billion dollar acquisition, I thought the founder was crazy, but later I understood. As your career develops, your "safety net" grows larger. Early on, if there's a good offer, you might not be able to refuse. But when you have accumulation, ability, and a way out, ambition also grows. Accepting an acquisition means giving up your original dream; that feeling of "all visions ending right here" is far stronger than short-term ease. Therefore, unless conditions are extremely favorable, it's hard for me to make such a choice.

Host: Many people consider you an "elite developer." What do you think your core advantage is?

Dax: Frankly, quite a few colleagues around me are stronger than me in technical execution; I may not be the best programmer. My advantage lies more in having a holistic perspective, being able to anticipate trends and make reasonable judgments. My two co-founders are also good at this, and we push each other. We're committed to finding the underlying patterns in a complex industry, distinguishing fundamentals that hold long-term from beliefs that only hold temporarily. The team has invested a lot of time in this, and I often have deep discussions with friends about it. This way of thinking is transferable: it applies to programming, running a business, personal decisions, and recruiting. This might be my true advantage.

Host: The abilities you just mentioned, are they innate? Or did you cultivate them deliberately later? If deliberately cultivated, can anyone improve through effort?

Dax: When you talk with top people in the industry, they seem to have innate talent, but dig deeper and you find most of them started out ordinary and improved gradually through sustained investment. For me, the core of this ability lies in truly caring about whether your own understanding is ultimately correct, rather than winning the current argument; that is, in building accurate models of the world. If that's the goal, the required actions follow from it. The key is keeping your thinking clear and recognizing yourself and your own insecurities. Insecurity often distorts judgment, making people cherry-pick evidence to match what they want to believe; overcoming this takes long-term growth.

When I was young, I lacked a sense of security and my ability to see problems clearly was weak. As confidence and achievements accumulated, my thinking patterns kept improving. At the same time, you need to treat incoming information cautiously, avoiding the kind of information overload that narrows your thinking and traps you in a single cognitive environment. Maintaining self-awareness and continuous reflection requires long-term persistence. In today's social environment, keeping your thinking clear faces many interferences; truly achieving it requires always holding to the pursuit of ultimately being correct.

Host: Speaking of recruitment, what's your view on "shortcuts"? For example, are degrees or big company backgrounds important?

Dax: Many people use a credential, like "I worked at Google." From experience, such credentials do bring huge influence. But what makes me uncomfortable is that their influence is amplified. Many people see brands like Google, Meta, Amazon, Apple, and automatically attach certain ability assumptions. Although there is indeed some correlation, it's far from as strong as everyone imagines.

We are in a unique position: our product is for developers and is open source, so potential candidates are often users or contributors. Making high-quality contributions in the chaotic open-source environment is itself an extremely strong filtering mechanism. For the dozen or so people we recently hired, we didn't conduct traditional interviews or look at resumes; we focused more on actual results.

From a macro perspective, large-scale recruitment indeed needs "shortcuts," such as degrees or company backgrounds. But from an individual perspective, this label can bring bias in both positive and negative directions. When I see Google on a resume, I instinctively have negative presumptions. I associate it with the motivations, values, and daily collaboration styles behind their choice of that career path. Similarly, if someone automatically looks up to another person because of this credential, that's also unfair. Ultimately, talking to a person once or twice, you can easily feel their true state. For me, either I'm very excited, or there's no resonance. At our scale, this approach is completely feasible. Ultimately, the most effective way is still to show results. If you're good enough, the world will eventually correct its underestimation of you.

Reference Link:

https://www.youtube.com/watch?v=IGsbARhERqc



AINews · AI News Aggregation Platform
© 2026 AINews. All rights reserved.