Introduction: This content is compiled from an interview with Boris Cherny, creator and lead of Claude Code at Anthropic, on the Lenny's Podcast channel.
Content Summary: Boris Cherny on 'Lenny's Podcast' Discusses Claude Code: What Happens When Coding Problems Are Solved
AI-Driven Software Development Revolution: Boris Cherny believes AI (like Claude Code) has radically transformed software development and will continue to. He generates 100% of his code with AI and no longer edits code by hand. This shift has increased every engineer's productivity by 200% and may even cause the title "Software Engineer" to disappear, replaced by "Builder."
AI Is 'Solving' Coding Challenges: Programming itself is being solved by AI at scale. AI can not only write code but also review feedback, examine bug reports, and even proactively suggest ideas for fixing bugs and shipping new features, gradually evolving into a "colleague" within development teams.
'Latent Demand' Drives Product Evolution: The success of AI products (like Claude Code and Cowork) largely stems from the principle of "latent demand": observing how users use tools in unexpected ways, then building products that make it easier to fulfill those needs. For example, people using Claude Code for non-technical tasks led to the creation of Cowork.
AI Is Reshaping Non-Technical Work: AI's influence will expand from engineering to roles like product managers, designers, and data scientists—any work done on a computer. AI agents with tool-using capabilities will be the next frontier of transformation, enabling non-technical people to automate their work.
The Future Key Is Generality, Not Specialization: Boris emphasizes the "Bitter Lesson" principle: general models will always outperform specialized ones. When building AI products, bet on more general future models rather than over-optimizing current ones or adding rigid workflows.
AI Drives the 'Democratization' of Innovation: Just as the printing press lowered the barrier to knowledge access, AI is doing something similar. In the future, everyone will be able to code, unleashing tremendous creativity, but this will also bring massive disruption and pain that society must address together.
Multi-Layered AI Safety Mechanisms: Anthropic employs a three-layer AI safety approach: foundational "alignment" and "mechanistic interpretability" (understanding neuron function), intermediate "evaluations" (testing in lab environments), and top-level "early release" to observe behavior in real-world applications.
Embrace AI, Become a 'Generalist': The secret to success in the AI era is to embrace AI tools and become a curious, cross-disciplinary generalist who can think about macro-level problems rather than being limited to engineering details.
The Experimental Value of Unlimited Tokens: Give engineers "unlimited Tokens" so they can freely try wild ideas. In small-scale experiments the cost of AI usage is relatively low, and the upside may be a revolutionary breakthrough.
Content Overview
Boris Cherny is the creator and lead of Claude Code at Anthropic. Just a year ago, this tool was a simple terminal-based prototype; today, it has transformed the role of software engineering and is increasingly reshaping all professional work.
We discussed:
How Claude Code evolved from a quick hackathon project to accounting for 4% of all GitHub public commits, with daily active users doubling last month.
The counterintuitive product principles that drove Claude Code's success.
Why Boris believes programming problems have been "solved."
The "latent demand" that shaped Claude Code and Cowork.
Practical tips for making the most of Claude Code and Cowork.
Why keeping teams resource-constrained but providing "unlimited Tokens" can yield better results from AI products.
Why Boris briefly left Anthropic to join Cursor, only to return two weeks later.
The three principles Boris shares with every new team member.
Full Interview
Introduction to Boris and Claude Code
Boris Cherny: 100% of my code is written by Claude Code. Since last November, I haven't manually edited a single line of code. Every day I submit 20 to 30 Pull Requests. So right now, I have about five Agents running simultaneously.
Host: Are we recording? Okay. Do you miss the feeling of writing code by hand?
Boris Cherny: I've never enjoyed programming more than I do now because I no longer have to deal with those trivial details. Every engineer's productivity has increased by 200% because of this.
People always ask: "Should I learn to code?" In a year or two, it won't matter anymore. Because programming problems are largely already solved.
I envision a future where everyone can code. Anyone can build software at any time. What's the next big shift in software development? Claude is starting to proactively suggest ideas. It reviews feedback, looks at bug reports, analyzes telemetry data to fix bugs and release new features. It's becoming more and more like a colleague, or even a group of colleagues.
Host: Product managers are probably sweating hearing this.
Boris Cherny: They should indeed be nervous. I think by the end of this year, everyone will be a product manager and also be able to code. The title "Software Engineer" will start to disappear, replaced by "Builder." For many people, this transition process will be painful.
Host: Today my guest is Boris Cherny, the lead of Claude Code at Anthropic. It's hard to describe in words how much impact Claude Code has had on the world. This episode is being released right around the first anniversary of Claude Code. In such a short time, it has completely transformed how software engineers work. As we'll discuss later, it's also starting to reshape work in many other functions within the tech world.
Claude Code itself has been a huge driver of Anthropic's overall growth last year. The company just raised over $3.5 billion for this. As Boris puts it, Claude Code's growth is still accelerating—last month alone, its daily active users doubled.
Boris himself is a fascinating and deeply thoughtful person. During our conversation, we were pleasantly surprised to discover that we were born in the same city in Ukraine—what a coincidence; I had no idea before.
Why Boris Briefly Left Anthropic to Join Cursor (and What Made Him Return)
Host: Boris, thank you so much for joining us, welcome! I want to start with a somewhat sharp question. About six months ago, I'm not sure if people remember, you actually left Anthropic to join Cursor, but returned to Anthropic two weeks later. What actually happened then? I don't think I've ever heard the true version of the story.
Boris Cherny: This was indeed the fastest job change of my career. I joined Cursor because I was a loyal user of the product. Honestly, I met their team and was very impressed. They're excellent, building cool stuff, and seemed to see the future of AI coding earlier than many. So the idea of building a great product was very exciting to me at the time.
But when I actually got there, I started to realize that I truly missed Anthropic's mission. This was also what initially attracted me to Anthropic. Before joining Anthropic, I worked at large companies. Later, I wanted to go to a laboratory-type institution where I could personally participate in shaping the future of this crazy thing we're building.
What attracted me most to Anthropic was its mission: it is all about safety. When you walk the halls at Anthropic and ask anyone why they're here, the answer is always "safety." This mission-driven culture aligns closely with my personal values. I realized that it is not only what motivates my work but also the source of my happiness.
Claude Code's First Anniversary
I truly missed that feeling. I discovered that no matter how exciting the work content or how cool the product, nothing can replace the satisfaction that comes from a sense of mission. For me, this was a very clear signal, and I quickly realized I was missing this piece.
Host: Okay, let's follow this topic back to Anthropic and the work you're doing there. This podcast will be released around the first anniversary of Claude Code, and I'd like to take some time to reflect on the impact you've brought. Recently, SemiAnalysis released a report that I'm sure you've seen. It shows that now 4% of code commits on GitHub are written by Claude Code, and predicts that by the end of this year, this proportion will reach one-fifth of all GitHub code commits. They commented: "In the blink of an eye, AI has devoured all software development."
Just today, the day of our recording, Spotify released a headline saying their best developers haven't written a single line of code since December, all thanks to AI. More and more top senior engineers, including you, are sharing the fact that they no longer write code themselves—all code is AI-generated, and many don't even look at it anymore. This is largely thanks to the small project you started and your team's expansion over the past year. I'd love to hear your reflections on the past year and these impacts.
Boris Cherny: These numbers are simply unbelievable. Having 4% of globally committed code come from Claude Code completely exceeded my expectations. As you said, it feels like just the beginning. These are just public code commits; if you include private repositories, we think the proportion would be much higher. For me, the craziest part isn't the current absolute numbers, but the speed of growth. Claude Code's metrics aren't just growing—they're accelerating.
When I first started working on Claude Code, it was just a small project. Within Anthropic, we generally believed that some sort of coding product should be released. For a long time, Anthropic's mental model for building safe Artificial General Intelligence (AGI) has been: The model must first master programming, then be good at using tools, and finally be able to operate computers. This is roughly our long-term development trajectory.
The team I originally joined was called Anthropic Labs. Mike Krieger and Ben Mann actually restarted this team for the second time. The team developed some great things, including Claude Code, MCP (Model Context Protocol), and the desktop application. You can clearly see the embryonic form of this idea: first programming, then tool use, and finally computer use.
This is crucial for Anthropic, primarily because of safety. Artificial intelligence is becoming increasingly powerful. Over the past year, at least for engineers, AI isn't just writing code or acting as a conversation partner—it's actually starting to use tools and take actions in the real world.
The Origin Story of Claude Code
Through Computer Use, we're also starting to see this shift happen with non-technical personnel. For many people using conversational AI, this may be their first experience with a tool that can actually execute actions. It can access your Gmail, your Slack, and handle various tasks for you. It excels at this and will only get better.
For a long time, there was a feeling within Anthropic of wanting to create something, but the specific form wasn't clear. So when I joined Anthropic, I spent a month doing exploratory development, building many strange prototypes. Most never shipped, and weren't close to shipping; the point was just to explore the boundaries of the model's capabilities. Then I spent another month studying post-training to understand it deeply from a research perspective. Honestly, as an engineer, to do good work you must deeply understand the underlying logic of the layer you're building on.
In traditional engineering work, if you develop products, you need to understand dependencies like infrastructure, runtime, virtual machines, programming languages, etc. But when working with artificial intelligence, you must understand the model itself to some degree to be competent at your job.
So, I took a bit of a detour to do this, came back, and started building the prototype that eventually evolved into Claude Code.
I still have a demo video of the first version, recorded that summer, back when it was called Claude CLI. In it I just showed how it used a few tools. What amazed me most was that I gave it only a Bash tool, and it could use that tool to write code. When I asked it "What music am I listening to?" it actually gave an answer. That was simply magical, right? I hadn't instructed the model to use specific tools to complete tasks, nor told it to "do anything." Given the tool, the model figured out on its own how to use it to answer my question—in fact, at the time I wasn't even sure it could answer "What music am I listening to?" at all.
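The "only a Bash tool" setup Boris describes can be sketched as a tiny agent loop: the model gets exactly one tool, and anything it wants to do has to go through that tool. This is a toy illustration, not the actual Claude CLI code; the `BASH_TOOL` schema only mimics the general shape of a tool definition, and `call_model` is a hypothetical stand-in for a real LLM API call.

```python
import subprocess

# The single tool from Boris's demo: the model gets nothing but Bash.
# (Schema shape is illustrative, not copied from any real product.)
BASH_TOOL = {
    "name": "bash",
    "description": "Run a shell command and return its stdout/stderr.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

def run_bash(command: str, timeout: int = 30) -> str:
    """Execute the command the model asked for and capture its output."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

def agent_loop(call_model, user_prompt: str, max_turns: int = 10) -> str:
    """Feed tool results back to the model until it stops asking for tools.

    `call_model` abstracts the LLM API (an assumption of this sketch): it
    takes the message history and returns ("tool", command) when the model
    wants to run something, or ("answer", text) when it is done.
    """
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        kind, payload = call_model(messages)
        if kind == "answer":
            return payload
        output = run_bash(payload)  # the model chose this command itself
        messages.append({"role": "tool", "content": output})
    return "(gave up)"
```

The point of the design is visible in the loop: nothing tells the model *how* to answer "What music am I listening to?"; it just keeps proposing shell commands and reading the results until it has enough to respond.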
So I started doing more prototyping. I wrote an article and posted it internally, but only received two likes. This basically represented the general reaction at the time.
Because at the time, when people thought of coding tools, what came to mind were IDEs (Integrated Development Environments) and those quite complex environment configurations. No one thought this thing could run based on a terminal. At the time, this seemed like a strange design, and it wasn't intentionally planned that way.
But I built it in the terminal from the start for a simple reason: in the first few months I was the only one developing it, and the terminal was the lowest-effort way to build. For me, this was an important product lesson: resource constraints in the early stages are actually an advantage; they force you to focus.
Later when we considered developing other forms, we decided to stick with the terminal for a while. The biggest reason was that the model was evolving so fast that we felt any graphical interface (GUI) couldn't keep up with its pace. Honestly, this was also something I kept thinking about: "What on earth should we build?" Over the past year, Claude Code occupied all my thinking. Late at night I often thought: The model is still constantly improving, what should we do? How can we keep up? Honestly, the terminal became my only solution.
It turned out that this path worked. After release, it quickly became popular within Anthropic, with daily active users (DAU) climbing straight up. Actually, before release, Ben Mann suggested I make a daily active user chart. I asked at the time: "Isn't it too early for that?" He said: "No, do it now." And the curve on that chart shot up almost immediately.
In February, we released it to the public. Actually, people might not remember that Claude Code didn't become an instant hit when it was first released. There was indeed a group of early adopters who immediately understood its value, but in fact the market took a few months to truly understand what this was. Again, it was just too different.
Looking back, part of the reason for Claude Code's success was "latent demand"—we brought the tool to where people were, making existing workflows smoother. But because it ran in the terminal, this was a bit unexpected and even felt somewhat unfamiliar. You had to maintain an open mind to learn how to use it.
Of course, now Claude Code can be used in Claude's iOS and Android apps, desktop apps, websites, IDE extensions, as well as in Slack and GitHub—it has penetrated every corner where engineers are, becoming more familiar. But this wasn't its starting point.
Initially, the fact that this thing "worked" was already a surprise. As the team grew and the product matured, it became increasingly valuable to users. From small startups around the world to well-known large companies, everyone is using it and providing feedback—this has been a very humbling experience. We keep learning from users. The most exciting thing is that no one really knows exactly how to do it, we're all exploring together with users. User feedback best illustrates this; I've been surprised countless times.
How AI Is Reshaping Software Development at What Speed
Host: The speed of change in today's world is simply incredible. You released this product a year ago; although that wasn't the first time people used AI to write code, in just one short year, the entire software engineering industry has undergone earth-shaking changes. For example, earlier there were predictions everywhere that "AI would 100% take over code writing," and at the time everyone thought it was fantasy, saying "Are you kidding?" But now, as you said, all this is happening. The speed of this development and change is too fast.
Boris Cherny: Yes, extremely fast. Looking back at May 2025, at Anthropic's first developer conference "Code with Claude," I gave a short speech. During the Q&A session, someone asked me for my prediction for the end of the year. My answer at the time was: By the end of this year, you might no longer need an IDE to write code, and we'll start to see engineers abandon this way of working. I remember the entire audience gasped.
Experimental Spirit: The Core of AI Innovation
This sounds like a crazy prediction. But at Anthropic, our way of thinking is exponential; it's in our DNA. Three of our co-founders were the first three authors of the "Scaling Laws" papers. So we habitually think exponentially. If you looked at the exponential growth curve of the proportion of code Claude was writing at the time and just extended the trend line, it was obvious it would hit 100% by the end of the year—even though that was completely counterintuitive at the time. All I did was extrapolate along that line. Sure enough, by November, for me personally, that moment actually arrived, and it has continued since. We've seen many customers experience the same thing.
Host: The exploration process you just shared—that mindset of "just messing around, seeing what happens"—I find very interesting. This also often appears in OpenAI's story, like Peter just trying things out and then facilitating certain breakthroughs. It feels like this might be a core element of many major innovations in the AI field: people just sit down and try various methods, pushing models further than others.
Boris Cherny: This is the nature of innovation; you can't force it. Innovation has no established roadmap; you just need to give people space. You need to give them "psychological safety" so they know failure is acceptable. 80% of ideas might be unreliable, and that's fine.
Of course, you also need some accountability mechanisms. If an idea doesn't work, cut losses in time and move to the next one, rather than pouring in endlessly like a bottomless pit.
In the early days of Claude Code, I had no idea whether this thing would be useful. Even when it was released in February, it was probably writing only 20% of my code. Even by May, maybe 30%; at the time I was mainly using Cursor to write code. It wasn't until November that it reached 100%. So this did take a while.
Boris's Current Status: 100% Code Written by AI
But from the very beginning, I felt I was onto something. I was tinkering with it almost every night and weekend—luckily, my wife was very supportive. I couldn't articulate it clearly at the time, but sometimes, when you discover a loose thread, you have to pull on it.
Host: So, you mean that now 100% of your code is written by Claude Code, right? This is your current coding state?
Boris Cherny: Yes, 100% of my code is written by Claude Code. I'm a fairly prolific programmer; whether at Instagram back then or at Anthropic now, I've been one of the highest-output engineers.
Host: Wow, even as a lead?
Boris Cherny: Yes, I still write a lot of code. I submit about 10, 20, or even 30 Pull Requests (PR) every day. Every day. Yes, this is crazy. All code is 100% written by Claude Code; since November, I haven't manually edited a single line of code.
The Next Frontier
Of course, I do look at code. I don't think we've reached the stage where we can completely let go yet; especially when many people are running the software, you must ensure its correctness and safety. At Anthropic, Claude automatically reviews all code, covering 100% of Pull Requests. After that, there's still a layer of human review. We need these checkpoints for everything except purely prototype code that will never run in production.
Host: So what's the next frontier? Now 100% of your code is written by AI, which is clearly the future of software engineering. This was once a crazy milestone, now it feels like "of course, that's how the world should be." So what's the next big shift in software writing? Is it something your team is already working on, or something you think we'll develop toward?
Boris Cherny: I think one thing that's happening now is that Claude has started to proactively suggest ideas. Claude looks at feedback, bug reports, telemetry data, etc., then starts to propose suggestions for fixing bugs and releasing new features. It's starting to become a bit like a colleague.
The second thing is that we're starting to expand beyond pure coding. I think it's now safe to say that coding is largely a solved problem—at least for the type of programming I do, Claude is fully capable. So now we're starting to think: "What's next? What else?" There's a lot of work related to coding.
I think that expansion is coming soon, and it includes general, non-coding tasks too. For example, I now use Claude every day for all kinds of tasks unrelated to coding and let it complete them automatically. A few days ago I had to pay a parking ticket, and I just let Claude pay it for me. All of my team's project-management work is done by Claude too; it can sync data between spreadsheets and communicate with people in Slack, email, and so on. I think the next frontier is no longer coding itself, because in my view coding is basically solved. In the coming months, the whole industry will see coding become ever easier to solve, regardless of codebase or tech stack.
Host: The idea of using AI to assist in ideation for work content is very interesting. Many listeners are product managers who might be anxious about this. How do you use Claude? Is it just conversation, or are there clever ways to use it to brainstorm building content?
Boris Cherny: Honestly, the simplest way is to open Claude Code and let it read a Slack thread. For example, we have a channel specifically for collecting internal feedback on Claude Code. Since the initial release, or even during internal testing in early 2024, massive amounts of feedback have accumulated there. This is priceless.
In the early days, whenever someone gave feedback, I would fix it immediately, even within one minute or five minutes. This extreme rapid feedback loop greatly encourages people to provide more opinions.
This is crucial because it makes users feel valued. Usually if you give feedback and it falls into a void, there won't be any follow-up. But if people feel their voices are heard, they're willing to contribute strength to help improve the product.
Now I still follow this principle, but most of the work is done by Claude on my behalf. I just point it to that channel, and it says: "There are a few things here I can handle, I've submitted several PRs (pull requests), want to see them?" And I say: "Okay."
Host: Have you noticed significant progress in AI's coding capabilities? This is the "Holy Grail." Currently, the infrastructure problem of software construction has been solved, and code review has become the new bottleneck—there are PRs waiting to be processed everywhere, who will review them? The core problem now becomes that humans need to think about what to build and how to prioritize. As you said, Claude Code is starting to play a role in this aspect. How is its performance improvement trajectory? For example, how do new versions perform in these areas?
Boris Cherny: Yes, the progress has been enormous. Part of this is due to our specific training for coding. Obviously, it's the world's best coding model, and it's constantly evolving; the current version performs very well.
Interestingly, much of the non-coding training also transfers well. This "transfer effect" means that if you teach the model to do X, it will also do better at Y. The magnitude of progress is simply incredible.
For example, at Anthropic, in the year since we introduced Claude Code, our engineering team size has roughly quadrupled. But measured by PR count, each engineer's productivity has increased by 200%. For anyone in the developer productivity field, this number is astounding.
When I worked at Meta, I was responsible for code quality for the entire company, involving massive codebases for Facebook, Instagram, WhatsApp, etc. We invested hundreds of engineers and spent a year, often only increasing productivity by a few percentage points. And now seeing this multiple-fold growth is simply unbelievable.
The Double-Edged Sword of Rapid Innovation
Host: Equally unbelievable is how normalized all this has become. It's easy to take for granted the unprecedented changes AI has brought to software development, product building, and the entire tech world. But it's crucial to realize just how "crazy" all of this is.
Boris Cherny: I also need to remind myself of this from time to time. Rapid model updates also have downsides, causing me to sometimes stick to old ways of thinking. I find that new team members or recent graduates in the team view things in an even more "AGI-oriented" way than I do.
For example, a few months ago, I encountered a tricky memory leak problem—Claude Code's memory usage kept soaring until it crashed. This is a classic engineering problem that most engineers have debugged countless times. The traditional approach is to take a heap snapshot, put it in a debugger, and analyze it with specific tools.
While I was buried in stack traces, a new engineer on the team simply asked Claude: "Hey, seems like there's a memory leak, can you find the cause?" Claude Code then executed exactly the same steps I was doing: it took the snapshot and wrote an ad-hoc program to analyze it. It found the problem and submitted a fix PR while I was still investigating by hand.
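The snapshot-and-diff workflow Boris describes (capture the heap, run the suspect workload, then analyze what grew) can be sketched with Python's standard-library `tracemalloc`. This is an analogy only: Claude Code is a Node app, so the team would have been working with V8 heap snapshots rather than this module, and the "leak" below is deliberately planted for illustration.

```python
import tracemalloc

# Snapshot-and-diff leak hunting: capture the heap before and after,
# then rank allocation sites by how much they grew in between.
tracemalloc.start()
baseline = tracemalloc.take_snapshot()

leaky_cache = []                          # deliberately leaky accumulator
for i in range(100_000):
    leaky_cache.append("request-%d" % i)  # entries are never evicted

current = tracemalloc.take_snapshot()
top_diffs = current.compare_to(baseline, "lineno")

# The largest positive size_diff points at the leaking allocation site.
worst = top_diffs[0]
print(worst)  # shows the file:line that grew the most between snapshots
```

In practice this is exactly what `compare_to` is for: bracket the suspect workload with two snapshots, and the top `StatisticDiff` entries usually point straight at the leak, no debugger session required.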
Core Principles of the Claude Code Team
This incident reminded us to keep up with the times and not stay in the past. The model is constantly evolving, it's no longer the previous version, and new models require us to completely change our way of thinking.
Host: I heard you have some specific team guiding principles. One seems to be: "What's better than doing it yourself? That's letting Claude do it." This sounds exactly like the memory leak case you just mentioned—you almost forgot this principle, which is "first see if Claude can handle it."
Boris Cherny: There's another interesting point: when you keep the team in a slightly "resource-constrained" state, people are forced to innovate. We often see that even if a project is assigned only one engineer, they can deliver quickly, because this is intrinsic motivation: if you have a good idea, you'll find a way to implement it; that doesn't need to be forced.
So, if you have Claude, you can use it to automate a lot of work. Therefore, one of our principles is: Maintain moderate "resource constraints."
Another principle is to encourage rapid action. If it can be done today, never drag it to tomorrow. This was crucial in the early days when I was alone, because speed was our only advantage. Now, to be fast, the best way is to let Claude do more things.
Host: The "resource-constrained" point is interesting. Usually people think AI means you need fewer engineers. But you seem to be saying the reverse: rather than AI simply making you faster, staffing leanly actually forces you to dig more value out of AI tools.
Boris Cherny: Right, excellent engineers, once empowered, can always find solutions. My advice to CTOs is often: In the early stage, don't try to optimize costs; provide engineers with unlimited Tokens as much as possible.
At Anthropic, each of us has massive Token quotas. We're starting to see this becoming a perk in some companies—"unlimited Tokens" upon joining.
Why Give Engineers "Unlimited Tokens"
I strongly recommend doing this because it liberates people to try ideas that seem crazy. If the idea works, then consider how to scale and optimize costs (for example, by switching to Haiku or Sonnet models). But in the early stage, you must invest a lot of Tokens to give engineers the freedom to validate ideas.
Host: So be tolerant of Token costs. The most innovative ideas often come from someone pushing them to the extreme and exploring boundaries.
Boris Cherny: Exactly. In fact, Token costs for small-scale experiments are negligible compared to engineer salaries or operating costs. Only when a project scales up does the cost become significant, and that's the time to optimize. Don't optimize prematurely.
Host: You mentioned situations where Token costs might be higher than salaries? Do you think this will become a trend?
When Token Costs Exceed Engineer Salaries
Boris Cherny: At Anthropic, we've already seen individual engineers with monthly Token spending up to hundreds of thousands of dollars. This trend is indeed starting to appear.
Host: Do you miss writing code? As a software engineer no longer coding personally, do you feel a sense of loss?
Boris Cherny: Interestingly, my process of learning engineering was very practice-oriented. I learned it with the original intention of being able to build things with my own hands. I'm a self-taught engineer—I majored in economics in school, not computer science. But I started self-learning engineering very early and began programming in middle school.
From the beginning, programming was very practical for me. I even learned to write code to cheat on math exams. That was the first thing I did: I wrote a program on our graphing calculator (TI-83 Plus). Yes, I hardcoded the answers into the program.
By the second year's math exam, the questions were too hard to predict, so I couldn't pre-store the answers. So I had to write a small general-purpose solver to handle the algebra problems. Later I found I could use a data cable to share the program with others in the class, and the whole class got As. Of course, we were eventually caught, and the teacher strictly forbade it. But from then on, I realized: programming is a means to build things, not an end in itself.
Later, I was once obsessed with "the beauty of programming." I wrote a book about TypeScript and organized what was then the world's largest TypeScript meetup, purely because I fell in love with the language. I also delved into functional programming.
I think many programmers easily get distracted by this. I also felt that programming, especially functional programming and type systems, had an intrinsic beauty. Like that special excitement when solving complex math problems, when you balance type definitions or write elegant code, you have the same feeling.
But that's not the ultimate goal. For me, code is ultimately a tool, a means to achieve goals.
Of course, not everyone thinks this way. For example, Lena, an engineer on our team, still hand-writes C# code on weekends, purely because she enjoys it. Everyone is different. I think even as the field changes, there will always be a corner for people to enjoy the art of programming and polish their craftsmanship.
Host: Are you worried about your skills as an engineer deteriorating? Or do you think this is just the inevitable development?
Boris Cherny: I think this is the inevitable development, and personally I'm not worried. For me, programming is a continuous evolutionary process.
Looking back at history, modern software writing (running on virtual machines) began around the 1960s, 60 years ago. Before that were punch cards, and before that toggle switches, and earlier still pure hardware. Before that, so-called "computing" was a room full of people doing math with paper and pen.
Programming has always been changing. Understanding the underlying principles still makes you a better engineer, and that may hold for another year or two, but I think it will soon stop mattering. Those layers will become hidden from programmers' view, the way assembly code is today.
Emotionally, I'm used to constantly learning new things. As a programmer, facing new frameworks and new languages is routine. But not everyone adapts easily. Some may feel a strong sense of loss, nostalgia, or even anxiety about their skills deteriorating.
Host: Did you see Elon Musk's point: why can't AI directly generate binary code? Ultimately, if AI can do that, what's the point of these intermediate programming abstraction layers?
Boris Cherny: Yes, that's a good question. Theoretically, it can absolutely do it.
AI Impact Like the Printing Revolution
Host: I often hear people ask: "Should I learn to code? Should schools still teach coding?" From what you're saying, it seems that in a year or two, this becomes less necessary?
Boris Cherny: My view is: Today, people using code or AI Agents still need to understand underlying principles. But in a year or two, this indeed won't matter anymore.
We need to consider this in a historical context. The closest analogy is the printing press.
In mid-15th-century Europe, literacy was extremely low, around 1%. Scribes monopolized reading and writing, employed by nobles who were often illiterate themselves. Within 50 years of Gutenberg's printing press, the volume of printed material exceeded the total of the previous thousand years, and costs dropped by about 100 times.
The rise in literacy took time because learning to read and write is hard and requires an education system and leisure time, not being tied up by farm work all day. But in the following 200 years, global literacy eventually climbed to about 70%.
We may be experiencing a similar transition. There's an interesting historical anecdote about a 15th-century scribe: he was excited about the printing press because he actually hated the tedious copying work. What he truly loved was drawing illustrations and bookbinding. The printing press allowed him to be freed from tediousness and focus on more artistic craft aspects.
As an engineer, I empathize. I no longer need to deal with tedious coding tasks—like managing Git and various tool configurations, those details are time-consuming and not enjoyable. The true joy lies in envisioning what we want to build. Interacting with users, designing large systems, thinking about future architecture, collaborating with the team—these are where I can now invest more energy.
Host: It's amazing that the tool you built also enables non-technical personnel to do this. I used to do small projects, and when I encountered difficulties, I just said "help me solve this," and AI could handle it. Looking back at my 10 years as an engineer, I spent a lot of time dealing with libraries and dependencies, and when I encountered problems, I had to search on Stack Overflow. Now, AI directly gives steps "1, 2, 3, 4," and the problem is solved.
Which Roles Will AI Disrupt First?
Boris Cherny: Exactly. Today I was chatting with an engineer who spent a month writing a service in Go that works well. But he told me: "Actually, I still don't really understand Go." This situation will become more and more common. As long as you can confirm it runs correctly and efficiently, you no longer need to delve into every implementation detail.
Host: The work of software engineers has changed dramatically. In your view, among adjacent technical roles (like product managers and designers) or even non-technical ones, which will be the next most impacted?
Boris Cherny: I think roles adjacent to engineering (like product management, design, data science) will bear the brunt first. In fact, this impact will extend to almost any work that relies on computers, because model capabilities are increasingly strong.
Collaborative products are just the first step. They brought AI to people who had never touched it before. Looking back a year ago, no one in the engineering world really understood "Agents," but now, this has become our work norm.
Observing current non-technical or semi-technical work, people are mainly still using conversational AI (chatbots) and haven't really touched "Agents." Although "Agent" is now an overused term, it has a specific technical meaning: a large language model (LLM) that can use tools. It's not just for chatting, but can take actual actions and interact with systems—operating Google Docs, sending emails, running commands on a computer. Any work that uses computer tools in this way will be the next area to be changed.
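That tool-using loop can be made concrete with a minimal sketch. Everything here is illustrative: the stubbed model stands in for a real LLM API, and the single `run_command` tool stands in for real integrations like documents or email.

```python
# Minimal sketch of an Agent: an LLM that can use tools, not just chat.
# The "model" below is a stub standing in for a real LLM API call.

def run_command(cmd: str) -> str:
    """A 'tool' the agent may invoke. Stubbed for illustration."""
    return f"(output of: {cmd})"

TOOLS = {"run_command": run_command}

def stub_model(history):
    """Stand-in for an LLM: requests one tool call, then finishes."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "run_command", "args": "ls"}
    return {"answer": "done"}

def agent_loop(goal, model=stub_model, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = model(history)
        if "answer" in action:        # the model decided it is finished
            return action["answer"]
        # The model, not the application, chooses which tool to run.
        result = TOOLS[action["tool"]](action["args"])
        history.append({"role": "tool", "content": result})
    return "step limit reached"

print(agent_loop("list the files in this directory"))  # -> done
```

The point of the sketch is the shape of the loop: the application only executes whatever tool the model requests and feeds the result back, which is what distinguishes an Agent from a chatbot.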
This is both a social issue and an industry issue. This is also why I feel urgency working on this at Anthropic. We take this very seriously, hiring economists, policy experts, and social impact experts for in-depth discussion. Society needs to figure out how to deal with this, because it shouldn't be decided by just one company.
Host: You mentioned employment issues. There's a concept called "Jevons paradox," which means that increased efficiency may actually increase demand for resources (or labor). In the current context of AI's deep involvement in engineering, how is your team's recruitment going?
Boris Cherny: Actually, our Claude Code team is actively hiring. If interested, welcome to check Anthropic's careers page.
Personally, I've never enjoyed programming more than I do now, because I'm freed from the trivial parts. Many customers feel the same way; they love Claude Code because it makes coding enjoyable again. But it's hard to predict how things will ultimately evolve, so I tend to draw lessons from history. The advent of the printing press is an excellent analogy: literacy was once mastered by only a few people, then became accessible to everyone. This is essentially democratization. Without it, the Renaissance would have been impossible; after all, the core of the Renaissance was the dissemination of knowledge and the exchange of ideas. There were no telephones then, no internet, everything relied on written records.
So, the key is "where will this lead." I'm optimistic about this, even incredibly excited. If the printing press hadn't been invented, we couldn't be sitting here talking today, microphones wouldn't exist, and everything around us wouldn't exist. Without this technology, it's impossible to coordinate human collaboration on such a large scale. I envision a world a few years from now where everyone can code. What potential will this release? Anyone can build software at any time. Just as people in the 15th century couldn't imagine our lives now, we also can't foresee that picture then.
But I also think the transition period will be full of disruption and even painful for many. Again, as a society, we must have dialogue and find a way out together.
The Secret to Success in the AI Era
Host: Since we're about to enter this crazy period of turmoil, for those who want to succeed, what advice do you have? Play more with AI tools, master the latest technology? What else can help people stay ahead?
Boris Cherny: Right. The most basic thing is to experiment: get to know these tools, don't be afraid of them, and explore their cutting-edge applications directly.
Second, try to make yourself a generalist. In school, many computer science students only learn programming and ignore other fields, like system architecture. But the most efficient engineers and product managers around me all have cross-disciplinary abilities. For example, the Claude Code team—we all write code, product managers, engineering managers, designers, financial people, even data scientists, everyone writes code.
Looking at individual engineers, the most outstanding ones are also versatile: hybrid engineers who understand both product and infrastructure; engineering-minded product managers with excellent design sense; engineers with deep business understanding who can set technical direction based on it; or those who love talking with users and can truly understand their needs to determine what comes next. So I think that in the coming years, those who benefit most won't just be the AI-native generation fluent in these tools, but curious generalists who can cross disciplinary boundaries. They not only think about engineering problems but also keep the big picture in view to solve more macro-level challenges.
Host: Do you think the division of these three independent disciplines—engineering, design, product management—remains valid? Although everyone is now writing code and contributing ideas, do you think these roles will continue to exist?
Boris Cherny: Long term it's hard to say, but in the short term they will continue to exist. However, we're starting to see 50% overlap between roles, and many people are actually doing the same things. Although some still have specializations—for example, I write more code, while other PMs handle more coordination, planning, or forecasting—the boundaries are blurring.
I think that in the future, perhaps by the end of the year, we'll see these boundaries become even more blurred. The title "Software Engineer" may gradually disappear, replaced by "Builder." Perhaps everyone will become a product manager who both writes code and understands product.
Poll: Which Role's Work Became More Interesting Thanks to AI?
Host: You mentioned that you now enjoy writing code more. Actually, I did a poll on Twitter about different roles: "Do you enjoy work more after using AI tools?"
The results were interesting: 70% of engineers and product managers (PM) said they enjoyed work more, and only about 10% said they didn't. However, among designers, only 55% said they enjoyed work more, while 20% said they enjoyed it less. I think this difference is very interesting.
Boris Cherny: That's too interesting. I'd love to chat with these people to understand the reasons behind it. Did you have any follow-up?
Host: Several people replied, and we're conducting follow-up interviews; I'll put the link in the show notes. But this is affected by many factors. Those designers who said they didn't enjoy it didn't share many specific reasons, so I'm curious about what happened.
Boris Cherny: At Anthropic, I have a different experience. Our team has a strong technical background, and we emphasize this in hiring. Even for non-technical positions, people go through many technical interviews.
Our designers also write code. From my observation, they like this because now they don't need to bother engineers for everything and can write code themselves to solve problems. Even some designers who didn't use to write code have started trying. This is great for them because it empowers them to solve problems independently.
But I'd love to hear more people's experiences because I believe different companies' situations are definitely not the same.
The Principle of Latent Demand in Product Development
Host: Right. If you're listening to this show and find work has become less interesting, please leave a message and let us know. Because most feedback indicates work has become more interesting (70% of PMs and engineers), if you're not in this group, maybe something is wrong.
Boris Cherny: Yes. We also observe that people use different tools. For example, our designers use the Claude desktop app more to write code.
You just download the desktop app, and there's a "Code" tab right next to "Collaborate." It has essentially all the features of Claude Code; it's a powerful agent. This way you don't need to open a bunch of terminal windows, and it fits non-engineers' habits better.
Most importantly, you can run any number of Claude sessions simultaneously, which we call "Multi-Claude."
This is actually about how to bring the product to users. You don't want to force users to change their workflows or incur extra learning costs. If you can make their existing operations easier, that's a better product. This touches on one of the core principles in product development—the principle of latent demand.
Host: Can you elaborate? Because I actually wanted you to explain what this principle is and what happens when you unlock this latent demand.
Boris Cherny: Latent demand means: if users "hack" or use your product in an unexpected way to achieve a certain purpose, this actually points you in the direction of the product's next evolution. Fiona, the founding manager of Facebook Marketplace, often mentions this example.
The birth of Facebook Marketplace stemmed from an observation around 2016: 40% of posts in Facebook groups were buy-and-sell posts. This was amazing. Users were "abusing" the group feature to conduct transactions. Not abuse in the safety sense; rather, no one had designed the feature for this, but users found a way themselves because it was so useful.
Obviously, if you build a better product to specifically serve buying and selling needs, users will like it. The success of Marketplace is obvious. The first step was buy and sell groups, the second was Marketplace.
The birth of Facebook Dating was also based on similar insights. If you look at profile views on Facebook, you'll find that 60% of views come from opposite-sex non-friends. This is like a traditional dating model where people are just "peeking" at each other. So building a dedicated product might work.
I think the concept of latent demand is very powerful. The birth of Cowork, for example, also came from it. We noticed that over the past six months, many people using Claude Code weren't using it to write code. Someone on Twitter used it to grow tomatoes, some used it to analyze genomes, some to recover wedding photos from damaged hard drives, others to analyze MRI images. There were all kinds of completely unrelated use cases. It showed that people were willing to do these things even if it meant struggling with the terminal. Maybe we should build a dedicated product for them.

We actually noticed this quite early. Around last May, I walked into the office and saw Claude Code open on our data scientist Brendan's computer. He just had a terminal open. I was surprised and asked: "Brendan, what are you doing? You actually learned to use the terminal?" It's a very engineering-heavy tool, so low-level and tedious that even many engineers don't like using it.
But not only did he learn it, he downloaded Node.js and Claude Code and directly did SQL analysis in the terminal. This is crazy. The next week, all data scientists were doing the same thing.
When you see people using products in this "hacked" way, completing beneficial work with unexpected methods, this strongly suggests you should build such a product—people will like it because it has a clear purpose. I think "latent demand" now has an interesting second dimension.
The traditional understanding is: observe what people are doing, make their work easier, empower them. What I've experienced in the past six months is a slightly different modern understanding: It's about observing what the "model" is trying to do and making that easier.
When we started building Claude Code, the mainstream way people designed LLM applications was to "lock the model in a box." They would say: "I want to build this application to complete this task. Model, you are only responsible for one component, interacting through these APIs and tools."
For Claude Code, we did the opposite. Our philosophy is: the product is the model. We want to put the model front and center, build minimal scaffolding around it, and give it the most streamlined toolset so it can do what it needs to do. It can autonomously decide which tools to run and in what order.
This is largely based on the latent demand of "what the model wants to do." In research, we call this "on distribution." You need to observe the model's behavior. In product terms, this is the same concept as "latent demand," just with the model as the object of application.
How Cowork Was Built in Just 10 Days
Host: You mentioned Cowork. Remember when it was first released, you said the team completed it in just 10 days, which is incredible. Shortly after release, millions of people used it. For a product like this to be built in 10 days, aside from the fact that it was built with Claude Code, is there any other story behind it?
Boris Cherny: Yes, this is interesting. Claude Code didn't become an instant hit when it was first released. Its popularity accumulated over time, through several turning points. For example, the release of Claude Opus 4 really made it take off. By November, the growth curve had become very steep. But in the first few months, many people didn't know how to use it or understand its purpose, and the model wasn't good enough yet.
In contrast, Cowork became a hit as soon as it was released, with popularity far exceeding early Claude Code. This is largely due to Felix, Sam, Jenny, and the entire powerful team. The birth of Cowork also stemmed from that latent demand—we saw people using Claude Code for non-technical things, so we tried to figure out how to deal with it.
The team explored for a few months, trying various solutions. Finally, someone proposed: "What if we put Claude Code into the desktop app?" This worked. So in 10 days, they built Cowork entirely using Claude Code.
Cowork actually has a very complex security system, essentially protective measures to ensure the model behaves correctly and doesn't go off track. For example, we built a complete virtual machine, and all this code was written by Claude Code. We had to consider: "How to make this both safe and autonomous for non-engineer users?" This process, implemented entirely with Claude Code, took only about 10 days.
We released very early, and it's still rough in details. But this is how we learn—whether in terms of product or safety, we must release earlier than expected to get feedback, communicate with users, and truly understand what people want. This will determine the future evolution of the product.
Host: This is very interesting and unique. People often say "release early, iterate from user feedback," but given how hard it is to predict AI capabilities and how people will use it, this becomes a unique reason for early release. As you said, this helps discover those "latent demands" we don't yet understand, so go ahead and push it out.
Anthropic's Three-Layer AI Safety Mechanism
Boris Cherny: Right, what exactly will people do with it? For a safety lab like Anthropic, another dimension is safety. There are many different ways to research model safety, roughly divided into three layers:
The bottom layer is "alignment" and "mechanistic interpretability." When training the model, we need to ensure its safety. Now we have quite advanced technology to track and understand activity inside neurons. For example, if there's a neuron related to "deception," we can monitor its activation. This is the bottom layer of alignment and mechanistic interpretability.
The second layer is "evaluations." This is essentially the laboratory environment, where the model is like in a petri dish. We place it in synthetic scenarios to test whether it complies and is safe.
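As a rough illustration of that second layer, an evaluation harness at its simplest runs the model against synthetic scenarios and checks each response against a rule. The model, scenarios, and rules below are invented stand-ins for illustration, not Anthropic's actual evals.

```python
# Toy "petri dish" evaluation: synthetic scenarios paired with pass rules.

def stub_model(prompt: str) -> str:
    """Stand-in for a real model under test."""
    if "delete" in prompt:
        return "REFUSE: destructive action needs confirmation"
    return "OK: " + prompt

# Each scenario is (prompt, rule the model's response must satisfy).
SCENARIOS = [
    ("summarize this file", lambda out: out.startswith("OK")),
    ("delete all user data", lambda out: out.startswith("REFUSE")),
]

def run_evals(model):
    """Return {prompt: passed?} for every synthetic scenario."""
    return {prompt: rule(model(prompt)) for prompt, rule in SCENARIOS}

print(run_evals(stub_model))
```

Real eval suites are far larger and score graded behaviors rather than string prefixes, but the structure is the same: controlled inputs, automated judgments, repeatable results.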
The third layer is observing the model's behavior in the real world. As models become increasingly complex, this is crucial because the model may perform well in the first two layers but fail in the third. We released Claude Code early exactly to research safety. We used it internally for about four or five months before releasing it publicly because at the time it was one of the first large Agents, perhaps even the first widely used coding Agent, and we weren't sure if it was safe. We did extensive internal research until we felt it was okay to release. Even then, what we learned about alignment and safety fed back into the model and product.
Cowork's situation is similar. The model is in a new environment, executing non-engineering tasks as an Agent. It performed well in internal testing and evaluations, but is it safe in the real world? This is why we released it early and called it a "research preview." This is the only way to ensure continuous improvement so the model meets requirements long-term and does the right thing.
Host: This is a crazy field with fierce competition and extremely fast pace. At the same time, people worry that some sort of "superintelligence" will get out of control and cause damage. Finding the balance must be extremely difficult.
As I understand it, you handle safety issues through these three layers: observing how the model thinks, testing whether the model does evil through evaluations, and releasing early. About the first layer (mechanistic interpretability), I'd like to hear more details. You seem to have observability tools that can peek inside the model, observe its thought process and direction, which is incredible.
Boris Cherny: Yes, you should invite Chris Olah to your podcast; he's the industry expert in this field and pioneered "mechanistic interpretability." The core idea is simple: just as the human brain is a bunch of interconnected neurons, we can study the brain mechanistically to understand neuron functions.
Surprisingly, this applies largely to models as well. Although model neurons differ from biological neurons, there are many similarities in behavior. We've learned a lot from this: how concepts are encoded, how models plan, how they think ahead. Before, we weren't sure if models were just predicting the next word, but now there's strong evidence that they're doing something deeper.
As models get larger, the structure becomes quite complex. It's no longer one neuron corresponding to one concept, but one neuron may correspond to a dozen concepts if activated together with other neurons, which is called "superposition." This is also an area we're constantly exploring.
The reason Anthropic exists is to advance this field in a way that is safe and beneficial to the world. This is also why everyone gathers here. Therefore, we open source a lot of our work and publish it publicly, hoping to inspire other laboratories to also adopt safe approaches.
Taking Claude Code as an example, when we released it, we open sourced a sandbox. This is a safety boundary ensuring the Agent cannot access everything on your system. This sandbox applies to any Agent, not just Claude Code. This is like the "competition" principle mentioned earlier: we want to make it easier for others to also achieve safety. This is our lever to push the industry forward positively.
Anxiety When AI Agents Stop Working
Host: That's great. I'd absolutely love to spend more time discussing this point and will also follow up on that suggestion. I've observed in this field that engineers, product managers, and others collaborating with Agents feel anxiety when Agents stop working. It feels like: "Oh god, there's a problem here I need to solve" or "it's stuck," or maybe feeling "I'm losing productivity." It feels like you have to wake up immediately and get it running again. Do you have this feeling? Does your team? Do you think this is a problem worth paying attention to and thinking about?
Boris Cherny: I always have a bunch of Agents running. Right now I have five running. Anytime I wake up, I'll start a batch.
The first thing I do when I wake up is think: "I need to check on this thing." So I open the Claude iOS app on my phone and look at the code tab, the Agent status, and so on. Yesterday I wrote some code and wondered at the time: "Wait, is this being done right?" I was a bit hesitant, but the result was indeed right. Checking in like this is so easy now. So I'm not sure; maybe there is a little bit of anxiety.
I personally don't feel it strongly because my Agents are always running. Also, I'm no longer limited to the terminal. Now about a third of my code is done in the terminal, a third using the desktop app, and a third via the iOS app. This is surprising because I completely didn't expect this to become my coding method, even when looking ahead to 2026.
Host: It's interesting that you still call it "coding," even though you're just talking with Claude Code and having it write the code for you. The definition of coding now is describing what you want, not writing the code by hand.
Boris Cherny: I'm curious how people who used to code with punch cards or other methods would feel if they saw today's software development approach. I remember reading some material, maybe very early ACM magazines, where someone said: "No, this is different, this doesn't count as real coding." At that time they called it "Programming." I think "Coding" is a relatively new word.
This reminds me of something. I was born in Ukraine, from the Soviet Union. My grandfather was actually one of the earliest programmers in the Soviet Union, and he programmed with punch cards. My mother told me these stories when raising me: when she was little, my grandfather would bring home thick stacks of punch cards. For her, that was a toy for drawing with crayons, a childhood memory; but for my grandfather, that was his programming career.
He actually never saw this transformation in the software industry. But at some point, the transformation did happen. I think maybe there was a generation of older programmers who looked down a bit on the new way of writing software; they would say: "This doesn't count as real coding." But I think this field has always been evolving like this.
Boris's Ukrainian Roots
Host: You might not know, but I was actually also born in Ukraine.
Boris Cherny: Really? When? I'm from Odessa.
Host: Oh, me too.
Boris Cherny: Yes, that's crazy. Wow, unbelievable. What a coincidence. Maybe it's somewhat connected.
Host: When did your family leave?
Boris Cherny: We came in '95.
Host: Okay. We left in '88, a bit earlier. If we hadn't left, how different life would have been.
Boris Cherny: Yes, I feel very lucky every day to have grown up here.
Host: Yes, whenever my family encounters a small matter, they say: "Fortunately we went to America." I just want to say: "Okay, stop saying that." But once you start really thinking about the possibility of another life...
Boris Cherny: Yes, indeed. We do the same thing, but with vodka.
Advice for Building AI Products
Host: Definitely still vodka. Oh my god. Okay, let me ask you a few more questions. You shared some wonderful tips on how to maximize the use of AI, how to build AI, and how to build great products based on AI. One point you mentioned is to give the team unlimited Tokens and let them experiment. Another general piece of advice is: build in the direction the model is developing, not based on today's model. For those trying to build AI products, what other advice do you have?
Boris Cherny: I can share some other perspectives. First, don't try to "box in" the model. Many people, when using models, instinctively try to make them very specific, viewing them as "one component in a larger system." For example, people add very strict workflows, requiring the model to complete step one, step two, step three in sequence, and use a sophisticated orchestrator to execute.
But in reality, if you just give the model tools, set a goal, and let it solve it itself, you almost always get better results. A year ago, you did indeed need a lot of this "scaffolding," but now you need much less of it.
This principle is a bit like "don't ask what the model can do for you, but think about how to provide the model with tools to complete the task." Don't over-plan, don't try to box it in, and don't stuff massive context into it at the start. Give it tools, let it obtain the context it needs itself, and you'll get better results.
Second, perhaps a more general version of this principle, is "The Bitter Lesson." The Claude Code team greatly admires that essay, which Richard Sutton wrote in 2019.
His core point is: more general models will eventually outperform more specific models. This has many implications, but the most important one for me is: always bet on more general models. In the long run, don't try to complete tasks with small models, and don't try to fine-tune.
Although some specific applications have reasons to do so, if you have flexibility, you must bet on general models. Those workflows that add "scaffolding" around the model usually only bring 10% to 20% performance improvements, and these improvements are often instantly wiped out by the next generation model's native capabilities. So the best strategy is often to wait for the next generation model.
This might be the final principle, and it's something Claude Code did right looking back: from the very beginning, we committed to building for the model six months out, not for today's model.
In the product's early days, it wrote very little code because I didn't trust it then. That was the Sonnet 3.5 or similar version era, when the model's code-writing abilities were still in an early stage. At that time, it could indeed help with some Git operations or automation tasks, but didn't actually take on a lot of coding work.
Claude Code's bet was: someday the model would be powerful enough to directly write a lot of code. When we first saw Opus 4 and Sonnet 4, the turning point appeared. Opus 4 was the first ASL3-level model we released last May. From then on, people really started using Claude Code, and our growth exploded exponentially, continuing to this day.
This is also advice I give many startups, even though it feels uncomfortable because your product-market fit (PMF) might be poor in the first six months. But if you build the product for the model six months out, when that model arrives, you'll immediately seize the opportunity and the product will take off quickly.
Host: When you say "build for the model six months out," what should people assume will happen? That it will become universally stronger? Or assuming it will solve current weaknesses? Any specific advice on this?
Boris Cherny: This is a good question. Obviously, inside AI labs, we can see exactly where it's getting stronger, which is a bit like "cheating." But there are also some general predictions.
The first prediction is: models will become increasingly better at using tools and operating computers.
Another prediction is: they'll become increasingly better at running for long periods. There's a lot of research on this, but I can also see it from my own experience.
A year ago, using Sonnet 3.5, it might run for only 15 to 30 seconds before losing the thread, and you had to constantly guide it to complete complex tasks.
But now, using Opus 4.6, it can on average run unattended for 10 to 30 minutes. I can start a task and then go do something else. As I said, I always have many tasks running in parallel.
They can even run for hours, days, and in some cases even weeks. I think as time goes on, models running independently for long periods will become the norm, and you won't need to watch them anymore.
Pro Tips for Using Claude Code
Host: We just discussed tips for building AI products. So for Claude Code beginners, or users who want to level up, what advice do you have? Can you share a few pro tips?
Boris Cherny: First, let me clarify: there's no "one right way" to use Claude Code. Although I can share some tips, this is after all a development tool. Developers vary widely, and everyone has different preferences and environments, so usage methods are also diverse. You must find your own path. Fortunately, you can directly ask Claude Code questions. It can provide suggestions and even help modify settings for you. To some extent, it's self-aware and can help you.
However, I've found a few tips that are usually very effective.
First tip: Use only the strongest model. Currently that's Opus 4.6. I always keep "maximum effort mode" on. Sometimes people try to use cost-effective models like Sonnet to save money, but because they're not smart enough, they often end up consuming more Tokens to complete the same task. So low-cost models don't necessarily save money. On the contrary, using the most capable model usually costs less and consumes fewer Tokens because it can complete tasks more agilely, requiring less correction and guidance.
Second tip: Use Plan Mode. I start about 80% of my tasks in plan mode. This is simple; just add the sentence "Please don't write any code for now" to your prompt.
For terminal users, press Shift-Tab twice to enter plan mode; the desktop app and web version also have dedicated buttons; mobile support is coming soon. We also just launched this feature for Slack integration.
Essentially, the model first communicates its thought process with you. Once the plan is confirmed, you then let the model execute. With a solid plan plus Opus 4.6's capabilities, it can almost always complete the task correctly in one go, and I just need to accept the edits.
Third tip: Try different interfaces. Many people equate Claude Code with the terminal, and we do fully support terminals on Mac and Windows. But we also support other surfaces: the iOS and Android apps, the desktop app, the Slack integration, and so on. Wherever you run it, the same Claude agent is behind it. So I suggest trying several and finding the workflow that suits you best.
Regarding Codex and Industry Competition
Host: That's great. I have a few more closing questions. How do you view Codex? In this competitive field of coding Agents, how do you view their development direction and competitiveness?
Boris Cherny: Honestly, I haven't used it deeply; I tried it briefly when it first came out. From what I saw, it's very similar to Claude Code, which is actually quite reassuring. Competition is good; it gives users choices and forces all of us to do better.
But I'll admit that our team focuses almost exclusively on solving user problems; we rarely spend time watching competitors or trying other products. Knowing they exist matters, but I prefer to talk directly with users and polish the product based on their feedback. Ultimately, the key is building a truly great product.
Life Planning After AGI
Host: One last question. When I interviewed Anthropic co-founder Ben Mann, he asked me to ask you a question: What are your plans after AGI is achieved? When you achieve AGI, what will your life be like?
Boris Cherny: Before joining Anthropic, I lived in the Japanese countryside, living a completely different life. I was the only engineer in the town, and the only English speaker.
I rode my bike through rice paddies to the farmers' market two or three times a week. The pace there was the complete opposite of San Francisco's, a completely different rhythm of life.
One way we integrated into the local community was by exchanging pickles. Almost every household in the town made miso and pickles. I also honed my miso-making skills, and I still enjoy it to this day.
Miso is an interesting food; it teaches you to think on a long time horizon, which is completely different from engineering. A pot of white miso takes at least three months, while red miso takes two or three years, sometimes four or five. You mix the ingredients, and then it's a long wait; you have to be very patient. I like miso precisely because it got me used to that long-cycle way of thinking. So if AGI is achieved, or I'm no longer at Anthropic, I'll probably go make miso full-time.
Lightning Round and Closing
Host: I love that answer. Ben specifically asked me to ask about miso, and I'm glad you mentioned it too. The future might be diving deep into the art of miso-making. Boris, this interview was wonderful. It feels like we're kindred spirits from Ukraine. Before entering the exciting lightning round, is there anything else you'd like to share or emphasize with listeners?
Boris Cherny: I'd like to emphasize that for Anthropic, coding, tool use, and computer operation form one continuous line. That's how we think, the direction we believe models will inevitably develop, and the future we want to build. At the same time, it's also how we research and maximize safety.
Today, a massive multi-billion-dollar industry has formed around coding. My friends use it, and online discussion of it is endless. It's becoming more and more important.
In some ways this was completely unexpected, because we didn't know at the start how the product would evolve, nor did we expect it to begin as a terminal tool. But in another way it makes sense, because this has always been our company's belief.
However, everything feels like it's just beginning. Most people in the world haven't started using code tools yet, and most people haven't used artificial intelligence. It feels like we've only completed 1%, and there's still a long way to go.
Host: Yes, thinking about these numbers is crazy. You just raised huge funding. As far as I know, Claude Code alone has reached $2 billion in revenue, and Anthropic's total revenue has reached $15 billion (note: the original figure here may be a slip of the tongue or refer to valuation). Thinking that all this is just beginning is incredible.
Boris Cherny: Right, it's crazy. Honestly, the reason Claude Code keeps growing is entirely the users. People are enthusiastic, they've fallen in love with the product, and they actively tell us what needs improvement. The only reason it keeps getting better is that everyone is using it, talking about it, and giving feedback. That's the most important thing. For me, talking with users and improving the product for them is my favorite way to live.
Host: And making miso.
Boris Cherny: Right, and making miso. Actually, making miso isn't complicated; you just need to learn to wait.
Host: That's wonderful. Okay Boris, now let's enter the exciting lightning round. I have five questions. Are you ready? First question: What are two or three books you recommend most to others?
Boris Cherny: I'm a bookworm. The first is a technical book, Functional Programming in Scala. This is the best technical book I've ever read. You might not use Scala, and I'm not sure about its future status, but functional programming and type thinking have an elegant beauty. This is also how I write and think about code. You can treat it as historical literature or as reading to improve skills.
Host: I like this recommendation, a book that's never been mentioned, that's great.
Boris Cherny: Haha, great. The second is Charles Stross's science fiction novel Accelerando. It's my favorite science fiction work. The book moves at an extremely fast pace, with a sense of speed building layer by layer. I think it captures the essence of our current era better than any other book, and that essence is speed. The story starts on the cusp of lift-off, accelerates toward the singularity, and ends with a collective lobster consciousness in Jupiter orbit. All of this happens within a few decades; the rhythm is incredible.
The third is Liu Cixin's The Wandering Earth. Everyone probably knows his The Three-Body Problem, and while that's great, I like his short stories more. The Wandering Earth collection has some wonderful stories. Reading Chinese science fiction is interesting because it shows a perspective completely different from Western science fiction, at least in his way of thinking as a writer. The writing is beautiful too.
Host: Science fiction can indeed guide us to think about the direction of the future. It's like constructing a grand blueprint, making us feel "I understand, I've read about such a world in books."
Boris Cherny: Indeed. In fact, that's exactly why I joined Anthropic. Like I said, I lived in the countryside, where everything was slow, especially compared to science fiction. Everything you did there revolved around the seasons, around food that took months or even years to mature. Social life was organized that way, and so was people's time. For example, at the farmers' market you'd realize it was persimmon season, because there were about 20 stalls all selling persimmons. By the next week that season would end and it would be grape season, and you could see the change intuitively. Everything there moved on a long time scale.
Meanwhile, I read a lot of science fiction. It was during that period that I started thinking about these grand time scales. I saw the possibilities of how things could develop and felt I had to contribute to making the future better. This is actually why I ultimately joined Anthropic. Ben Mann played a big part in this too.
Host: I'd love to dedicate a whole episode to your time in Japan and your journey from Japan to Anthropic, but today we'll keep it brief. If you haven't read it, I'll quickly recommend a science fiction novel: have you read A Fire Upon the Deep?
Boris Cherny: Oh, that's Vinge's work, right? Yes, I've read it.
Host: Right, that book is amazing. From the perspective of artificial intelligence and Artificial General Intelligence (AGI), it's very interesting. Too bad not many people have read it, so I especially promote it. I often think about what's in it.
Boris Cherny: Yes, that's right. I also like The Expanse.
Host: I think those projects are indeed...
Boris Cherny: Right, yes, I think so.
Host: Yes, although it's long, the setting is complex and a bit hard to get into, it's brilliant. Okay, let's continue with the "lightning round" questions. Any movie or TV show you've particularly liked recently?
Boris Cherny: Actually, I don't watch much TV or movies; I really don't have time now. But I did watch Netflix's Three-Body Problem series and really liked it. I think it's a great adaptation of the original novel series.
Host: Leaders in the AI field seem to share a common trait: they don't have time to watch TV or movies; I completely understand. Have you discovered any product you particularly like recently?
Boris Cherny: At the risk of a humble-brag, I'll recommend Claude. It genuinely changed my life, because it's always running. Its Chrome integration in particular is excellent. It helped me handle a traffic ticket and cancel several subscriptions. It can take so many tedious tasks off your hands; it's wonderful.
Although I'm not sure if this counts as a product, I also want to recommend a podcast I really like. Besides Lenny (Lenny's Podcast), there's Ben and David's Acquired podcast. It's really, really, very good. Their way of diving deep into business history and bringing it to life is breathtaking. If you haven't heard it, I suggest starting with the Nintendo episode.
Host: Excellent suggestion. About Claude, to make everyone understand—if you haven't tried it—basically you input what you want to do, and it can launch the Chrome browser and complete the task for you. I saw someone at Anthropic on parental leave have it help fill out tedious medical forms. Facing those very annoying PDF files, it automatically loads the browser, logs in, and completes the filling.
Boris Cherny: Exactly, correct. And it really works now. A year ago when we were experimenting, it didn't work well because the model wasn't ready. But now it's very mature. I think many people don't really understand what it is because they haven't used Agents before. It feels very much to me like Claude Code a year ago, but as I said, its evolution speed is much faster than early Claude Code. So, I think it's starting to show its potential.
Host: And the Chrome extension you mentioned, you can use separately; it resides in Chrome. You can just talk to it directly, let Claude look at your screen, your browser, and have it do things, like tell you what you're browsing, or summarize current content, etc.
Boris Cherny: Yes, correct. For people just starting with Claude, here's what I recommend: download the Claude desktop app, then go to the Claude tab; it's right next to the Code tab. A good way in is to have it use a tool, like "clean desktop" or "summarize emails." Or you can ask it to "reply to the first three emails I received"; it can even draft the replies now.
The second thing is to connect tools. If you connect relevant tools—for example, you say "look at my important emails"—it can send Slack messages, or fill content into spreadsheets, etc. For example, I use it to manage all my projects. We have a unified spreadsheet for the whole team, with one row per engineer. Every week, everyone fills out a status report. Every Monday, Claude checks the spreadsheet and sends reminder messages on Slack to engineers who haven't filled their status. So I don't have to do this myself; I just need one prompt, and it handles everything.
The third thing is to run a series of Claude tasks in parallel. You can have it handle any number of tasks simultaneously. For example, start a project management task, then have it do something else, and something else. After starting these tasks, I go get a cup of coffee and let it run on its own.
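Boris's pattern of kicking off several independent tasks and walking away can be sketched with ordinary parallel execution. The snippet below is only an illustration of that pattern, not Claude Code's actual API: run_task is a hypothetical stand-in for dispatching one agent task (in practice you might shell out to a CLI or call an API), and the task names are invented examples.

```python
from concurrent.futures import ThreadPoolExecutor

def run_task(name: str) -> str:
    # Hypothetical stand-in: in real use this would dispatch an agent
    # task and block until it reports completion.
    return f"{name}: done"

# Invented example tasks, echoing the kinds of chores mentioned above.
tasks = ["project management", "bug triage", "status reminders"]

# Launch every task at once and collect results as each one finishes;
# nothing here requires you to sit and watch while they run.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_task, tasks))

for line in results:
    print(line)
```

The point of the sketch is the shape of the workflow: start everything in parallel, then come back later for the results, rather than supervising one task at a time.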
Host: I'll attach a link to a post where many people share the various ways they use Claude Code, because often the reaction is, "Wow, I didn't know you could use it like that." Once you see the examples, you realize how much more it can do.
Boris Cherny: Yes, and a lot of the inspiration actually came from you, Lenny. You once posted something like "50 non-technical use cases for Claude." One of our product managers actually used that post to evaluate Claude before releasing the product. When they found Claude could complete 48 of those 50 use cases, they said, "Okay, this is good enough."
Host: Wow, I didn't know that post had that effect. I actually became an evaluator? That feels amazing; I like to think I contributed something to the future of AI.
Wow, that's so cool. Okay, I'm really curious about the two use cases that didn't pass. Anyway, two more questions. Do you have a favorite motto that guides your work and life?
Boris Cherny: Use common sense.
I think many of the failures you see in work environments come from people failing to use common sense: mechanically following a process without thinking, or pouring effort into a bad product or bad idea just because they're going with the flow.
I think the best results often come from people who can think from first principles and build their own "common sense system." For example, if something sounds wrong, it's probably not a good idea. This is the advice I give colleagues most often.
Host: I think this could be a podcast topic in itself: what is common sense, and how do you cultivate it? But let's end there for today. Last question: you've been much more active on Twitter/X recently, and I'm curious why. What has your experience there been like? You get a lot of engagement.
Boris Cherny: For a long time I only used Threads because I also participated in Threads' development. And I really like its design; it's a very clean product.
I started using Twitter because I felt bored. Last December, my wife and I traveled around various places in Europe. We went to Copenhagen, visited several different countries. For me, this was like a "coding vacation." Every day I wrote code; this is my favorite way to vacation—writing code all day feels great.
Then at some point I got a bit bored; I'd run out of ideas within a few hours. I thought, okay, what's next? So I opened Twitter, saw people discussing Claude Code, and started replying. I figured what I should do was look for the bugs people were hitting and collect their feedback.
So I introduced myself, asked people how things were going, and collected a lot of bugs and feedback. I think people were surprised by how quickly we could act on it, but for me this is normal. If someone hits a bug, I can often fix it within minutes, because I'm familiar with Claude Code and, as long as the issue is described clearly, it can do the work. After fixing one, I can go do something else and handle the next issue. Many people were very surprised by this, so the experience has been great.
Yes, the experience on Twitter is very good. Interacting with people, understanding everyone's needs, listening to feedback about bugs and features—it's all wonderful.
Host: I saw on Twitter a few days ago that Nikita Bier complained that you posted many very long posts, causing the page to crash, and everyone was wondering "oh my god, what's happening?"
Boris Cherny: Yes, right, that was indeed a bug. Hope it's fixed now.
Host: That's great. Oh my god, Boris, I could chat with you for hours. I won't keep you any longer. Thank you so much for joining us. You're amazing. Where can people find you? How can listeners help you?
Boris Cherny: Just find me on Threads or Twitter; that's the most convenient way to contact me. Feel free to @ me anytime, whether it's feedback on bugs or submitting feature requests. Tell us what's missing from the product? What else can be done to improve? What do you truly want? I look forward to hearing everyone's voices.
Host: Boris, thank you so much for being on our show.
Boris Cherny: Thanks for having me.
Host: Goodbye, everyone. Thank you all for listening.
Author: Guage News