The inevitable trajectory of software is death.
Compiled by Wang Qilong | Produced by AI Tech Camp (ID: rgznai100)
The earliest MVP of Kubernetes was likely written in less than a week.
A few people, a handful of machines, and a very rough demo: it could distribute containers, perform basic load balancing, automatically restart crashed processes, and switch from v1 to v2 during upgrades. Looking back today, such an opening seems almost meager. It is hard to connect this with what later became Kubernetes, a project that rewrote the landscape of cloud-native computing and almost reshaped the entire vocabulary of the cloud.
But the most valuable part of this history to revisit is not how Kubernetes eventually became the de facto standard, but why it had to be built in the first place, and why it had to be open-sourced.
Reading the latest interview with Brendan Burns (co-founder of Kubernetes, and currently a Corporate Vice President at Microsoft Azure), the most fascinating part isn't him retelling the success story of Kubernetes. Instead, he pulls many events that people assume are already written in history back to the moment before the outcome was known:
- Kubernetes started as a demo cobbled together in a few days, but even then, Brendan Burns realized: something like this could not belong to just one cloud vendor forever.
- Google open-sourced Kubernetes not out of idealism, but out of the most realistic judgment: if you don't do it, someone else will, and you will lose the chance to define it.
- Kubernetes later unified the entire cloud-native ecosystem, but in Brendan's view, what is most worth looking back on is not its rise, but the fact that one day it will also exit the stage.
- Truly mature infrastructure often doesn't die suddenly; rather, like Linux, it stays alive but is discussed less and less as a standalone topic.
- In the AI era, the most likely scenario is not Kubernetes being defeated head-on, but being buried deeper, becoming the default, invisible foundation of the system.
Below is the full translation of this conversation.
Why Did Google Have to Build Kubernetes?
Q: If you had to explain to Google management back then why you should build Kubernetes for the entire industry, what would you have said?
Brendan Burns: Actually, the hardest part in the early days wasn't building the project; it was explaining it clearly.
In our own minds, it was clear, but making it convincing enough for others to agree was not easy. We approached it from several angles.
A very important background was the lesson of MapReduce. At that time, Hadoop and the so-called "Big Data Revolution" were very hot. Google wrote the original MapReduce white paper, but what was widely adopted by the industry was Hadoop, the open-source implementation. Google proposed the initial idea, but the ecosystem did not revolve around Google. Others read the paper, built a similar but not identical system, and in the end, what actually ran and was used at scale was not Google's version.
So, a core judgment at the time was: If Google just kept publishing white papers without turning the idea into a system that others could actually run, deploy, and use, it would be impossible to truly sit in the driver's seat of technological evolution.
Going further, why containers, and not continuing to focus on virtual machines? Our judgment was that as software becomes increasingly critical infrastructure, everyone will need an "autopilot"-style system to manage applications. Inside Google, we were already very clear: to run complex software stably, you can't rely solely on humans watching over it; you must have a system to handle deployment, scheduling, and recovery. This need is not optional; it is something that inevitably arises when software complexity reaches a certain stage.
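The "autopilot" Burns describes is, at its core, a reconciliation loop: a system that continuously compares desired state against actual state and acts to close the gap, handling restarts and scaling without a human watching. As a purely illustrative sketch of that idea (not Google's or Kubernetes' actual code; the names and in-memory "cluster" are invented for this example):

```python
# Minimal reconciliation-loop sketch. The "cluster" is a dict mapping a
# replica id to an alive flag; in a real system these would be containers
# running on machines.
DESIRED_REPLICAS = 3

def reconcile(actual: dict[int, bool]) -> dict[int, bool]:
    """One pass of the control loop: drive actual state toward desired state."""
    # Recovery: restart anything that has crashed.
    for rid, alive in actual.items():
        if not alive:
            print(f"restarting replica {rid}")
            actual[rid] = True
    # Scale up if there are too few replicas...
    while len(actual) < DESIRED_REPLICAS:
        rid = len(actual)
        print(f"starting replica {rid}")
        actual[rid] = True
    # ...and scale down if there are too many.
    while len(actual) > DESIRED_REPLICAS:
        rid, _ = actual.popitem()
        print(f"stopping replica {rid}")
    return actual

# Simulate a crash (replica 1 has died), then let one pass of the loop repair
# the cluster and bring it up to the desired replica count.
state = {0: True, 1: False}
state = reconcile(state)
print(sorted(state.items()))
```

The point of the pattern is that deployment, recovery, and scaling all collapse into the same operation, run repeatedly: compare, then converge.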
The most interesting part was the final question: Why must it be open source?
Many would say, "You've convinced me this is worth doing, so why not make it an exclusive capability for Google Cloud? Wouldn't that have more commercial value?"
But our judgment was exactly the opposite. Because if you make it something that only works on your own platform, you actually won't win. There are many users in this world not on your platform; they are on other clouds, or perhaps in on-premise data centers. If you block all these people out, they won't wait for you; they will just build a substitute themselves.
The reason open-source ecosystems often win is that they can run in more places. Why did Linux win? A very important point is that it can go anywhere. For Google Cloud, if you aren't already market number one, you definitely cannot make this kind of thing a closed weapon. You have to let everyone use it, and then ensure it works best on your platform. This way, even if others aren't on your platform, they will still listen to how you define the problem and how you frame the direction.
To put it more directly: This world will inevitably have an open-source version. The only question is whether you build that open-source version, or someone else does.
Q: So commercially, was Google trying to change the competitive landscape of cloud computing with Kubernetes?
Brendan Burns: Yes, that was a very important part of it.
If someone else had already made virtual machines the mainstream, and the story you could tell was just "We have something similar, just slightly better in some places," that would be very difficult. You would always be living in someone else's narrative, always chasing their taillights.
But if you define a new battlefield, the situation is completely different. You are no longer just chasing others; you start organizing the problem and the language. Even if others don't end up running directly on your platform, they will focus their attention on you first, because you were the one who spoke about and built this direction first.
This kind of "discourse power" is hard to quantify, but it is very important. It changes who defines the future and who leads the narrative in the entire market.
Of course, looking back today, Google Cloud did not instantly become number one because of Kubernetes, so you can't tell this as a simple commercial victory story. But Kubernetes did place Google in the most central position of discourse in the cloud-native era, and I think there is no controversy about that.
Q: What did that earliest demo actually look like?
Brendan Burns: It was actually very simple.
It was basically a minimal control interface that could deploy containers, distribute them across a few machines, and perform some very basic load balancing. For example, if you accessed the same entry point, it might tell you "I am replica 1," and if you refreshed, it might become "I am replica 3," showing you that it had indeed been replicated and distributed across different machines.
There was also very basic health checking: if you killed the process, it could bring it back up. Plus, a very rudimentary version-upgrade demo that could switch from v1 to v2. That was about it.
Looking back today, this is certainly primitive and far from a complete product. But it was enough to convince people. Because it wasn't a slide in a PowerPoint or a concept document; it was something that actually ran.
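The behavior Burns describes, where refreshing the same entry point lands on a different replica each time, is simple round-robin load balancing. A toy sketch of that part of the demo (hypothetical code written for this article, not the original demo):

```python
from itertools import cycle

# Three "replicas" of the same service, nominally on different machines.
replicas = [f"I am replica {i}" for i in (1, 2, 3)]
backend = cycle(replicas)  # round-robin iterator over the replicas

def handle_request() -> str:
    """The single entry point forwards each request to the next replica."""
    return next(backend)

# Two "refreshes" of the same entry point hit different replicas,
# demonstrating that the service really was replicated and distributed.
print(handle_request())
print(handle_request())
```

Seeing different replica identities on successive requests was enough to show, in a running system rather than a slide, that the containers had genuinely been spread across machines.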
A Demo Made in Less Than a Week That Later Rewrote Cloud Computing
Q: How long did it take to build that initial MVP?
Brendan Burns: Less than a week, maybe four or five days.
Of course, that was truly a version where every corner that could be cut was cut. We took every shortcut possible. Many underlying capabilities weren't built from scratch; instead, we leveraged existing open-source projects as much as possible, took what we could get, and used glue code to stick them together, just to give the system its basic shape.
One thing I've always been relatively good at is figuring out how to integrate existing open-source components into a new system. This ability was particularly critical in the earliest stages of Kubernetes. Because the value of an early prototype is never elegance, but rather getting others to see as quickly as possible that the idea is viable and runnable.
Q: Didn't you have your own formal job at the time? How did you find the time for a project like this?
Brendan Burns: I wouldn't say I completely abandoned my original job, but on such a short time scale, it is possible to carve out space.
I've always had a somewhat "dangerous" piece of advice: I believe most people can "hide" about 10% of their energy from their work without management really noticing.
I don't mean people should be lazy. What I mean is that in most organizations, you can actually retain a little bit of freedom to do things no one explicitly asked you to do but that you feel are important. Many truly influential ideas grow out of this kind of space.
Of course, there is a prerequisite behind this: You have to accept failure.
When you do this kind of side project, it often won't succeed. You might invest time, and nothing happens in the end; you might also miss out on work that is easier to see and evaluate because you bet your energy on something uncertain. You have to accept this risk, accept the logic of "try five times, succeed once, but that one success might far outweigh the previous four."
Some people are suited for this, some are not; that's all normal.
There is also a more direct and realistic fact: many people say they don't have time for these extra projects, but they probably still spend ten-plus hours a week playing games, scrolling YouTube, or watching Netflix. So in the end, it's not a question of whether you have time, but whether you are willing to give up something for what you truly believe in.
I'm not the type to advocate staying up late working every day, but if you really feel something is important, it sometimes means that for a period, you will watch fewer videos and do less other entertainment.
Q: So your method isn't to write documents to seek permission first, but to build the thing first?
Brendan Burns: Yes, pretty much.
Many things are hard to explain clearly with documents. You can write a strategy memo or make a PowerPoint, but the truly effective way is often to produce something that runs first. Once it starts running, the nature of the discussion changes.
If you haven't done anything yet, the question management faces is: Should we bet time and resources on this? But if you've already built it, even if it's rough, the question becomes: Is this thing worth pushing forward? Is it worth releasing?
These two discussions are completely different things.
The former discusses resource allocation; the latter discusses whether the idea itself holds water. For many new projects, moving from the first question to the second is itself a decisive step forward.
Of course, there is still risk of failure. You might spend time building it, and in the end, no one buys in. But if you are going to do this, you have to swallow that fact too. You can't accept only the possibility of success without accepting the cost of failure.
Brendan Burns' Engineering Methodology Is Actually More Valuable Than Kubernetes Itself
Q: From that prototype to something others could actually try out, how long was the gap?
Brendan Burns: About half a year.
From a very hacky prototype to a system we felt others could really try out, there was still a lot of foundational work to fill in. Many details that seemed insignificant had to be solidified one by one.
But that stage was also very precious. Because you were essentially rebuilding a system in a "clean room." Many people who joined later had built similar systems elsewhere and already had a bunch of ideas in their heads about "if I were to do this again, how would I design it?"
But in the real world, few engineers really get this kind of opportunity.
More often, you take over a system that is already in production, has a large user base, and carries legacy baggage. Every day you are fixing bugs, maintaining compatibility with historical decisions, and patching old designs. Starting from scratch and rebuilding a system with relatively little historical debt is a very rare moment in an engineering career. Kubernetes happened to have such a window in its early days.
Q: Today, many engineers hearing this story might think: This is Google, after all; where would I have such space in reality?
Brendan Burns: Organizational environments will certainly differ, but there is also an element of personal choice here.
Many people interpret "no space" as an absolute condition, but reality is more like: the space is small, the risk is high, and no one will guarantee that what you do is worth it. You have to judge for yourself, bet on yourself, and bear the consequences if it doesn't work out.
And from a career development perspective, this ability becomes even more important as you go up. Because at higher levels, rarely will anyone define, package, and hand you a project big enough to carry your next leap. More often, the truly important projects are ones you spot yourself, distill yourself, and push forward yourself.
From this perspective, so-called side projects are not just "hobbies"; they are actually training a more proactive engineering perspective and professional capability.
K8s Will Also Die, Just Not in the Way You Might Think
Q: Kubernetes has become the de facto standard today; will it also have its own upper limit?
Brendan Burns: Of course it will, but the key is how you understand this "limit."
Many components of Kubernetes can essentially scale horizontally. If requests increase, you can add API Servers; if scheduling pressure grows, you can add schedulers. Many parts can be solved by scaling out.
What is truly harder is the underlying storage layer. Because in the Kubernetes architecture, a large amount of state eventually returns there, and this is often the layer that is not so easy to scale infinitely. Today everyone is familiar with etcd. If the scale continues to push up by an order of magnitude in the future, we have to answer again: Can it still withstand it? Or will we need a solution that retains the same core characteristics but has stronger scaling capabilities to replace it?
I don't think there is a naturally hard-coded ceiling in Kubernetes' design that prevents it from scaling further. But once a system crosses an order of magnitude, the problems migrate. The bottleneck that was most obvious before may suddenly become unimportant, and new bottlenecks will float up elsewhere. You were previously constrained by CPU, later it might become network; previously constrained by memory, later it might become storage. With every order of magnitude crossed, the real problem moves.
Q: You once said the fate of software is death. Will Kubernetes die too?
Brendan Burns: I completely agree with that statement.
However, a more complete way to put it is: You'd better not fall in love with your software, because the inevitable trajectory of software is death. What I really want to express is that you shouldn't be reluctant to let go just because "I wrote this." When it is time to be replaced, you should be willing to throw it away.
Looking at industry history, this is almost a universal law. Even within Kubernetes, much of the code I wrote back then has already been rewritten many times.
As for how Kubernetes will "die," it won't necessarily be suddenly replaced entirely by another new system. Sometimes, a system doesn't really disappear; instead, it becomes lower and lower level, more and more invisible. Just like Linux is still here today, but most people don't discuss Linux every day; processor architectures are still here, but not every developer is staring at them all the time.
Kubernetes might also move towards this state: it is still there, just covered by upper-layer systems, wrapped by new interaction methods, and finally, people perceive it less and less directly.
Especially in the AI era, where much attention has shifted to models, inference frameworks, and application interfaces, Kubernetes may slowly recede to the bottom layer, becoming that capability that exists by default but is no longer the protagonist.
Of course, if we stretch time long enough, say 100 years, I would be very surprised if Kubernetes still exists intact. The tech world changes too fast; many things that look solid today may start to loosen in a few years. The biggest problem in the future will never be that you didn't predict change, but that change comes faster than you imagined, or not in the direction you thought.
The Key Is to "Take Good Notes"
Q: How do you view getting a PhD? Many people struggle with whether it's worth it.
Brendan Burns: This is one of the questions I get asked most often.
If we look only at career ROI, I can tell a very realistic story. I later met an undergraduate classmate in the same company. We graduated in the same year; he went into startups and industry, while I went to get a PhD and later returned to industry. In the end, our levels in the company were the same.
So if you ask me if getting a PhD will make you significantly ahead in career development, my answer is roughly: not necessarily. Often, the difference isn't that big.
But if you ask the reverse, whether the experience was worth it, I would say: for me, yes, because I had a lot of fun and learned many very useful skills.
For example, writing and expression. The training from my PhD and my advisor taught me how to write and explain complex ideas clearly. Later, during the early days of Kubernetes, we spent a long time promoting the project, persuading people, and fighting for internal support; that ability turned out to be very important.
Additionally, I later worked as a professor for a few years. Teaching classes to people who knew nothing about computers forces you to think: How exactly should I explain a complex system so others can understand? It was the same in the early days of Kubernetes; many people would ask: What is a container? What is orchestration? Why do I need it? These are not problems that get solved automatically just because you wrote the code; you must be able to teach, you must be able to explain.
So if you ask me, I would say: A PhD might not make you leap ahead in title faster, but it might give you some long-term, very valuable capabilities.
Q: What is another question young engineers ask you most often?
Brendan Burns: Another very common question is: What exactly should I learn?
For example, AI is very hot right now, but some people prefer systems, so they come to ask me: Should I give up systems to learn AI? My answer is usually that I don't care so much about what specific thing you learn; I care more about whether you are continuously learning.
If you have no passion for AI at all and force yourself to learn it just because it's hot, you probably won't learn it well. Conversely, if you really like systems, you will invest more time and attention into it, and this continuous investment is more likely to translate into real capability. And the industry will always need excellent systems engineers.
I can feel that many young people are particularly afraid of "choosing the wrong direction." But honestly, I have never had a rigorous life plan myself. I have always just chased whatever seemed interesting and valuable.
Of course, this approach also has risks, but I also want to remind everyone: many experiences that you later feel were detours may end up becoming the most important nutrients. As long as you keep learning, you usually won't do too badly.
Q: Is there any book that had a particularly big impact on your career?
Brendan Burns: If we are talking about the engineer stage, a book that had a huge impact on me is "Design Patterns: Elements of Reusable Object-Oriented Software," commonly known as the GoF "Design Patterns."
Later, as my role changed and I started leading larger teams, I would recommend two other books: one is "Leadership on the Line," and the other is "The Five Dysfunctions of a Team."
The former is more about leadership, the latter more about team collaboration. Roughly speaking: if you are currently mainly an engineer, read "Design Patterns" first; if you have started doing management or organizational leadership, the latter two will be more helpful.
Q: If you could go back to when you just graduated college, what advice would you give your younger self?
Brendan Burns: Take better notes.
I always feel that the entire journey of Kubernetes is completely worth writing into a book, or even a great business case study. But the problem is, I didn't leave sufficiently complete records back then.
The code is still there, but what easily disappears is often not the code, but the discussions at critical moments, the tug-of-war between partners, the internal organizational push, and the judgments and games between people. I still remember some of these now, but I can't recall them all.
If I could have left more complete notes back then, looking back today, it would definitely be very valuable.
Original Video Link: https://youtu.be/FKijpCEH9D8