Compiled by Yuqi, Tina
Every few decades, software engineering is declared "dead" or "about to be completely replaced by automation." We've heard similar claims many times before. But what if this moment is not the end, but the beginning of another "golden age" of software engineering, just as has happened repeatedly throughout history?
Recently, Grady Booch, one of the founding figures of software engineering, appeared on a podcast with host Gergely Orosz to share his detailed account of the three "golden ages" of computing since the 1940s, and of how each era emerged under its own technical constraints.
This conversation has drawn widespread attention for a more immediate reason as well. On the program, Booch directly addressed the recent, controversial claim by Anthropic CEO Dario Amodei that "software engineering will be automated within 12 months." His conclusion was clear: to use a technical term, the judgment is fundamentally wrong.
In Booch's view, such statements conflate "writing code" with "software engineering" itself, and ignore a fact that has repeatedly appeared throughout history: tools change time and again, but the truly difficult problems that software engineering needs to solve have never disappeared. On the contrary, every "automation panic" ultimately corresponds to the arrival of a higher level of abstraction, and the opening of a new golden age.
This article is based on the podcast, which InfoQ has edited and condensed.
Core viewpoints are as follows:
Many achievements of modern computing were actually woven on a "loom of sorrow."
Software is an extremely dynamic, fluid, and highly malleable field. Once we master the methods for building certain types of systems and form reusable patterns, we quickly discover that they have new economic applications.
Many so-called "career crisis narratives" actually stem from a narrow understanding of the industry. What's really happening is: software is expanding to a broader population. Non-professional developers will write more software, and this is an extremely positive change.
When you stand on the threshold leading to something new, you can choose to gaze into the abyss, fearing to fall; or you can choose to take the leap and spread your wings. Now is the time to fly.
The First Golden Age: A Software World Supported by Algorithmic Abstraction
Gergely: You've mentioned multiple times that the overall evolution of software engineering is essentially a process of continuously elevating abstraction levels. Can you outline several key turning points to help us understand this trajectory, and further explain the role AI plays in it?
Grady: The term "software engineering" itself actually appeared quite late. Margaret Hamilton is generally considered the first to explicitly use this term. She had just left the "Manned Orbiting Laboratory" project and joined the Apollo program. In a team composed almost entirely of hardware and structural engineers who were all men, she was one of the few software developers. She wanted a word to distinguish her work, so she began using the title "software engineer."
Later, others adopted this usage as well, particularly the NATO software engineering conference that people often mention. In fact, that conference was held several years after Margaret's work, and the naming of "software engineering" itself was somewhat controversial, just as "artificial intelligence" sparked debate at its first academic conference. Nevertheless, over time, the term gradually became accepted and established.
Its core meaning lies in the fact that Margaret and her contemporaries realized this was an activity with engineering properties. The essence of engineering is to build the most reasonable solution possible among various static and dynamic constraints, rather than pursuing perfection. This is consistent with structural engineering, electrical engineering, or chemical engineering.
In the software field, the medium we face is extremely flexible, malleable, and elastic, but constraints are equally real. We are still subject to physical laws, such as the impossibility of information transmission exceeding the speed of light; the scale of the systems we build is limited by hardware capabilities; at the algorithmic level, there are also fundamental boundaries. We may theoretically know how to solve certain problems, such as the Viterbi algorithm crucial for cellular communications, but for a long time, we didn't know how to effectively implement it. The same applies to the Fast Fourier Transform: the theory existed long ago, but practical applications couldn't advance until it became computable.
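The gap Booch describes between "theoretically solvable" and "effectively computable" can be made concrete with the Fourier transform he mentions. The sketch below (my own illustration, not from the conversation) contrasts the direct O(n²) evaluation of the discrete Fourier transform with the O(n log n) Cooley-Tukey FFT; it was this algorithmic leap, not new theory, that made signal processing practical at scale.

```python
import cmath

def dft_naive(x):
    """Direct evaluation of the discrete Fourier transform: O(n^2) operations."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def fft(x):
    """Cooley-Tukey radix-2 FFT: O(n log n); len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # transform of even-indexed samples
    odd = fft(x[1::2])    # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

# Both compute the same transform; only the operation count differs.
signal = [1.0, 2.0, 3.0, 4.0]
assert all(abs(u - v) < 1e-9 for u, v in zip(dft_naive(signal), fft(signal)))
```

For a four-sample signal the difference is invisible; at the millions of samples typical of communications workloads, the n² versus n log n gap is the difference between unusable and routine.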
Besides scientific and computational limitations, human factors are equally important. Can I gather enough people to complete the work? Can I effectively organize the team? Ideally, the optimal team size for software development is zero, followed by one person, but this is obviously not realistic. Because these systems have long-term and profound impacts at both economic and social levels, we cannot rely on individuals, but must make the software itself capable of transcending individual lifecycles.
As software gradually penetrates into the crevices of social structures, legal issues arise, such as digital rights management. But more fundamental are ethical issues: we may know how to build certain systems, but should we build them? Does this align with human values and dignity?
It is these scientific, technological, human, and ethical static and dynamic forces that together act upon software engineers. What engineers do is to strike a balance among multiple constraints and build systems in an incredibly wonderful medium.
The development of software engineering can be divided into different eras. In the earliest days, the concept of "software" barely existed. People simply operated machines directly, with no clear boundary between hardware and software. Take ENIAC as an example: does plugging and unplugging wires on a patchboard count as programming? It could count, but that's not what we understand as software today.
It wasn't until the late 1940s and early 1950s, with the evolution of computer forms, that people gradually distinguished software as an independent entity. At that time, software was almost entirely customized and tightly bound to specific machines. But the cost of software itself began to become significant. People wanted hardware to continuously upgrade, but didn't want to abandon existing software investments, which led to the critical issue of "hardware-software decoupling." Notably, the term "digitization" wasn't proposed until the late 1940s, and the word "software" didn't appear until the 1950s. The recognition of software as an independent entity has actually happened within my lifetime, which itself is astonishing.
As software gradually decoupled from hardware, people like Grace Hopper began to realize that software could be not just a technical activity, but also an industry and institution. The earliest software mainly existed in assembly language form, highly coupled with specific machines. By the 1960s, IBM realized it could build a complete computer architecture with a unified instruction set, thereby preserving software assets while upgrading hardware. This decision was both an engineering choice and a commercial and economic one. Once this approach was established, software demand rapidly exploded, and software engineering thus entered its first golden age.
During this period, software became an independent industry, with its core challenge being "complexity." Although by today's standards the systems of that era were relatively simple, at that time, they were already extremely difficult to understand and build. Since software was still very close to machines, the primary form of abstraction was algorithmic abstraction. Computers were initially used for mathematical calculations, so languages like Fortran were essentially designed to implement formula translation.
Gergely: Looking at the timeline, which period does this generation roughly correspond to?
Grady: Approximately from the late 1940s to the late 1970s. Representative figures of this period include Ed Yourdon, Tom DeMarco, and Larry Constantine. Entity-relationship models and other ideas also emerged during this time and influenced the data field.
This was an extremely active period. Flowcharts were invented to assist in system design; software development formed a division of labor, with system analysts, programmers, keypunch operators, and computer operators. This division was mainly driven by economic factors. At that time, machine costs far exceeded human costs, so everything revolved around how to maximize the use of scarce and expensive computing resources.
The main tasks of this stage focused on mathematical calculations and automation of existing business processes. For example, accounting and payroll work originally required a lot of manual labor. Through software, not only could processes be accelerated, but accuracy could also be improved. Therefore, the vast majority of software at that time consisted of commercial, numerical, and computation-intensive systems.
Outside the public eye, defense, aviation, weather, and medical fields were also driving key innovations. Truly cutting-edge exploration often occurred in these edge areas, especially in defense systems. Against the Cold War backdrop, distributed and real-time systems became essential needs. Projects like the Whirlwind computer and SAGE (Semi-Automatic Ground Environment) system emerged one after another. The SAGE system was so massive that it was estimated to have consumed 20% to 30% of the software engineering resources in the United States at the time. This was an unprecedented engineering scale, and it showed how the margins of the first golden age incubated the innovations that would shape what came next.
Gergely: The military was the largest funder of software research and industry advancement at that time, right? Because they had these practical needs.
Grady: Indeed. Because there were clear and real threats at the time, the military had to continuously invest. Therefore, many key innovations occurred within the defense system. In a computer history documentary I'm working on, I used a phrase: there are two most important driving forces in computer development history: one is commerce, and we've discussed its economic logic; the other is war.
Many achievements of modern computing were actually woven on a "loom of sorrow." Many technologies we take for granted today, such as the internet and miniaturization technology, almost all originated from government funding, especially in the context of the Cold War. So, in a sense, we indeed "benefited from" the Cold War.
Why Did the First Golden Age Reach Its Limit?
Gergely: So does this stage still belong to the first generation of software engineering's golden age? Or has it already crossed over?
Grady: These things still occurred within the first golden age. The software engineering of that time had a relatively clear "center of gravity," but in the margins, there was also a lot of exploration pushing the industry forward. In the first golden age of software engineering, the main applications of software focused on mathematical calculations and business systems. The core method of system decomposition was algorithmic abstraction. We understood the world more from the perspective of processes and functions, rather than from data or objects. But in edge areas, some application scenarios were constantly breaking through this paradigm, such as the need for distributed systems, multi-machine collaboration, real-time systems, and human-computer interaction interfaces.
The graphical user interfaces we use today can be traced back to the Whirlwind and SAGE systems. The earliest graphical interfaces at that time were based on cathode ray tubes (CRT). These explorations were not at the center of software development at the time, but later had profound effects. An important insight here is: software is an extremely dynamic, fluid, and highly malleable field. Once we master the methods for building certain types of systems and form reusable patterns, we quickly discover that they have new economic applications. This is precisely the characteristic of the first golden age of software engineering.
However, by the late 1970s and early 1980s, this system began to show cracks. The NATO software engineering conference was the first to state the problem publicly: the demand for software was almost endless, but the industry could not deliver with sufficient quality or speed. This was the background of what later became known as the "software crisis."
Gergely: What exactly did this "software crisis" refer to? What were people worried about at the time?
Grady: Software had been proven to have enormous value and clear economic incentives, but the entire industry couldn't produce software at sufficient scale with sufficient quality and speed.
Gergely: In other words, software was expensive, slow, and inconsistent in quality?
Grady: Add one more thing: demand was extremely strong. People kept saying, "We need more software." These four factors combined to constitute the crisis of that time. This is different from the privacy monitoring and system crash issues we worry about today. The nature of the "crisis" faced by each golden age changes.
Gergely: Looking back at that era from today, it's really hard to imagine the situation at the time.
Grady: At the time, this crisis was real and urgent, and it was also an exciting era. Software, as a highly flexible and scalable medium, was limited almost only by our imagination.
Combined with the breakthrough in miniaturization technology: the emergence of integrated circuits, the birth of Fairchild, and the formation of Silicon Valley—all of this originated from transistor technology. Fairchild's largest customer was actually the U.S. Air Force, for missile projects. Most of the transistors produced in early Silicon Valley were used for Cold War-related programs. But it was precisely these demands that established the economic foundation for mass production, which subsequently gave rise to integrated circuits, personal computers, and a series of subsequent developments.
By the late 1970s, the software crisis was already very obvious. Take the U.S. government as an example: they realized they were trapped in a "Tower of Babel" problem, with as many as fourteen thousand programming languages in use across military systems. Even by today's standards that number is astonishing, all the more so given that software systems were far smaller then than now. Languages like JOVIAL and COBOL were widely used, while languages like ALGOL advanced formal methods. Under the influence of Dijkstra, Hoare, and others, people began introducing mathematical rigor into programming language research, and formal language theory emerged.
Against this backdrop, the U.S. government launched the Ada project, initiated by a joint-services working group, with the goal of reducing the number of languages by replacing them with a unified one. It brought together a large body of research results, such as abstract data types, information hiding, separation of concerns, and Knuth's literate programming ideas. Ada was an attempt to integrate these concepts at a macro level; only the U.S. military at the time had the resources to complete an effort of this scale.
At the same time, Bell Labs gave birth to the C language and Unix, which were also extremely critical achievements. And at the margins of academia and industry, a researcher named Bjarne Stroustrup began thinking: could the object-oriented concepts from Simula be introduced into the C language to solve its inherent deficiencies? It's worth mentioning that Simula was the earliest object-oriented language. All of this reflected a deeper change: people began to realize that algorithmic abstraction alone was insufficient to handle complexity; software needed a new abstraction method: object abstraction.
Interestingly, this divergence between "viewing the world through processes" and "viewing the world through objects" appeared as early as Plato's time. In his dialogues, Plato explored whether people should understand the world through processes or objects. The concept of "atoms" itself originated from this intellectual tradition. In other words, the choice of abstraction is not a new problem; it has just been reapplied to the software field.
Additionally, functional programming ideas gradually took shape during this period. After John Backus, the inventor of Fortran, completed the language, he turned to exploring programming paradigms centered on stateless mathematical functions. I interviewed him a few months before his death and asked why functional programming never became mainstream. His answer: functional programming makes difficult things easy, but makes simple things extremely difficult. This also explains why it has always occupied an important position without ever becoming the dominant paradigm.
Thus, we arrive at the end of the first golden age of software engineering and gradually move toward the second generation. The forces driving this transformation include continuously growing system complexity, difficulties in large-scale development, and further recognition of the value of distributed systems in the defense field. Meanwhile, the maturation of miniaturization technology also gave rise to personal computers.
The Second Golden Age: From "Processes" to "Objects," Software Becomes Complex
Gergely: This was mainly thanks to breakthroughs in transistor and electronic technology, right?
Grady: Exactly. This was an extremely dynamic era. Hobbyists began to be able to assemble computers themselves. Before this, participation at this scale was almost unimaginable.
Gergely: Is this the first time in computing history that enthusiasts could truly participate on a large scale?
Grady: Yes. Economic conditions improved, combined with the military-promoted production of transistors and integrated circuits, making it possible for ordinary people to obtain these components. In Silicon Valley, people could directly purchase and experiment with these technologies.
"Play" has always played an important role in software history. The late 1970s to early 1980s was a highly experimental era. A book called "What the Dormouse Said" points out that the rise of personal computers was closely related to the hippie counterculture. This was a spirit of "decentralization," closely connected to figures and communities like Stewart Brand and the Merry Pranksters, and it also gave rise to early online communities like The WELL, one of the earliest electronic bulletin board systems. Overall, the late 1970s to early 1980s was a stage full of possibilities, with many new technological paths and ideas sprouting simultaneously.
At that time, we gradually realized that software engineering was undergoing an important transformation: people began to understand the world not just through "processes" but through "objects" and "classes." At the same time, the demand for distributed systems and the real pressure to build increasingly complex systems together formed a "perfect storm" that drove the second golden age of software engineering.
Frankly speaking, I entered this field at exactly that stage, just at the right time and place. I was working at Vandenberg Air Force Base, participating in projects related to missile systems and aerospace systems, including a proposed military space plane program. It was a very interesting place because there would be one or two launches almost every week. You would run out and watch the rockets lift off, exclaiming "this is incredible."
By the late 1980s, the world was ready for a new concept of software, which was object-oriented programming and object-oriented design. The biggest difference compared to the first generation of software engineering was the change in abstraction level. We no longer just viewed data as a raw data lake operated on by algorithms, but integrated data and behavior into the same concept, forming objects. This approach greatly expanded the complexity of the systems we could build and laid the foundation for much important software.
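The shift Booch describes, from data as a passive structure that algorithms operate on to data and behavior united in one concept, can be sketched in a few lines. This is my own minimal illustration, not an example from the podcast; the names are hypothetical.

```python
# Procedural view (first golden age): data is passive; algorithms act on it.
def rect_area(rect):
    # rect is a bare (width, height) tuple; meaning lives in the caller's head
    return rect[0] * rect[1]

# Object view (second golden age): data and behavior form a single concept.
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def scaled(self, factor):
        """Behavior travels with the data it belongs to."""
        return Rectangle(self.width * factor, self.height * factor)

r = Rectangle(3, 4)
assert rect_area((3, 4)) == r.area() == 12
assert r.scaled(2).area() == 48
```

In the procedural version nothing prevents passing `(height, width)` or an unrelated tuple; in the object version the invariants, the operations, and the data are declared together, which is what made much larger systems tractable.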
If you go to the Computer History Museum and look at the source code of MacWrite and MacPaint, you'll find they were written in Object Pascal and are among the most elegantly structured software I've ever seen. Their design is rigorous and clearly organized, and many design decisions can still be seen continued in modern systems like Photoshop. This itself also illustrates an interesting fact about software lifecycles.
Understanding software from the object perspective proved to be an extremely effective method because it provided a new path for managing software complexity. Just like the first golden age, the second golden age was equally vibrant. In the 1980s and 1990s, a group of important figures emerged, such as the "Three Amigos" (myself, Ivar Jacobson, and Jim Rumbaugh), as well as Peter Coad, Larry Constantine, Ed Yourdon, and others, who together drove the shift in thinking from "processes" to "objects." Of course, we also made mistakes, such as overemphasizing inheritance at one point and treating it as a panacea, which later proved not entirely correct. But the basic idea of understanding the world through classes and objects was ultimately preserved.
At the same time, this was also an economic turning point. As systems continued to grow in scale, platforms began to emerge. In fact, there were precedents for this in the first golden age: people repeatedly built the same functionality, so they began packaging and reusing common algorithms and procedures, such as disk operations, terminal output, and sorting algorithms. These practices ultimately gave birth to the concept of software sharing. In the commercial systems field, IBM's SHARE user organization was a typical example—it was a community spontaneously organized by customers to share software with each other.
Gergely: Is this a relatively primitive "packaging" method? Like collecting sorting algorithms or some common functions for distribution?
Grady: To clarify, this wasn't done officially by IBM; it was entirely driven by the user community. This was actually one of the earliest forms of open source software. The hardware and software economic structure was also different at the time. Software was usually provided free with hardware until the late 1960s, when IBM realized that software itself could be a commodity and began decoupling hardware and software and charging separately.
In the earlier stage, the community atmosphere was very open. People would say, "I wrote this tool, you go ahead and use it." This was the embryonic form of the open source spirit. Similar things happened again in the second golden age, except at a higher level of abstraction. People started sharing libraries, components, and tools, such as programs for driving CRT displays. These things themselves didn't constitute competitive advantages, but they greatly enhanced the ability to build complex systems.
Against this backdrop, platforms gradually formed. As libraries and components continued to grow in scale and distributed systems rose, we began discussing "service-oriented architecture" (SOA). HTML and HTTP made information exchange possible, and people began envisioning sharing images, messages, and even services through networks. Protocols like SOAP were thus born, becoming the precursors to the platform era. These changes laid the foundation for the platform economy during the second golden age.
Gergely: What does the rise of "platforms" specifically refer to? How do you define a platform?
Grady: AWS and Salesforce are typical examples. These platforms are like "economic castles" surrounded by moats. Platform providers allow you to pay to cross the moat and use their capabilities, on the premise that if you built it yourself from scratch, the cost would be much higher. Therefore, in the second golden age, we saw the rise of SaaS-type companies because the complexity and cost of certain software were high enough to support this business model. Entering the late 1990s and early 2000s, this was an equally vibrant stage. The internet developed rapidly, and software began truly penetrating into every crevice of society, becoming part of civilization's infrastructure. Email is a typical example.
Gergely: When did you get your first email address?
Grady: 1987, when it was still ARPANET. As software became daily infrastructure, many problems that the first golden age worried about gradually "disappeared"—not ignored, but internalized into the systems. Excellent technology "evaporates" and becomes part of the air. The second golden age is the foundation of today's software world.
Around 2000, the internet bubble burst, and the economic logic couldn't support the previous expansion. There was also the Y2K problem. In retrospect, it seems "nothing happened," but that was because countless engineers put in enormous effort to avoid disaster. This is a typical example of "the best technology is invisible."
Gergely: I still remember the tense atmosphere before Y2K. There were even movies predicting the end of the world. As a result, nothing happened, and many people stopped believing in such warnings.
The Third Golden Age: Not the AI Era, But the Maturation of Software Engineering
Grady: At this point, I'd like to add a historical thread that wasn't expanded upon earlier: AI.
AI's first golden age appeared in the 1940s and 1950s, centered on symbolism, with representative figures including Herbert Simon, Newell, and Minsky. Neural networks were also attempted at the time, such as artificial neurons implemented with vacuum tubes, but due to computational power and theoretical limitations, they ultimately failed, and AI entered winter. The second golden age appeared in the 1980s, represented by rule systems and expert systems. Despite hardware and architecture improvements, they still couldn't scale and eventually stalled again.
Entering the 21st century, I believe we are already in the third golden age of software engineering. Its hallmark is another leap in abstraction level: from individual programs to platform-level libraries, frameworks, and services. We no longer implement messaging systems or data management ourselves, but directly use existing platform capabilities. The emergence of AI programming assistants is a response to this growing complexity. They are not accidental, but a natural result of the evolutionary logic of the third golden age.
Current problems are different from the previous two generations: software scale is unprecedentedly large, security, supply chain attacks, and system trust have become core issues; meanwhile, the importance of software giants gives them "systemic risk"; additionally, technological ethics issues have been pushed to the forefront: just because we can do something, does it mean we should?
Gergely: Many people rarely look back to think about how all this started and how "young" the discipline of software engineering itself is. Even measured from the 1970s and 1980s, it barely counts as two generations of history.
But now I generally sense a mood in the industry, especially in this recent stage, that many software engineers are experiencing a distinct "existential anxiety." This anxiety has clearly accelerated since the winter break: before it, AI and large language models were mainly auto-completion tools that could occasionally generate some code; since then, new generations of models can already write code of quite high quality, to the point where I've started to truly trust them.
Historically, writing code has always been a difficult thing. We often need years of training to become proficient, and even longer to become truly excellent. So now many people are falling into a deep unease: how did machines suddenly become able to write such good code in just a few months? What happens next? Coding has been highly bound to software engineering, but now it seems this is no longer the case. From your historical perspective, how do you understand what's happening now?
Grady: This isn't the first time software engineers have experienced such an existential crisis. Similar anxieties appeared in both the first and second generations of software engineering. So my attitude has always been: history repeats itself, and this round will also pass. I often tell worried people not to panic and to focus on the fundamentals, because those capabilities won't disappear.
I once met Grace Hopper, who realized in the 1950s that software could be abstracted from hardware. This idea was extremely subversive at the time and even triggered strong opposition. Many early computer engineers believed that if software wasn't tightly attached to hardware, it would be impossible to write efficient code. They worried this approach would destroy the entire industry. As it turned out, they were wrong.
Similar debates also appeared when Fortran emerged. Some believed that hand-written assembly code would definitely be more efficient than code generated by compilers, but as abstraction levels increased, this view was overturned time and again. Every leap in abstraction makes some existing skills lose their central position, and these changes are often driven by engineers themselves.
In the past, such shocks didn't trigger such intense emotions, partly because the practitioner base was small, perhaps only a few thousand; today, this group is in the millions. People naturally ask: What about me?
I'm also often asked similar questions by young engineers: "Did I choose the wrong direction?" My answer has always been: now is exactly a good time to enter the software industry. The reason is simple: we're experiencing a new leap in abstraction level, just like from machine language to assembly language, from assembly to high-level languages, from high-level languages to libraries and platforms.
This change is a great liberation for me. I no longer need to focus on numerous tedious details, but the basic principles of software engineering still exist. As long as you're building software that needs long-term maintenance and needs to stand the test of time, these capabilities are irreplaceable.
Of course, if you're just writing throwaway code, you can use any tool. I see many people using AI agents to complete such "use-and-discard" automation tasks, which is completely fine and even very valuable.
This reminds me of the "hobbyist culture" of the early personal computer era. Back then, many people who had nothing to do with software started writing their own programs, generating many new ideas. The situation today is extremely similar. You can use AI to automate some things that weren't economically worth doing in the past. Even if these results may not exist long-term, they still create value.
Gergely: Just like when ordinary people could afford personal computers back then, now many people outside the industry are starting to write software. I just talked to my neighbor upstairs—she's an accountant but is already using ChatGPT to help her team write code and improve process efficiency.
Grady: In the early personal computer era, artists and gamers rushed into this new medium, creating unprecedented vitality. The same is true today. Many so-called "career crisis narratives" actually stem from a narrow understanding of the industry. What's really happening is: software is expanding to a broader population. Non-professional developers will write more software, and I think this is an extremely positive change. Just like the personal computer counterculture back then, history is repeating itself.
Dario's Judgment Is Fundamentally Wrong: Tools Change, But Software Engineering's Problems Don't
Gergely: Anthropic CEO Dario Amodei once predicted that about 90% of code would be generated by AI. At the time, it sounded exaggerated, but later facts proved he wasn't entirely wrong. Recently he said something even more unsettling: "Software engineering will be automated within 12 months." Considering that coding is only part of software engineering, this statement seems even more radical. What do you think?
Grady: I have a thing or two to say.
First, I use Claude. I use Anthropic's products; it's my preferred system. I use it to solve JavaScript, Swift, PHP (yes, even PHP), and Python problems. I do use it, and it helps me a lot—mainly because I want to use certain libraries: Google search is terrible and the documentation is terrible too, so I use these agents to accelerate my understanding of them.
But don't forget: I have at least a year or two of experience in these areas—okay, to be precise, decades—I understand the basic principles. That's why I say: fundamentals won't disappear. This holds true in any engineering discipline: basic principles don't disappear; what changes are the tools we use.
So, with respect to Dario: I respect how he expresses himself, but I also have to recognize that he and I have different perspectives. He's leading a company that needs to make money, and he needs to speak to stakeholders, so he'll say some "sensational" things. I remember he said this at Davos?
Gergely: Yes.
Grady: If I were to evaluate his conclusion using a "technical term," it would be: it's profoundly wrong. There are several reasons.
First, I accept part of his point: it will accelerate certain things. But will it eliminate software engineering? No. I think he has a fundamental misunderstanding of "what software engineering is."
Returning to what I said at the beginning: software engineers are engineers responsible for balancing various forces. Code is just one of our mechanisms, but not the only factor driving us. And the things he or his colleagues discuss don't touch any of those decision problems that software engineers must face—none of those are within the scope of what's currently called "automation."
His work mainly focuses on the lowest level of automation—I would liken it to what compilers did back then. So I say: this is a new rise in abstraction level. Developers, don't be afraid: tools are changing, but the problems haven't disappeared.
My second reason for refuting him is: systems like Cursor are mostly trained on a set of problem types we've seen repeatedly. That's fine. Just like in the first generation, the first golden age, we also faced a relatively fixed set of problems, so we built libraries around them. The same is true now.
If I want to put a UI on top of CRUD, or do something Web-oriented, of course I can do it. More importantly, this capability is starting to trickle down: many things that previously required professional engineering skill can now be delivered to far more people through higher-level abstractions. Most people won't start a business because of this, and only a very few will turn it into a product. But the key is that higher-level abstractions let them do things they couldn't do before.
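The "UI on top of CRUD" shape is exactly the kind of well-worn pattern these tools reproduce readily. As a rough illustration (my own sketch, not from the talk; names are invented), the core of such a system is tiny:

```python
# A minimal in-memory CRUD store — illustrative only, sketching the
# well-worn create/read/update/delete shape that generation tools
# have seen thousands of times.
import itertools

class CrudStore:
    def __init__(self):
        self._rows = {}               # row_id -> record dict
        self._ids = itertools.count(1)  # monotonically increasing ids

    def create(self, data):
        row_id = next(self._ids)
        self._rows[row_id] = dict(data)
        return row_id

    def read(self, row_id):
        return self._rows.get(row_id)

    def update(self, row_id, data):
        if row_id in self._rows:
            self._rows[row_id].update(data)
            return True
        return False

    def delete(self, row_id):
        return self._rows.pop(row_id, None) is not None

store = CrudStore()
uid = store.create({"name": "Ada"})
store.update(uid, {"role": "engineer"})
print(store.read(uid))   # {'name': 'Ada', 'role': 'engineer'}
print(store.delete(uid)) # True
```

Everything beyond this core—persistence, validation, the UI itself—is layered on the same repeated shape, which is precisely why it automates well.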
And Dario ignored one thing—I'll borrow a Shakespearean paraphrase: "There are more things in the computing world than are dreamt of in your philosophy." The computing world is far more than just "scalable Web systems." We do apply many things to these Web-centric large-scale systems today, which I think is good and great, but it also means there are still many things out there that haven't been automated. We will continue to push the frontier further out.
I told those stories at the beginning because history is repeating itself—or as some say, history rhymes. The phenomenon happening today is essentially the same, just occurring at different abstraction levels.
This is the first point: the software world is bigger than what he's looking at. It's not just "software-intensive systems," and not just the small piece he's focusing on.
The second point: if you look at the types of systems these agents mainly handle, they're essentially automating patterns we've seen repeatedly and that they've also learned repeatedly in training. Patterns themselves are a new abstraction: they're not just individual algorithms or individual objects, but "societies" composed of groups of objects and algorithms working together.
These agents are very good at automating "pattern generation." I want to make something; I can describe it in English—because patterns were originally described in natural language.
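The "society of objects" idea can be seen in miniature in a classic pattern like Observer, sketched here in Python (my own illustration, not from the talk): the interesting behavior lives in the collaboration between objects, not in any single one of them.

```python
# Observer pattern in miniature: a Subject broadcasts events to a
# society of attached observers. The pattern is the collaboration,
# not any individual class.

class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # Behavior emerges from the group working together.
        for observer in self._observers:
            observer.update(event)

class Logger:
    """One possible observer: records every event it sees."""
    def __init__(self):
        self.seen = []

    def update(self, event):
        self.seen.append(event)

subject = Subject()
log = Logger()
subject.attach(log)
subject.notify("deploy started")
subject.notify("deploy finished")
print(log.seen)  # ['deploy started', 'deploy finished']
```

Because patterns like this were first documented in natural language, describing one in English is often enough for an agent to reproduce it.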
That's why I think he's wrong. Good luck to him. But I think this is a more exciting era, not an era that requires existential panic.
Let me tell another story about "rising abstraction levels."
English is a very imprecise language, full of ambiguity and nuance. You might think: how could this language possibly become a "useful language"? The answer is: we as software engineers have actually been doing this all along.
I go to someone and say: "I want the system to do this, roughly like this." Then I give some examples. I've been doing this all along. Then someone will convert it into code. In other words, we've long been at a higher level of abstraction: I say "I want it to do something."
Here's a specific example: I was recently using a library I'd never touched before—JavaScript's D3, which can do fascinating visualizations. I went to a website called Victorian Engineering Connections. That's a lovely little site, a project someone named Andrew made for a museum. You can enter a name, like George Boole, and you can see his information and the social network around him. You can click to explore. It's very cool.
I thought: "I want something like this too, but oh my, I have no idea how to make it. What should I do?" He gave me the code, and I found it used D3. I knew nothing about D3. So I told Cursor: "Make me the simplest version: five nodes, display them." This way I could study the code.
Then I could continue saying: "What I really want to do is this. Make the nodes have a certain style, depending on their type." Just like I was making a request to a human collaborator, I was expressing my needs in English, and now I don't have to toil to turn it into reality—I can talk to the tool and let it help me complete it.
So it shortened the distance between "what I want" and "what it can do." I think this is great; it's a breakthrough.
But don't forget, I said to Dario: this only works in certain scenarios—when I'm doing something others have done hundreds or thousands of times. In Feynman's style, he might say: "Do it yourself; that's the only way you'll understand." My reaction is: that's true, but there are too many things in the world I'm curious about. I can't possibly understand everything from scratch by myself. So let the tool do part of it for me—I'll decide what I want to do.
That's why I say these tools are a new leap in abstraction level: they're shortening the distance between my needs expressed in English and the final programming language.
Finally, I want to say: what language is both precise and expressive enough to build "executable artifacts"? We call it a programming language. Coincidentally, English under certain conditions is also "programming-language-like" enough—a bit like COBOL: if I express needs with sufficiently clear phrases in a sufficiently structured domain, it can give me a "good enough" solution, and as someone who understands the fundamentals, I can then push forward, correct, and clean up these details.
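The "COBOL-like" point can be made concrete with a toy example (my own, not from the talk): once English phrases are sufficiently structured, they behave like a tiny programming language that a simple interpreter can execute directly.

```python
# A toy interpreter for structured-English commands, illustrating how
# constrained natural phrases become executable — COBOL-style.
import re

def run(program: str, env: dict) -> dict:
    """Interpret lines like 'SET total TO 0' or 'ADD 5 TO total'."""
    for line in program.strip().splitlines():
        line = line.strip()
        if m := re.fullmatch(r"SET (\w+) TO (-?\d+)", line):
            env[m[1]] = int(m[2])
        elif m := re.fullmatch(r"ADD (-?\d+) TO (\w+)", line):
            env[m[2]] += int(m[1])
        elif m := re.fullmatch(r"MULTIPLY (\w+) BY (-?\d+)", line):
            env[m[1]] *= int(m[2])
        else:
            # Outside the structured subset, English is too ambiguous.
            raise ValueError(f"Cannot execute ambiguous phrase: {line!r}")
    return env

state = run("""
    SET total TO 10
    ADD 5 TO total
    MULTIPLY total BY 2
""", {})
print(state["total"])  # 30
```

The fragility is the point: within the structured subset the phrases execute precisely; one step outside it, a human with fundamentals has to step in and disambiguate.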
This is why fundamentals are so important.
Low-Hanging Fruit Gets Picked First: Time to Climb Up
Gergely: Every leap in abstraction level makes some skills obsolete while creating demand for new skills. For example, after transitioning from assembly language to higher-level languages, the ability to be familiar with a piece of hardware's instruction set and optimize for it was replaced by higher-level abstract thinking. In the current leap, we could say we're entering a stage where "we no longer need to write code by hand; computers generate it automatically, and we just need to check and fine-tune." As software practitioners, which skills will gradually become obsolete? Which capabilities will become more important?
Grady: Today's software delivery pipelines are much more complex than they should be. Without a mature pipeline, even just getting a system running is very difficult. And inside companies like Google and Stripe, they've built massive and highly customized infrastructure systems.
Because of this, there's a lot of "low-hanging fruit" that's very suitable for automation. We don't need humans filling in these tedious, peripheral steps. In these areas, we're likely to see intelligent agents emerge: when I need to quickly deploy resources in a certain region, I don't want to write that complex, messy code myself; I'd rather let an agent system complete it. This kind of work has obvious automation value in terms of economic efficiency and safety, so it will indeed eliminate some jobs. Correspondingly, people will need to relearn and shift to building higher-level applications.
Similarly, roles centered on building a single iOS app or a specific frontend product will also see contraction, because many of those things can now be done through prompts alone. This isn't bad; it enables a new generation to complete work that previously only professional engineers could do, just as happened in the personal computer era.
Therefore, what people should do is not resist change, but move up one abstraction level and start focusing on the "system" itself. I think the current transformation is no longer from programs to applications, but from applications to systems. The new skill focus is: how to manage complexity at scale, and how to handle technical and human factors at the same time. As long as you have this systemic capability, your work won't disappear; it will become scarcer and more sought after.
Gergely: For students, newcomers about to enter the industry, and senior engineers who want to go back and solidify their foundations, where would you suggest they start?
Grady: When I face extremely complex problems, what I most often return to is systems theory. You can read the research results of Herbert Simon and Allen Newell, or follow the Santa Fe Institute's related work on complexity and systems. These basic principles of systems theory provide solid anchor points for me to build next-generation systems.
I once participated in NASA's Mars mission research. The problem at the time was how to support humans executing long-term deep space missions and let robots operate autonomously on the Martian surface. This was essentially a systems engineering problem, because these capabilities had to be "embodied" in spacecraft. By contrast, much of today's AI software is disembodied; it has no direct connection to the physical world.
During that time, I was also learning from several neuroscientists, trying to understand the structure of the brain. It was in this process that I realized certain structural patterns in systems engineering could be directly applied to the design of ultra-large-scale systems. For example, Marvin Minsky's "Society of Mind" model is essentially a multi-agent system architecture, and we're only just beginning to truly tap into the potential of agent programming.
Similarly, there are the "blackboard model" and "global workspace" proposed in early AI systems, and the subsumption architecture proposed by Rodney Brooks, all inspired by biological systems. Biological systems themselves possess highly effective architectural approaches, capable of completing complex behaviors even without central control.
Therefore, returning to your question, my advice is: re-examine architecture from a systems perspective, draw inspiration from biology, neuroscience, and real-world complex systems. Many problems have actually been studied before; we've just reapplied them in new contexts. The basic principles of engineering haven't disappeared.
Gergely: Looking back at previous leaps in abstraction levels and the golden ages of software engineering, what did those who stood out in the early stages of new eras—even if they weren't outstanding in the old era—do right? From history, what advice would you give?
Grady: As I mentioned earlier, what truly limits us in the software field is actually imagination. Of course, we're also limited by physical laws, algorithmic capabilities, and ethical constraints. But what's happening now is: much of the friction, cost, and resistance in the development process is disappearing. This means we can finally put more attention into imagination itself, to build things that were previously impossible to implement. Before, we couldn't do it because we couldn't organize enough people, couldn't bear the costs, and couldn't achieve global reach. Now, these limitations are being lifted.
Therefore, you should view all this as an opportunity. For some vested interests, this may be a loss; but overall, this is a net gain. It liberates our imagination and enables us to do things in the real world that were previously impossible. This is an exciting and also frightening era, but this is exactly how it should be. When you stand on the threshold leading to something new, you can choose to gaze into the abyss, fearing to fall; or you can choose to take the leap and spread your wings. Now is the time to fly.
Reference link:
https://www.youtube.com/watch?v=OfMAtaocvJw
Disclaimer: This article was compiled by InfoQ and does not represent the platform's views. Reproduction is prohibited without permission.