Edited by Yun Zhao
“AI killed the individual contributor, and software developers are turning into managers!”
“People graduating ten years from now might change jobs 50 times before they retire!”
Philip Su, founder of Superphonic, formerly head of website engineering at OpenAI and a senior engineering leader at Microsoft and Meta, recently published an article titled “AI Is Killing the Individual Contributor,” describing the fundamental shift AI is bringing to the programming experience.
In the article, he argues that with the emergence of AI, the foundational software development role of the “individual contributor” is permanently disappearing. No matter how good a programmer you are (in fact, especially if you program better than AI), this cannot be changed.
Instead, software developers will increasingly lean toward the role of “manager”: prioritizing between different agents, judging which of two LLDs (low-level designs) is more reasonable, and planning next week's work; in essence, delegating almost all execution work.
In a recent interview podcast, Philip Su reiterated this point. He shared his experience at Meta: he reached level E9 at Meta, then voluntarily stepped down to E7, just to become an IC and write code again. However, with AI's programming capabilities becoming stronger and stronger, he believes the traditional IC role may be over—that mode of sitting alone in an office focusing on solving technical problems and discussing trade-offs with two or three colleagues.
Furthermore, Philip proposed some refreshing views, such as that in the future, “human intervention” will actually become a sign of declining code quality. Companies might deliberately hide the fact that “humans modified the code” because it will be seen as a quality risk.
When will AI replace human daily work? The key lies in accountability. If your Claude Bot makes a mistake, the responsibility lies with you. Issues of accountability and regulatory systems are also reasons why some jobs remain safe for now.
Regarding the relationship between AI and human work, the current mainstream view is divided into two camps: the Augmentation Camp and the Replacement Camp. The Augmentation Camp believes our generation won't be completely replaced; the Replacement Camp believes jobs will disappear and advocates planning ahead for second and third-order consequences. But regardless of the camp, believing “AI won't change my job” is the only definitely wrong judgment.
Below, the editor has compiled the full text of the podcast without changing the original meaning. The guest's points are quite thought-provoking; it is worth bookmarking and reading carefully. Enjoy!
The Traditional Role of the Individual Contributor Is Over
Host:
Philip, thank you for coming back. In that article “AI Killed the IC,” you proposed a point: traditional roles are dying out, and everyone will become a manager. You wrote that we have crossed the “event horizon,” Pandora's box has been opened, the genie is out of the bottle, and there is no going back. Have you ever experienced that “coding agent moment” yourself?
Philip Su:
Yes. I started using AI programming tools about two years ago, beginning with GitHub Copilot. In the early days it was more like code completion: filling in enums, syntax structures, and so on. It did save a little time, but it was hard to convince former colleagues that AI would play an important role in their work on code. In the last two months, though, the situation has fundamentally shifted. Many people who were originally skeptical are now starting to accept the possibility that this isn't a flash in the pan: AI really can complete software work at considerable scale, not just autocomplete a few lines.
Host:
To avoid the show feeling “outdated,” let me timestamp this: today is January 29, 2026. Producing an episode takes a few weeks, so if some of what we say lags behind reality by the time you hear it, that's why. I thought a heads-up was in order.
Philip Su:
This recording might be considered to have taken place three months before the “singularity.”
Host:
Oh my god, this episode might “expire quickly.” I hope everyone remembers this sentence: software developers are becoming managers. I really want to hear your opinion on this, because you reached E9 at Meta, then voluntarily stepped down to E7, just to be an IC and write code again. Your career trajectory has gone in reverse, and more than once. So, is the concept of “being an IC” now obsolete?
Philip Su:
I switched between management and IC roles about six times in my career. I do think the traditional IC role may be over—that mode of sitting alone in an office focusing on solving technical problems and discussing trade-offs with two or three colleagues. Because if you think about the time you spend working with AI now, many of the things you do are actually what traditional managers do: prioritizing between different agents, judging which of two different LLDs (Low Level Designs) is more reasonable, planning next week's work, essentially “delegating” almost all execution work. So the work has become “meta-work,” which is the essence of most management roles.
Host:
So does that mean all ICs will become managers, and managers won't be needed? Or that managers won't need ICs?
Philip Su:
I think both scenarios will happen. We've all had this experience: walking into a traditional “non-software” industry, like a plumbing, HVAC, or dental clinic, and software engineers often think, “If you gave me a week, I could overhaul your entire system.”
Host:
That's our arrogance. Once you actually get in, you realize there are many scenarios we hadn't considered at all.
Philip Su:
Exactly, the problems are indeed complex. But I do believe software engineering will expand into these traditional fields, enabling people like dentists to build the tools they need themselves. I recently listened to an episode of the “Hard Fork” podcast where they invited non-programmers to share their success stories using AI. A plumber said he relied entirely on AI to write a client scheduling system for himself. Going back to your question—are managers still needed? I think when we expand into traditional non-software industries, a lot of new businesses will emerge, which is an “increment.” As for the management structure, I think it will still exist. Even in today's organizations, there are already managers of managers, with many layers. This is a human collaboration issue that still requires humans to solve in the short term.
Host:
Are we at a “layover” or the final destination? If all ICs become managers, why can't AI just be the manager?
Philip Su:
That question isn't naive. I've said similar things for years, and it sounds less and less crazy. I often ask people: will this system really still be unable to do it by the year 3000? Is there any capability AI can fundamentally never possess? There are indeed very smart people, like Yann LeCun, who say large language models fundamentally cannot complete certain tasks.
But even if that judgment is correct, I think there are two problems with this argument. First, there are indeed some top-tier humans that AI will find hard to surpass for a long time. But most people in the world are not that kind of top talent. Just like when Deep Blue played Garry Kasparov. You could say AI can't beat the strongest human chess player yet, but in fact, even in the Deep Blue era, it could already defeat most humans. Our real problem isn't “Is there a human who can surpass AI in the next 50 years?”, but rather, if AI is stronger than 90% of people in a certain field, what happens? For example, 90% of managers—if they are like the manager in Dilbert, can AI replace them? Probably yes.
AI Is Getting Stronger, But Accountability Issues Remain Unresolved
Host:
In the past month, I realized I might be a bit of a Luddite, constantly denying AI programming and agent capabilities. I found myself constantly “moving the goalposts”: it can't do LeetCode problems, it can't solve Math Olympiad problems, it can't refine requirements... but then I realized there are two types of constraints: temporary constraints and fundamental constraints. And what I latch onto each time is a temporary constraint. People often say AI is the worst it will ever be in its life because it only gets better; its capabilities increase monotonically. So I started looking for “fundamental constraints.”
Actually, there is a third type: although not a fundamental limit, it may not reach it within my lifetime. You say 3000, but I probably won't live to see 2100. I just need to “outrun” it within my professional life cycle. As for my children and grandchildren, that's another matter.
For example, I have a college classmate who is a radiologist. For 20 years, I've been telling him that your job will eventually disappear. Because he has very rich training data—MRI, CT, X-ray images, and corresponding diagnostic results; the training path is structured. In theory, why is he still able to keep his job?
This is very similar to programming: code itself is language expression, and GitHub data can be scraped. But radiologists haven't been replaced. I remember Andrej Karpathy saying their jobs are safe in the short term. Later I figured out the core lies in the “accountability mechanism.” He has to sign off on every diagnosis. If he says it's cancer but it isn't, or vice versa, the responsibility is his. There can be insurance, but the subject of responsibility is him. And AI cannot “bear the consequences.” You can hand your bank account to AI and let it do business for you to make money, but if it fails, you can't really “fire” it.
Philip Su:
You raised a very important point: regulation. Radiologists still exist partly because of the regulatory system. Similarly, other industries have unions. Take the London Underground: even though the trains can already run automatically, the union still insists on having a driver aboard, plus a second person to keep the driver alert. So regulation and union behavior will delay replacement.
But I also thought of the concept of “corporate legal personhood.” It took humans a long time to accept that a company can exist as a “legal person,” able to sign contracts and bear responsibilities. Once that abstract concept was accepted, a massive amount of innovation was released. I think AI might be similar in the future—perhaps some form of “legal personality” will be granted to establish an accountability structure.
Host:
What does it mean for a model to have “legal personality”? Under the current architecture, your API token points to a model in a data center. How does it bear consequences? Does OpenAI bear it? How does the model itself bear it?
Philip Su:
That's a good question, and I don't know how to draw the line either. For example, if your Tesla can automatically rent itself out as a robotaxi and then has an accident, is the liability yours? Is it Tesla's? I don't know how responsibility will be allocated in the future. But I am certain that as people use agents more and more, these kinds of legal issues will definitely surface. Someone will eventually use an AI agent to make a catastrophic decision, and the case will enter the judicial system. The answer isn't clear yet, but it will definitely be tested by reality.
Future Human Intervention in Coding Will Be Seen as a “Quality Risk”
Host:
Back to software development. When did you really feel the switch had been flipped?
Philip Su:
I'm an early adopter of AI tools, maybe because I'm lazy and don't want to write repetitive code, or maybe I'm just technically inclined. The past year has seen huge changes. In the past six months, I've written very little code. In the last two months, with models like Opus 4.5, the latest Claude Code, and Codex 5.2 emerging, my trust in AI has increased significantly. I haven't let go completely, but working with two code review bots on GitHub, I review code much less carefully than before. We'll even start multiple agents simultaneously to edit files in parallel.
Host:
Anish (partner at a16z) said we are entering a “YouTube moment” for software. When YouTube first appeared, people said how can ordinary people make videos? But later we found we didn't lack content, and YouTube even replaced many TV viewers. Similarly, now non-programmers can use AI to realize their own app ideas.
But TV and movies haven't disappeared. For a company's core systems, like S3 or Azure, would you really trust “vibe coding” without strict review?
Philip Su:
Maybe not right now. But I've had an experience using code review bots over the past six months. Two and a half years ago when OpenAI tried to make a code review bot internally, there were many false positives and the quality was poor. Now if you use one or two review bots on GitHub, you will find the issues it points out are often missed by human reviewers. When quality crosses a certain threshold, we might flip: we will only use cloud services that guarantee no unqualified humans have touched the code.
Host:
Are you saying that in the future “human intervention” will actually be a sign of declining quality?
Philip Su:
Yes. Perhaps one day, companies will deliberately hide the fact that “humans modified the code” because that will be seen as a quality risk.
Taste Is Just a Temporary Limitation for AI
Host:
That's crazy. I hadn't even thought that humans touching code would be a negative signal. Now some people say humans won't be replaced because we have “taste.” Is taste a fundamental limit or a temporary limit for AI?
Philip Su:
I think it's temporary. There are indeed a few truly outstanding people. But every artist feels unique, as if inspiration came out of thin air rather than from studying other people's work. In fact, all artists study other people's work. We feel original only because we cannot trace the training source. Taste is similar. Think about it: among the colleagues you've worked with, how many really have outstanding taste? The average person doesn't actually do very unique work.
Will Humans Still Create If AI Scrapes Originality?
Host:
Speaking of originality and attribution. Being inspired is normal, but if AI obviously mimics a specific person's style, is that reasonable? The US Constitution has a patent clause to protect innovation. Now AI scrapes data without a similar system; will we head toward systemic collapse?
Philip Su:
I agree that the current legal system isn't ready for how to incentivize creation. Patents last for seventeen years, but the business environment iterates at a completely different speed now. Does the system need to shorten the cycle? I don't know the answer. But many so-called “originals” are actually recombinations of elements from different fields.
Host:
It's the same with music; many pop songs use the same chords. But when the cost of imitation approaches zero and original creators have no protection, why publish publicly?
Philip Su:
Some musicians must create even if they don't make money. Many programmers will write open source code even without pay. The human impulse to create won't disappear. Of course, if there is a better legal structure to reward creators, that's better. But even if everything approaches zero, I don't think everyone will just lie on the couch scrolling through Netflix. There will always be a group of people who create out of interest.
Host:
I don't think we will head toward a “marketless” world. I support market mechanisms, but we need rules to avoid excessive concentration of power. The market always moves to new frontiers. Software used to be expensive, now it's cheap. In the past, a “drunk uncle” proposing an app idea would ask you to write the code; now he can realize it himself. So will we never have to pay for apps again? Or does making a truly excellent app still require skills?
Just like YouTube, the barrier to video production has lowered, but truly high-quality content is still scarce. The question is, when AI can also mimic and extend these skills, where will the market turn to? Will what is truly valuable always be the part AI cannot do? This seems like it will become a continuously moving boundary.
Philip Su:
Regarding the “moving frontier,” I think it's essentially about striking a balance between profit and consumer surplus. When the cost of producing useful software approaches zero, if you can still earn huge income by producing software, I would view that as a market failure, because that's essentially rent-seeking. As costs approach zero, value will gradually convert into consumer surplus, meaning people can use software almost for free and benefit from it. It's just that few of us think from the perspective of consumer surplus. Everyone is thinking “how can I make money from this,” without realizing that the very fact that I can make money means the market isn't free enough and competition isn't sufficient; otherwise, profits would be compressed to zero. But when profits go to zero, it actually means consumer surplus has been maximized.
Let me give a concrete example. Sixty years ago, large companies in the US would typically have at least one entire floor of secretaries in any large office building. Their full-time job was typing and writing memos. Later, word processing software appeared, and everyone could type for themselves.
On one hand, I would bet that 90% of secretaries back then would laugh at those who replaced them for having poor typing skills; but on the other hand, typing ability was democratized. Now more documents are being typed than ever before, even if the typists aren't professionals. The same is true for software—in the future, more software will be written by people with average technical skills. Overall, this is a good thing for the world. Although those who truly master software will sit in coffee shops complaining that “current software is all garbage,” just like the typists of the past would show off their 180 words per minute and mock the newcomers.
Host:
The mail merge function alone wiped out the entire layer of people responsible for typing addresses one by one into template letters. I understand the analogy. But your point just now shocked me a little: are you saying that human participation in software will actually become a drag on quality? That regardless of whether you are “competent enough,” everyone is now “good enough,” and software has no moat? We used to say the moat was “taste,” but I'm starting to feel that even taste can be mimicked.
Philip Su:
I completely agree that this is a core issue. Think about DJs: they don't create music, but they demonstrate “taste” through selection and mixing, and the world acknowledges that they have taste. But in the long run, is this really a field only humans can work in? Even if AI currently lacks musical taste, it has infinite patience: it can try ten thousand combinations and then select the two that work. It will overwhelm humans in quantity and scale.
Host:
On AI, my own feelings swing back and forth. One day I feel more powerful than ever because of AI support; the next day I feel my value is approaching zero. I switch between these two extremes almost every day. At the macro level, will the economy take off because of this, with GDP soaring, or will capitalism collapse?
Philip Su:
I often argue with friends in the tech world: will AI replace jobs or augment them? The answer may not be 100% on one side. But one thing is certain: no matter which camp you believe in, software and work itself will change faster in the coming year than ever before. If you think work a year from now will look the same as it does today, you are almost certainly wrong. And if you don't actively bet on either an “augmentation future” or a “replacement future,” you are almost destined to be left behind.
Host:
So believing “AI won't change my job” is the only definitely wrong position?
Philip Su:
That's right, it's the only definitely wrong judgment. It's like the lady who proudly withdrew her cash before the 1929 stock market crash. Years later, a reporter asked her about that wise decision, and she said, “I still have that money.” You might have made one right call, but maintaining the status quo for the decades afterward isn't necessarily the correct strategy. Right now, the only definitely wrong judgment is thinking your job will stay the same.
AI Replacing Humans? No, It's the Augmentation Camp
Those Middle-Class People Who Advise Others to Be Plumbers Don't Go Themselves
Host:
It's now late January 2026. Many people have experienced their own “Claude Code moment” and realized the trend is irreversible. The question now is which side you're betting your career on. The Augmentation Camp, believing our generation won't be completely replaced? Or the Replacement Camp, believing jobs will disappear and planning ahead for second- and third-order consequences?
Philip Su:
If you stand with the Augmentation Camp, you should be running eight AI agents simultaneously right now, building custom sub-agents, subscribing to Cursor, Claude Code, and other advanced tools, even spending $900 a month, because you want to be an efficiency monster. If you stand with the Replacement Camp, you should actively upgrade your skills, assuming your job will disappear. I find it funny that those tech middle-class people often advise others to be plumbers or welders, but they don't go themselves. If you really believe in replacement theory, you should go learn welding yourself.
Host:
Indeed. Talk is cheap, look at actions. At least for software developers, if you don't embrace AI tools now, you are already losing.
Philip Su:
Look back at business trends over the past 45 years: in the 80s, creativity was king; after 2000, execution was king, as in “ideas are cheap, execution is everything.” In the next two years we will see the answer: when execution cost approaches zero, is creativity the scarce thing? If creativity is scarce, then taste becomes key; if the world itself lacks good ideas, then execution is still the bottleneck.
Host:
I find it hard to believe that creativity will dry up after execution costs hit zero. For example, the “joke space”—the set of all possible jokes that can be told. AI currently doesn't perform well in this area. Code only needs one feasible solution to be enough, but jokes and poetry are different.
Philip Su:
We need to distinguish two questions here: When can AI be funnier than Dave Chappelle? And when can AI be funnier than the average open mic in Seattle? The latter is much easier. AI doesn't need to surpass everyone, it just needs to surpass the majority to cause enough economic disruption.
Host:
I still can't see AI writing soul-stirring poetry yet. Big companies are pouring energy into the code field. Protein folding has a bounded solution space, while the joke space might be infinite.
Philip Su:
Optimistically, there is another distinction: Do you buy because it's “better,” or because it's “human-made”? For example, handmade rugs, or when Starbucks replaced human baristas with one-touch automatic coffee machines. Automatic coffee might be more consistent, but many people are willing to pay a premium for “human participation.”
Will a world appear in the future where you are willing to pay a higher price for human digital art in AAA games, just like paying a premium for fair trade coffee? Human value no longer comes from beating machines, but from the fact that “we are human.”
Facts Prove: AI Only Makes People Do More Work, Not Replace You
Zero Software Cost Won't Lower Super Bowl Tickets Either
Host:
I just installed Clawdbot, now called Moltbot. I haven't been this excited about AI in a long time. It doesn't feel like a tool; it feels like texting an employee. When do you think AI will replace daily office work, like filing taxes?
Philip Su:
Hard to say. A key question is liability. If your Claude Bot does something wrong, the responsibility is yours. But when more entities are involved in the future, the problem will become complex. Another question is: Will you “use fewer people to do the same thing,” or “use the same number of people to do more things”? For example, in your own business, will you lay people off, or will you expand the business while keeping the team size the same?
Host:
This is what I'm thinking about now. I left the software industry to start a business two years ago, and the result is I work more than I did at Amazon. Now I'm thinking: Am I building a lifestyle business or a growth business? If AI can help me, will I actually work less?
Philip Su:
The healthcare industry is an example. Every time a new tool is launched, the press release says “this gives doctors more time to spend with patients.” But the reality is, doctors spend less time with patients than ever before. They just see more patients. You will be the same. Unless you are extremely self-disciplined, you will convert efficiency gains into higher-intensity chasing.
Host:
Chasing a promotion is like smoking. We always think we'll stop once we reach a certain goal, but actually we get addicted.
Philip Su:
Exactly. Like Tim Ferriss wrote The 4-Hour Workweek, but I guarantee he is busier than ever now. Keynes predicted that by the end of the 20th century we would only work 15 hours a week. Productivity did rise as he predicted, but white-collar workers actually work longer hours now. The problem is: our desires have no upper limit.
Host:
Maybe that's why the “good old days of the 40s and 50s” were actually just low expectations. Houses were only 950 square feet then.
Philip Su:
Exactly. The median new home size is three times that of back then, but families are smaller. Our desires rise along with our capabilities. The AI Augmentation Camp believes this infinite desire will continue to absorb employment.
Host:
Then, like the replicator in Star Trek, if it can produce everything, where is the limit?
Philip Su:
The limit is in “status goods.” Like the best seats at a basketball game. Only one person can have them. Super Bowl tickets are an example. No matter how low the software cost is, the price of such goods will only continue to rise. Because their value comes from relative status, not production cost.
Host:
I just bought Super Bowl tickets, and they were ridiculously expensive. Zero software cost won't make the tickets cheaper.
Philip Su:
Correct. The rise in Super Bowl prices partly reflects wealth inequality. Stadium capacity is limited, and the rich are competing for the same scarce resources. A world may appear in the future where certain goods are ridiculously cheap, while certain status goods are ridiculously expensive, but people still buy them.
Future People's Work Roles Will No Longer Be Fixed
People Graduating Ten Years From Now May Change Jobs 50 Times
Host:
You mentioned the voice actor example earlier. I'm not sure whether that person was a friend or relative of yours, but it's just like what I did a moment ago: I asked my bot, Moltbot, to go to ElevenLabs and generate a version of “my voice.” It actually did it, and then sent me a message in my own voice. In theory I could perform those steps myself, but the situation now is that I just tell it, and it completes the task and sends back the result.
Perhaps the real shift is that the era when your job role is fixed and you can do it for a lifetime is disappearing. When I was a kid, people said that unlike your parents' generation, you might change jobs five times in your life. Maybe what's changing now isn't whether you'll change jobs but the frequency: the concept of a static career and a static position is fading. There is a constantly expanding frontier of value, so new things constantly become valuable. Perhaps the real meta-rule is that you must remain open to all change.
In other words, if you graduate ten years from now, you might change jobs fifty times before you retire.
Philip Su:
I think so. You mentioned the distinction between “maximizers” and “satisficers.” In the modern world, if you are a maximizer, you might find it hard to feel happy because there are too many choices. The “Big Five” personality model includes a dimension called “openness to experience.” We are about to enter a world where half of the people will be very happy because they score high on that dimension; they will enjoy an environment of changing jobs every four years.
But the other half will be very miserable. They hate the world changing, yet have to adapt, otherwise they can't make a living. For these people, the future will be very tough.
Note: The Big Five personality traits is a mainstream psychological model that divides human personality into five core dimensions (Openness, Conscientiousness, Extraversion, Agreeableness, Emotional Stability) to describe a person's long-term stable behavioral tendencies.
Host:
This reminds me of a short story, possibly by Alan Lightman. He envisioned that if humans were truly immortal, two types of people would emerge: one type would be a doctor for a while, then a dentist, then a lawyer, then an adventurer, switching identities throughout their life; the other type would say, “I'm fine as I am.” What we are discussing now seems to be that the first type of person will have the advantage in the AI era.
Philip Su:
They will thrive. Our world will undergo a change similar to the “Cambrian explosion.” The work environment is like being hit by an asteroid—either you are the kind of creature that adapts quickly, accelerates evolution, and seizes new opportunities during the drastic change; or you will constantly try to keep the old model running, and then slowly realize it has failed.
Philip's Startup Project: A Podcast Player
Host:
Talk about your project. You're still working on that podcast app. Why haven't you given up?
Philip Su:
That's a timely question. I am indeed thinking that in a world where everyone can build custom software for themselves, maybe within a year someone will copy my entire podcast player, just changing two features I didn't have time to do. That world is coming.
So the question is—can I still make this a lifestyle business? I suspect not. This has led me to treat it as a hobby, not a career. But what should I do next? I'm more uncertain than ever. There are too many things in the software field I can do just because I like them. But if I'm just someone entertaining myself, producing things useless to others, I feel that's not an ideal life either. So I hope to do something that is truly valuable to others.
Currently, I force myself to write an article on Substack every week, mostly as discipline training. I have never used ChatGPT or any large model to write a single word for me, because I enjoy the writing itself. I tried using voice dictation tools for a few paragraphs, but even that reduced my joy. Writing is a physical act for me; text generated by voice reads differently. I'm a bit old-school: although I don't write by hand, I insist on hitting the keyboard myself.
I don't want to turn life into pure self-entertainment. But I also admit that the marginal value of the skills I've relied on for a living may soon approach zero.
Host:
You should label your newsletter “100% Human Generated,” that will become a tag with value.
Philip Su:
And also deliberately misspell the word “human.”
Host:
Honestly, I use AI for my own newsletter too, but I've significantly reduced its use. Sometimes after writing, I feel the whole text is full of “AI flavor.” Then you want to use AI to “remove the AI flavor,” but it can't do it.
I start to wonder if this is a temporary limit or a fundamental limit. Sometimes I find myself asking AI to “be more like a human, make more mistakes,” and then suddenly realize—I need a human, and that human is myself.
Now I only use AI for brainstorming. For example, I wrote a “layoff emergency kit” email; I listed seven points first, then asked AI if I missed anything, and it added two points. After that, I wrote everything myself. I even hired a real human editor to finish proofreading within two hours, rather than outsourcing it to AI.
Philip Su:
Because he hasn't read The 4-Hour Workweek and won't outsource it to AI.
The Importance of the Human Touch
Host:
Gergely Orosz writes The Pragmatic Engineer, and he hires human researchers. Someone seriously suggested he use AI agents instead. The problem is that AI has already absorbed his content and would just feed back to him what he himself wrote. His value comes from original interviews and research, which AI cannot do.
Philip Su:
You're talking about the importance of the human touch. Let me ask you a question: what year does Polymarket predict AI will first win the World Series of Poker?
Host:
Assuming they allow AI to participate. I think when more than half of the participants are AI, they will start winning. It's a probability issue. But precisely because of that, they might never allow AI to participate.
Philip Su:
This question has two levels. One is the “reading people” factor in face-to-face poker; does AI need visual input? The second is, even if AI wins, will humans continue to play?
The Go world champion retired after AI defeated humans, saying “it's not fun anymore.” But chess participation actually increased. Poker might be different—it's a status good; the joy of beating others might be more important than beating a computer.
Host:
The founder of DeepMind (yes, Demis Hassabis!) was a chess prodigy as a child. He later realized that even if he played the best game of chess in history, the world would not change, so he turned to AI research, which eventually led to AlphaGo and advanced protein folding research.
Philip Su:
Whether it's replacement or augmentation, the work structure will change. This will force us to think: what exactly do we value humans for? Capitalism has only valued economic output for a long time, and AI will push this question to the forefront.
AI Has Shifted Value, Wallet Flows Have Changed
Host:
The related question is: Will AI drive the economy or destroy the economy? Rational people can only bet on “market shift,” not collapse. If it collapses, everything is meaningless. More likely, it's a value shift, a change in where the wallets flow.
Philip Su:
Another prediction I'm curious about is the first data center explosion incident. Extreme Luddite behavior is almost inevitable. Maybe that will be a symbolic moment: people admit the change is irreversible.
Host:
You actually planted this idea in the podcast? Okay, let's talk again in six months, maybe the world will be very different.
Philip Su:
When we chatted three months ago, I didn't expect the change to be this fast. See you in six months, maybe I'll come in a flying car.
Host:
The probability of a flying car isn't high. But where is “swarm coding” and “multi-agent management” going? Will I manage nine agents? Or manage one agent that manages nine? No one knows. But today's discussion was great.
Philip Su:
Excellent, thanks for having me.
Reference Link:
https://www.youtube.com/watch?v=HLxA1Gh-x3g