Jensen Huang's latest appearance on the Lex Fridman Podcast features a 2.5-hour deep dive covering nearly every topic from chip design to human consciousness.
Summary
1. CUDA almost destroyed NVIDIA. The decision to embed CUDA into GeForce GPUs back then caused company costs to skyrocket by 50%, and its market cap plummeted from $7–8 billion to $1.5 billion. Jensen Huang endured a decade of hardship before recovering, but this very move laid the foundation for CUDA's current dominance.
2. 60 direct reports, no one-on-ones. Huang's management team comprises over 60 members, including experts across GPU, CPU, optics, memory, and more. All issues are addressed through collective reasoning; there are no one-on-one meetings because NVIDIA fundamentally operates on "extreme co-design."
3. Four-layer Scaling Law. Pre-training, post-training, inference at test time, and Agentic systems—these four scaling laws are interlinked. Huang believes the expansion of intelligence ultimately hinges on one thing: compute power.
4. OpenClaw is the iPhone of tokens. Huang repeatedly refers to OpenClaw as the "iPhone of tokens," asserting its significance for agentic systems is comparable to what ChatGPT represented for generative AI.
5. "I believe we have already achieved AGI." By the standard of "creating a company valued over $1 billion," Huang contends AGI has already arrived. Of course, the success rate of having 100,000 agents attempt to build an NVIDIA is zero.
6. There will be more programmers. The definition of programming is changing. We are moving from 30 million programmers to 1 billion, because in the future every carpenter, accountant, and farmer will be able to "program" using natural language.
7. NVIDIA doesn't steal market share; it creates markets. Huang states NVIDIA's challenge is that "there is no one to steal share from"; almost all growth comes from entirely new markets.
8. No contract with TSMC in 30 years. Huang was once invited by Morris Chang to serve as TSMC's CEO, which he declined. Over three decades, the two companies have conducted hundreds of billions of dollars in business without a single written contract, relying entirely on trust.
9. Intelligence is a commodity; humanity is the superpower. All 60 people on Huang's team are smarter than him in their respective fields, yet he sits in the middle coordinating everyone. His view is: don't let the "commoditization of intelligence" bring anxiety; the word that should be elevated is "humanity."
10. Hopes to die on the job. Huang doesn't believe in "succession planning"; his approach is to transmit knowledge, judgment, and experience to everyone around him every single day.
Below is the full transcript of the interview:
CUDA: A Bet-the-Company Move
Lex: CUDA ultimately became an incredibly brilliant decision. But what was it like when you made that decision back then?
Jensen Huang:
That was probably the strategic decision closest to an "existential threat" that I've ever made.
CUDA expanded the range of applications we could accelerate, but the question arose: how do we attract developers? For developers to come to a platform, it has nothing to do with how flashy the technology is; the key is installed base. Installed base is what defines an architecture; everything else is secondary.
No architecture has been criticized more than x86, yet it survived. Meanwhile, those beautiful RISC architectures from the same period mostly failed.
Our thinking at the time was: GeForce is already selling millions of units annually, so let's just put CUDA into every single GeForce, regardless of whether users use it or not. Simultaneously, we pushed into universities, wrote books, created courses, and cultivated the ecosystem.
The problem was that CUDA significantly increased GPU costs. Our gross margin was only 35% at the time, and costs jumped by 50%, eating up all profits. The company's market cap fell from $7–8 billion to $1.5 billion.
In that range... we endured for a long time, climbing back little by little thanks to GeForce. I often say NVIDIA is a house built on GeForce. It was GeForce that delivered CUDA into the hands of every researcher, scientist, and student.
Lex: In those moments of existential crisis, how did you make such decisions?
Jensen Huang:
Ultimately, it's driven by curiosity. At a certain point, my reasoning system tells me so clearly that "this will definitely happen." Once I firmly believe it in my mind, you know, you go out and manifest a future. That future is so compelling that it cannot not happen. There will be immense pain in between, but you have to believe in what you believe.
And in terms of leadership, I never do those "year-end big reforms." No one-time massive layoffs, restructurings, or new logos. When I learn something new, I immediately start sharing it with those around me. Step by step, I shape everyone's cognitive framework. By the time I actually announce, "We are going all-in on deep learning," everyone in their hearts is thinking:
Why are you only saying this now?
Extreme Co-Design
Lex: You've pushed NVIDIA into a new era, moving from single-chip design to full rack-scale design. What is "extreme co-design," and what is the hardest part?
Jensen Huang:
The problem no longer fits inside a single machine. You add 10,000 computers, but you want it to be 1 million times faster. At that point, you have to restructure algorithms, partition pipelines, split data, and divide models. Everything becomes a bottleneck.
This is the problem of Amdahl's Law: if computation accounts for only 50% of the total workload, even if you accelerate computation by a million times, the overall speed only doubles.
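Huang's 50% figure is Amdahl's Law applied directly. A short sketch (my illustration, not from the interview) makes the arithmetic concrete:

```python
def amdahl_speedup(parallel_fraction: float, accel: float) -> float:
    """Overall speedup when only `parallel_fraction` of the work is
    accelerated by a factor of `accel` (Amdahl's Law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / accel)

# Accelerating 50% of the workload a million-fold barely doubles throughput:
print(round(amdahl_speedup(0.5, 1_000_000), 3))  # 2.0
```

This is why the non-compute parts of the system (networking, memory, power, cooling) have to be co-designed: any un-accelerated fraction caps the whole machine.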
So we have to use every technology available—CPU, GPU, networking, switches, power, cooling... otherwise, we can only scale linearly or rely on Moore's Law, which has already slowed down.
Lex: How large is your team?
Jensen Huang: I have over 60 direct reports.
Lex: Then how do you communicate?
Jensen Huang:
I don't do one-on-ones. We take a problem out, and everyone attacks it together. Because we are doing extreme co-design, the company itself is performing extreme co-design.
Even when discussing cooling solutions, the people doing networking, memory, and power are listening nearby. Anyone who wants to leave can leave, but if someone should be participating and isn't, I'll drag them over.
Lex: What is the scale of the Vera Rubin Pod?
Jensen Huang: 7 types of chips, 5 types of racks, 40 racks, 1.2 quadrillion transistors, nearly 20,000 NVIDIA dies, over 1,100 Rubin GPUs, 60 exaflops, and 10 PB/s of bandwidth. And that's just one Pod.
We produce about 200 such Pods every week.
Lex: With such complexity, is simplicity still your goal?
Jensen Huang:
The phrase I say most often is: Complexity should be just enough, while being as simple as possible.
Four Layers of Scaling
Lex: Do you still believe in Scaling Law?
Jensen Huang: Yes, and now there are even more scaling laws.
Lex: You listed four: pre-training, post-training, inference at test time, and Agentic. Which bottleneck worries you the most?
Jensen Huang:
Let's look back at the bottlenecks people thought existed.
In the pre-training phase, Ilya Sutskever made a remark like "we've run out of data," and the industry panicked, thinking AI had hit a ceiling. Of course, that was wrong. We will continue to expand training data volumes; a massive amount of data will be synthetic.
Many people don't understand synthetic data, but most of the data we teach each other is already "synthetic": it doesn't grow out of nature; humans create it. AI can now take real data, enhance it, and synthesize massive amounts of new data. So the bottleneck for training is no longer data; it has become compute power.
Then there is inference at test time. I remember people saying back then, "Inference is simple; inference chips can be made small and cheap, no need for big guys like NVIDIA."
This idea has always seemed illogical to me. Inference is thinking, and thinking is much harder than reading.
Pre-training is essentially memory and generalization; it's reading. But inference is thinking, reasoning, planning, searching, decomposing problems... how could the computational load be small?
Then there is the Agentic layer. An Agent can call tools, query databases, do research, and most importantly, it can generate a large batch of sub-agents.
Scaling NVIDIA by hiring more employees is much easier than scaling myself. So the next scaling law is Agentic Scaling, which is essentially "AI multiplied by AI."
When these four layers cycle, the expansion of intelligence ultimately boils down to one thing: compute power.
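The "AI multiplied by AI" point is easy to quantify: if each agent can delegate to a handful of sub-agents, demand for tokens grows geometrically with delegation depth. A toy calculation (my illustration, not Huang's numbers):

```python
def total_agents(branching: int, depth: int) -> int:
    """Total agents in a delegation tree where every agent spawns
    `branching` sub-agents, down to `depth` levels below the root."""
    return sum(branching ** level for level in range(depth + 1))

# One root agent delegating to 10 sub-agents, two levels deep:
print(total_agents(10, 2))  # 1 + 10 + 100 = 111
```

Each of those agents is itself running inference, which is why agentic scaling compounds on top of the other three layers.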
OpenClaw's iPhone Moment
Lex: Since last December, it seems people have suddenly awakened. Claude Code, Codex, OpenClaw—is something special happening?
Jensen Huang:
OpenClaw's significance for agentic systems is like ChatGPT's for generative AI. The reason it took off is that ordinary users can finally access it.
OpenClaw is the iPhone of tokens. It is the fastest-growing application in history. Straight up.
Lex: I have to admit, on my way here at the airport, I... spoke aloud to my laptop while programming. It was quite awkward because I was pretending to talk to a human colleague.
Jensen Huang:
In the future, the more likely scenario is that your AI will be bothering you constantly. Because it works so fast, it keeps reporting to you, "Done, what's next?" The person you chat with the most in the future should be your Claw, or your 🦞.
Safety Boundaries for Agents
Lex: What about safety? With such powerful technology, how do we ensure user data security?
Jensen Huang:
We immediately dispatched a team of security experts to create something called OpenShell, which has been integrated into OpenClaw. We also released NemoClaw.
The core principle is this: agentic systems have three capabilities: accessing sensitive information, executing code, and communicating externally. We guarantee that at any given time, at most two of these are open, but never all three simultaneously.
Within these two capabilities, we add enterprise-level access control and policy engines. This allows the Agent to get work done without going out of control.
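The "at most two of three" rule Huang describes can be expressed as a simple policy gate. This is a hypothetical sketch of the idea only; it is not OpenShell's or NemoClaw's actual API:

```python
# The three agent capabilities Huang names. Granting all three at once
# is what the policy forbids (hypothetical sketch, not a real API).
SENSITIVE_DATA = "sensitive_data"
CODE_EXECUTION = "code_execution"
EXTERNAL_COMM = "external_comm"
ALL_CAPABILITIES = {SENSITIVE_DATA, CODE_EXECUTION, EXTERNAL_COMM}

def is_allowed(requested: set[str]) -> bool:
    """Permit any combination of at most two known capabilities."""
    return requested <= ALL_CAPABILITIES and len(requested) <= 2

assert is_allowed({SENSITIVE_DATA, CODE_EXECUTION})   # any two: OK
assert not is_allowed(ALL_CAPABILITIES)               # all three: denied
```

The intuition: an agent that can read secrets and run code but cannot phone home, or one that can communicate and execute but cannot touch sensitive data, has no complete exfiltration path.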
From Accelerator to Computing Platform
Lex: How did NVIDIA evolve from a GPU accelerator company step-by-step into a computing platform company?
Jensen Huang:
The problem with accelerators is that the application domains are too narrow. Market size determines your R&D capability, and R&D capability determines how much impact you can have in the field of computing.
So we had to expand our reach, but we couldn't lose our specialization. There is a natural tension between these two words: the more general-purpose, the less specialized; the more specialized, the less general-purpose. We had to find an extremely narrow path in between.
Step one, we invented programmable pixel shaders. This was the first step towards programmability.
Step two, we put IEEE-compliant FP32 into the shaders. This step was significant because scientists who were running calculations on CPUs suddenly realized: this GPU has huge compute power, and now it complies with IEEE standards; my code can migrate over.
Then we put the C language on top of FP32, which we called Cg. Cg evolved all the way into CUDA.
Every step was about expanding the "aperture" of computing while preserving the core acceleration capability. It took over a decade, step by step.
Power and Supply Chain
Lex: What is the biggest bottleneck for AI scaling?
Jensen Huang:
Power is an issue. But this is also why we are pushing so hard on extreme co-design; we want to increase the number of tokens produced per watt per second by an order of magnitude every year. In the past 10 years, Moore's Law increased compute power by 100 times; we increased it by 1 million times.
There is one thing I really want to appeal for through your platform.
Our power grid is designed for worst-case scenarios. But 99% of the time, the worst case doesn't happen; the grid usually only runs at 60% of its peak. The remaining capacity just sits idle.
What I want to do is sign a new type of agreement with power companies: use their idle capacity during normal times, and during extreme weather, data centers automatically reduce power consumption. We can shift workloads to other places, or let computers run slower, using less energy, with just a slight reduction in service quality.
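The agreement Huang sketches is essentially a demand-response contract: run flat out while the grid has slack, throttle when it doesn't. A minimal, hypothetical rule (all names, thresholds, and megawatt figures are my illustration):

```python
def datacenter_power_cap(grid_load_frac: float,
                         normal_cap_mw: float,
                         curtailed_cap_mw: float,
                         threshold: float = 0.9) -> float:
    """Demand-response rule: draw full power while the grid runs below
    `threshold` of its peak; throttle to a reduced cap when load nears
    the worst case (e.g. extreme weather)."""
    return curtailed_cap_mw if grid_load_frac >= threshold else normal_cap_mw

assert datacenter_power_cap(0.60, 500.0, 100.0) == 500.0  # typical day: full power
assert datacenter_power_cap(0.95, 500.0, 100.0) == 100.0  # extreme weather: curtail
```

Since Huang says the grid usually runs near 60% of peak, a contract like this would let data centers soak up headroom the grid already has, without new generation.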
Lex: What is hindering this?
Jensen Huang:
It's a three-party problem. End customers demand data centers be online 100% of the time. Data center operators' contract negotiators sign six-nines SLAs; the CEO might not even know. Then power companies only offer one level of commitment.
If all three parties adjust slightly... there is a massive amount of existing idle power in the grid that can be used. This is the most readily available resource.
Lex: What about the supply chain? ASML's EUV, TSMC's CoWoS packaging, SK Hynix's HBM—do these bottlenecks keep you up at night?
Jensen Huang: I am dealing with them constantly. I fly to every supplier and explain our growth logic to the CEOs. A few years ago, I convinced several DRAM company CEOs to invest in HBM when HBM was only used in a tiny fraction of supercomputers; it sounded crazy. I also convinced them to adapt low-power mobile memory for supercomputers.
They all set revenue records in their 45-year company histories. This is also part of my job: to shape and inspire the entire upstream and downstream ecosystem.
Lex: Are you worried?
Jensen Huang: No.
Lex: Why?
Jensen Huang:
Because I told them what I need, they understood, they told me what they would do, and I trust they will deliver.
Musk and Colossus
Lex: What is your take on Musk and xAI building the Colossus supercomputer in Memphis in just 4 months?
Jensen Huang:
Musk has deep involvement in many fields, and at the same time, he is an excellent systems thinker. He questions everything: First, is this necessary? Second, must it be done this way? Third, must it take this long?
He compresses everything down to the absolute minimum necessary while retaining all capabilities required for the product. Extreme minimalism, system-level minimalism.
And he goes to the site personally. If there's a problem, he goes. "Let me see where the problem is." When you personally demonstrate that sense of urgency, everyone else becomes urgent too.
Lex: I've seen those meetings of his. He squats on the ground with engineers, studying how to plug cables into racks...
Jensen Huang: Exactly. At NVIDIA, we have a similar methodology called "Speed of Light." This isn't just about speed; it's my shorthand for "where are the physical limits?"
We compare everything to the speed of light: memory speed, compute power, power consumption, cost, time, manpower, manufacturing cycles. First figure out the physical limits, then do the engineering.
I don't like the mindset of "continuous improvement." Don't tell me "it takes 74 days now, we can shrink it to 72 days." I'd rather start from zero and ask: "Based on first principles, what is the fastest this can be done?" The answer might be 6 days. The remaining 68 days might make sense due to various compromises and cost optimizations. But at least you know where the gap lies.
Knowing that 6 days is possible makes the conversation from 74 to 6 much more effective.
China is a Nation of Builders
Lex: You recently visited China. How has China built so many world-class tech companies over the past decade?
Jensen Huang:
About half of the world's AI researchers are Chinese, and most are still in China. Their tech industry came of age with the mobile-cloud era, and its contribution is software. The children of this country receive excellent math and science education and are deeply familiar with modern software.
China is not a single economy. Provincial and municipal leaders compete with each other, which is why there are so many EV companies and so many AI companies. The competition is extremely fierce; those who survive are remarkable.
Additionally, their social culture is family first, friends second, company third. Engineers' brothers are in that company, classmates in this one; knowledge spreads extremely fast.
They have essentially always been open source. So it makes complete sense that they contribute significantly to the open-source community, because they think: "What do we have to hide?"
This is the country innovating fastest in the world today.
Lex: And in China, being an engineer is quite cool.
Jensen Huang:
They are a nation of builders. Our country's leaders are quite outstanding, but most are lawyers. Their country was built out of poverty, so most of their leaders are outstanding engineers.
Open Source and Nemotron
Lex: NVIDIA released the open-source Nemotron 3 Super, a 12-billion parameter MoE model. What is your vision for open source?
Jensen Huang:
To be a good AI computing company, we must understand how AI models evolve.
Nemotron 3 has something special: it's not just a pure Transformer; it also integrates SSM (State Space Models). We were doing conditional GANs and progressive GANs very early on, step-by-step leading to diffusion models. Doing fundamental research on model architectures allows us to see clearly how future computing systems should be designed. This is also part of extreme co-design.
There are three reasons for open sourcing.
First, we indeed need world-class models as products, and these should be proprietary. But at the same time, we also want AI to diffuse into every industry, every country, every researcher, and every student. If everything is closed source, research and innovation become difficult to build upon.
Second, NVIDIA has the scale, the skills, and the motivation to keep doing this. We can activate every industry to join the AI revolution.
Third, AI is far more than just language. These AIs will use tools, call sub-models, and those sub-models might be trained on biology, chemistry, physics, fluid dynamics. We don't make cars, but we want to ensure every car company can use the best models. We don't do drug discovery, but I want to ensure Eli Lilly can own the world's best biological AI system.
Lex: And you are truly open source; weights, data, methods, everything is public.
Jensen Huang: Yes, the model is open source, weights are open source, data is open source, and even how it was made is open source. This point really should be known by more people.
TSMC: Trust
Lex: What is your relationship with TSMC like?
Jensen Huang:
The biggest misunderstanding about TSMC is thinking that all they have is technology, as if someone making an equally good transistor would finish them off.
What's truly remarkable is their manufacturing management system. They coordinate the dynamically changing needs of hundreds of companies globally: starting and stopping wafer runs, emergency add-ons, customers changing, capacity changing, the whole world changing, yet they always maintain high throughput, high yield, low cost, and on-time delivery.
Then there is the culture. They simultaneously achieve two things that are usually contradictory: cutting-edge technology and top-tier customer service.
Finally, there is trust. Thirty years, hundreds of billions of dollars in business, without a single contract. This trust is remarkable.
Lex: In 2013, Morris Chang invited you to be TSMC's CEO?
Jensen Huang:
It's true. I was deeply honored. Morris Chang is one of the most respected executives I've ever met and a good friend. But the work at NVIDIA is too important; I had already seen in my heart what NVIDIA would become. This is my responsibility, and only my responsibility. So I declined.
Not because the opportunity wasn't good enough. It was an incredible opportunity. But I couldn't accept it.
NVIDIA's Moat
Lex: What is NVIDIA's biggest moat?
Jensen Huang:
CUDA's installed base. This is our most important asset.
CUDA wasn't made successful by three people; it was made successful by 43,000 people, plus millions of developers who trust us. They built their software stacks on top of CUDA.
Think from a developer's perspective: if I support CUDA, tomorrow it will be 10x faster, averaging just a six-month wait. And my code can run on hundreds of millions of devices, in every cloud, every industry, every country. I 100% trust NVIDIA will maintain and improve CUDA forever.
Add all these up, and if I were a developer, I would be CUDA-first.
GPUs in Space
Lex: What do you think about the idea of building data centers in space?
Jensen Huang:
NVIDIA GPUs are actually already in space. I was quite surprised when I found out; I originally wanted to make a big announcement, maybe put a little astronaut suit on the GPU.
Those satellites have high-resolution imaging systems continuously scanning the Earth, achieving centimeter-level real-time remote sensing. The data volume is at the PB level; it can't all be transmitted back to Earth. So AI must process it on the edge locally, discarding what's unnecessary or unchanged, keeping only key information.
If placed in polar orbit, there is solar power 24 hours a day. But in space, there is no conduction, no convection; heat dissipation relies solely on radiation. Fortunately, space is big enough; just set up a few huge heat sinks.
Lex: How far is this idea from realization? 5 years? 10 years?
Jensen Huang:
I'm pragmatic. I go after the biggest immediate opportunities first, while sending engineers to study space issues: how to deal with radiation? How to handle performance degradation? How to achieve redundancy and fault tolerance? How to ensure computers don't break in space, just slow down?
But right now, what I most want to do is utilize that idle power in the grid first. That is the most readily available resource.
Token Factories
Lex: Could NVIDIA be worth $10 trillion?
Jensen Huang:
Let me explain why NVIDIA's growth is almost inevitable.
Computing has undergone a fundamental change. In the past, computers were essentially "warehouses"; we pre-recorded content, stored it as files, and used retrieval systems to find it. Now, AI computers are "factories"; they understand context in real-time and generate tokens in real-time.
Warehouses don't make much money; a factory's output is directly tied to revenue.
Moreover, the commodity this factory produces, tokens, is becoming tiered, just like the iPhone: there are free tokens, mid-tier tokens, and premium tokens. Someone is willing to pay $1,000 per million tokens; it's not a question of "if," but "when."
NVIDIA's challenge lies in imagination.
I have no one to steal share from. Almost all the growth we talk about comes from a market that doesn't exist yet. It is indeed not easy for the outside world to imagine. But I have time; I will continue to reason, continue to narrate, and every GTC will make it more real.
Gaming and DLSS 5
Lex: DLSS 5 has sparked some controversy. Players worry games will become AI slop. What's your take?
Jensen Huang:
Honestly, I don't like AI slop either. AI-generated content is increasing, looking more and more alike; it's all pretty, but lacks personality. I understand the players' feelings.
But what DLSS 5 does is different. It is 3D-guided and constrained by ground-truth data. Artists decide the geometry; we are 100% faithful to the geometric structure of every frame. It is constrained by textures and by the artist's intent.
Enhance, but do not change.
And because the system is open, you can train your own models; in the future, you might even control style via prompts: "I want cartoon rendering style," "I want this art style." All these are tools provided for artists; they can choose to use them or not.
Players might think we are forcibly applying post-processing after the game ships. But actually, DLSS is integrated with artists; it is an AI tool for creators.
Lex: What do you think is the greatest game of all time?
Jensen Huang: Doom.
In terms of cultural impact and industry significance, Doom transformed the PC from an office automation tool into a gaming device; the significance of this shift is huge.
From a gaming technology perspective, I would say Virtua Fighter.
Lex: I personally love The Elder Scrolls V: Skyrim; although it's an old game, people keep making mods...
Jensen Huang: We made the RTX Mod, a modding tool that allows the community to inject the latest rendering technologies into old games.
And don't forget, GeForce remains our number one marketing tool to this day. People get to know NVIDIA by playing games in their teens. Later, in college, they start using CUDA, then Blender, then Autodesk.
AGI Has Arrived?
Lex: Let me ask with a definition: an AI system capable of creating, developing, and operating a tech company valued over $1 billion. How far are we from this AGI?
Jensen Huang:
I believe it has already been achieved.
Lex: What?
Jensen Huang:
You said $1 billion, and didn't say it has to operate forever.
It is entirely possible that a Claude creates a web service, some small app, billions of people use it once, paying 50 cents each, and then it shuts down quickly. Most of those viral websites from the internet era weren't more complex than what OpenClaw can generate today.
Lex: Your words will excite many people.
Jensen Huang:
Go look at China; there are already a whole bunch of people teaching their Claws to find jobs, do work, and make money. I wouldn't be surprised if some digital influencer or some Tamagotchi-style app suddenly explodes for a few months and then disappears.
But having 100,000 agents try to build an NVIDIA... the success rate is zero.
There Will Be More Programmers
Lex: Do you think the number of programmers will increase or decrease?
Jensen Huang:
It will increase. The definition of programming has changed.
Programming today is writing specifications. How many people can tell a computer what they want built? I think we have just expanded from 30 million to 1 billion.
In the future, every carpenter will be a programmer. And a carpenter with AI is also an architect. The value they can provide to clients multiplies several times. Every accountant is also a financial analyst and wealth advisor. All professions are elevated.
Lex: What about the example of radiologists?
Jensen Huang:
The first job AI researchers said would disappear was radiologists. Computer vision indeed surpassed humans in 2019 and 2020. But the number of radiologists actually increased; there is still a global shortage.
Because you realize the "purpose" of a radiologist is to diagnose diseases and help patients, not the task of "looking at scans" itself. AI made looking at scans faster, so they can view more scans, diagnose more patients, hospitals make more money, and more radiologists are needed.
The purpose of your job and the tool you use to do that job are related, but they are not the same thing.
The same applies to NVIDIA software engineers. I want them to solve problems; I don't care how many lines of code they write.
Don't Be Afraid, Just Use It
Lex: Many people are anxious about their jobs.
Jensen Huang:
My way of handling anxiety, which I actually just talked about, is: decompose the problem, figure out what can be done, and once done, the anxiety is gone.
If I were hiring a new graduate today, between two candidates, one who doesn't know AI and one who is proficient in AI, I'd choose the latter. Carpenters, electricians, farmers, pharmacists—they should all go use AI and see how it can improve their work.
Lex: And you can directly ask AI: "I don't know how to use AI."
Jensen Huang:
Exactly, that's the most amazing thing about AI. You can't walk up to Excel and say "I don't know how to use Excel."
You're done.
But AI will say: "Okay, let me teach you."
Leadership and Suffering
Lex: You've said your success comes from enduring more hardship than anyone else. How do you cope with such immense pressure?
Jensen Huang:
I am fully aware of NVIDIA's importance to the United States. We contribute significant tax revenue, establish technological leadership; this concerns all aspects of national security. I also know there are many ordinary investors, teachers, police officers, who became millionaires because they bought NVIDIA stock.
My way of coping is decomposition. What happened? What changed? What's hard? What can I do?
Break big problems into small pieces, then solve them one by one, or assign them to people who can solve them. Whatever worries me, I tell someone who can do something about it. Once spoken, the burden is shared.
Then there is forgetting.
You have to learn to forget. You can't remember everything, carry everything. Decompose the problem, share the burden, and then forget it.
It's like top athletes. The last point is in the past; only care about the next point.
Lex: You've said if you had known how hard it would be, you wouldn't have founded NVIDIA, but...
Jensen Huang:
Anything worth doing should be like this.
There is a superpower called the "child mindset." My first reaction to almost everything is:
How hard can it be?
No one has done it, the scale is huge, it will cost hundreds of billions. You just think: "How hard can it be?"
You shouldn't simulate all the setbacks and humiliations in advance. You should walk into new experiences with the mindset that "everything will go smoothly." When setbacks do come, they will surprise you, but you need resilience, you need to forget, and you need to keep walking.
As long as my assumptions about the future haven't changed, my judgment on the output won't change. So just keep walking.
Intelligence and Humanity
Lex: Do you think there is anything in human consciousness that chips can never replicate?
Jensen Huang:
I'm not sure if chips get nervous.
AI can recognize and understand emotions, but my chips don't "feel" those things. And those feelings—anxiety, excitement, fear—profoundly affect human performance. Two people receiving exactly the same information may produce completely different results, not because of a different algorithm, but purely because they "feel different."
Lex: Then how do you view the word intelligence?
Jensen Huang:
The word intelligence is elevated too high.
Of the 60 people around me, every single one is smarter than me in their respective fields; they have higher degrees, better schools, deeper research. But I sit among them, coordinating everyone. You have to ask yourself: why does a dishwasher... deserve to sit among a group of super-human experts?
Intelligence is functional. Humanity is not. Humanity is a much bigger word.
Our society packs too many things into the single word "intelligence." But a person's life is far more than one word. My experience shows that being lower on the intelligence curve than everyone around you does not prevent you from being the most successful one.
Don't let the democratization and commoditization of intelligence bring you anxiety. You should be inspired.
Dying on the Job
Lex: Have you thought about your own death?
Jensen Huang:
I really don't want to die. I have a great family, a great life, and extremely important work. This isn't a "once in a lifetime" experience, because that implies many people have experienced it. This is a "once in human history" experience.
I don't believe in succession planning. Not because I think I'm immortal. If you are truly worried about succession, what should you do now? The answer is: transmit knowledge every single moment.
Anything I learn doesn't stay on my desk for more than a second. Before I've even fully digested it, I'm already pointing it out to someone else.
The ending I hope for is to die on the job, preferably instantly, without long-term suffering.
Lex: What gives you hope?
Jensen Huang:
I have always had immense confidence in human kindness, generosity, and empathy. Sometimes more than warranted, occasionally taken advantage of, but this has never changed me.
Expecting the end of disease is reasonable. Expecting a significant reduction in pollution is reasonable. Expecting travel at the speed of light is also reasonable.
Not long-distance, but short-distance. How to achieve it? I will soon send a humanoid robot onto a spaceship. It will continuously improve during the flight. When the time is right, my consciousness, everything in my inbox, everything I've said, everything I've done, will have been uploaded to the internet. Then it will be sent at the speed of light to catch up with my robot.
Lex: This is brilliant. How long until we understand biological machines?
Jensen Huang: It's right around the corner. About five years.
Lex: And then the human brain, theoretical physics...
Jensen Huang:
Explaining consciousness. That would be too cool.
Lex Fridman Podcast #494 Full Version:
Official Full Transcript: https://lexfridman.com/jensen-huang-transcript
NVIDIA Official Website: https://nvidia.com