Nvidia CEO Jensen Huang weighs in on the metaverse, blockchain, and chip shortage

Conversations with Nvidia CEO Jensen Huang are always blunt and illuminating because he still likes to have freewheeling chats with the press. During the recent online-only Computex event, he held a briefing with the press where he talked about the company’s recent announcements and then took a lot of questions.

I asked him about the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. And he gave a detailed answer. Huang addressed a wide range of issues. He talked about Nvidia’s pending bid to buy Arm for $40 billion, as well as Nvidia’s effort to create Grace, an Arm-based CPU.

He also addressed progress on Nvidia’s own Omniverse, dubbed a “metaverse for engineers.” Huang talked about Nvidia’s presence in the Chinese market, the company’s efforts to discourage miners from buying all of its GPUs, Nvidia’s data processing units (DPUs), the future of Moore’s Law and the prospect of building fabs, competition from Advanced Micro Devices in graphics processing units (GPUs), and Nvidia’s reaction to the global semiconductor shortage.

I was part of a group of journalists who quizzed Huang. Here’s an edited transcript of the group interview.

Above: Nvidia GeForce RTX 3080 Ti is its new card.

Image Credit: GamesBeat

Jensen Huang: Today I’m coming to you from Nvidia’s new building, called Voyager. This is our new facility. It was started about two-and-a-half years ago. For the last year-and-a-half, I’ve not seen it. Today, literally, for our event, is my first day on campus. It’s beautiful here. This facility is going to be the home of 3,500 Nvidians. It’s designed as a city inside a building. If you look behind me, it’s a sprawling city, and it’s a very large open space. It’s largely naturally lit. In fact, right now, as we speak, there’s a light in front of me, but everything behind us is barely lit. That’s because there are all these panels in the sky that let light in.

We simulated this entire building using raytracing on our supercomputer DGX. The reason we did that is so we can balance the amount of light that comes in and the amount of energy, or otherwise heat, that we have to remove with air conditioning. The more light you bring in, the more AC you have to use. The less light you bring in, the more lighting you have to use. We have to simulate that fine balance.
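
To make that tradeoff concrete, here is a deliberately toy sketch of the kind of design sweep involved: it balances artificial lighting against cooling load across a hypothetical skylight fraction. Every coefficient and the shape of the curves are assumptions for illustration only; Nvidia’s actual simulation uses full raytraced light transport on DGX systems, not a formula like this.

```python
# Toy daylight-vs-cooling tradeoff. All numbers are assumed for illustration;
# this is not Nvidia's building simulation, which uses raytraced light transport.
import math

def annual_energy_kwh(skylight_fraction: float) -> float:
    """Total lighting plus cooling energy for a candidate skylight fraction (0..1)."""
    BASE_LIGHTING_KWH = 500_000.0    # artificial lighting needed with no daylight (assumed)
    PEAK_SOLAR_GAIN_KWH = 400_000.0  # cooling load if the roof were fully glazed (assumed)
    lighting = BASE_LIGHTING_KWH * math.exp(-5.0 * skylight_fraction)  # daylight displaces lighting
    cooling = PEAK_SOLAR_GAIN_KWH * skylight_fraction                  # more glazing, more heat to remove
    return lighting + cooling

# Sweep candidate designs and pick the one with the lowest total energy.
candidates = [i / 100 for i in range(0, 101, 5)]
best = min(candidates, key=annual_energy_kwh)
print(f"best skylight fraction: {best:.2f}, energy: {annual_energy_kwh(best):,.0f} kWh/yr")
```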

The roof of this building is angled in just the right way such that the morning sun doesn’t come straight in, and the afternoon sun doesn’t come straight in. The slope of the roof line, the slope of the windows along the side, you’ll see everything was designed in such a way as to balance between natural light, which is comfortable for the eyes, and not having to use as much air conditioning as otherwise necessary. At the moment, no AC at all. This is the first day we’ve been in here. It’s incredibly comfortable.

Using a supercomputer to simulate architecture, I think this is going to happen for all buildings in the future. You’re going to design a building completely in virtual reality. The building is also designed to accommodate many robots. You’ll notice the hallways are very wide. In the future we imagine robots roaming the hallways carrying things to people, but also for telepresence, virtual presence. You can upload yourself into a robot and sit at your desk in your VR or AR headset and roam around the campus.

You’re the first in the world to be here. Welcome all of you, and I thank you for joining me today. I also want to send my thoughts and recognize that in Taiwan, COVID cases are growing again. I’m very sorry about that. I hope all of you are safe. I know that Taiwan was so rigorous in keeping the infection rates down, and so I’m terribly sorry to see it go up now. I know they can get it under control, and soon all of us will be able to see each other in person.

GeForce ecosystem

Let me say a couple of words about the announcement. We announced two basic things. The first is GeForce gaming. Taiwan is the central hub, where our add-in card partners and many of our leading laptop partners are based, and the home, the epicenter if you will, of the GeForce ecosystem. It all starts there. It’s manufactured and assembled and integrated, and it goes to the market through our add-in card partners and laptop builders.

Above: Nvidia’s RTX is used in more than 130 games.

Image Credit: Nvidia

The GeForce business is doing incredibly well. The invention of RTX has been a home run. It has reset and redefined computer graphics, completely reinvented modern computer graphics. It’s a journey that started more than 10 years ago, and a dream that started 35 years ago. It took that long for us to invent the possibility of doing realtime raytracing, which is really hard to do. It wasn’t until we were able to fuse our hardware accelerated raytracing core with the Tensor core GPU, AI processing, and a bunch of new rendering algorithms, that we were able to bring realtime raytracing to reality. RTX has reinvented computer graphics in the marketplace. RTX 30, the 30 family, the Ampere architecture family, has been fantastic.

We announced several things. We announced that we upgraded the RTX 30 family with the 3080Ti and the 3070Ti. It’s our regularly planned once per year upgrade to our high end GPUs. We also, with the partnership with all of our laptop partners, our AICs, launched 140 different laptops. Our laptop business is one of the fastest growing businesses in our company. This year we have twice as many notebooks going into the marketplace as we did with Turing, our last generation, RTX 20. This is one of the fastest growing businesses. The laptop business is the fastest growing segment of PCs. Nvidia laptops are growing at seven times the rate of the overall laptop business. It gives a sense of how fast RTX laptops are growing.

If you think about RTX laptops as a game console, it’s the largest game console in the world. There are more RTX laptops shipped each year than game consoles. If you were to compare the performance of a game console to an RTX, even an RTX 3060 would be 30-50 percent faster than a PlayStation 5. We have a game console, literally, in this little thin notebook, which is one of the reasons it’s selling so well. The same laptop also brings with it all of the software stacks and rendering stacks necessary for design applications, like Adobe and AutoDesk and all of these wonderful design and creative tools. The RTX laptop, RTX 3080Ti, RTX 3070Ti, and a whole bunch of new games, that was one major announcement.

Nvidia in the enterprise

The second thrust is enterprise, data centers. As you know, AI is software that can write software. Using machines you can write software that no human possibly can. It can learn from an enormous amount of data using an algorithm in an approach called deep learning. Deep learning isn’t just one algorithm. Deep learning is a whole bunch of algorithms. Some for image recognition, some for recognizing 2D to 3D, some for recognizing sequences, some for reinforcement learning in robotics. There’s a whole bunch of different algorithms that are associated with deep learning. But there’s no question that we can now write software that we’ve not been able to write before. We can automate a bunch of things that we never thought would be possible in our generation.

One of the most important things is natural language understanding. It’s now so good that you can summarize an entire chapter of a book, or the whole book. Pretty soon you can summarize a movie. Watch the movie, listen to the words, and summarize it in a wonderful way. You can have questions and answers with an NLU model.
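
As a rough illustration of the kind of capability Huang is describing, a pre-trained language model can summarize text and answer questions about it in a few lines. This sketch uses the open-source Hugging Face Transformers library as a stand-in; it is an assumed example, not the specific NLU stack Nvidia ships.

```python
# Illustrative sketch using the open-source Hugging Face Transformers library
# (an assumed stand-in, not the specific NLU stack Huang describes).
from transformers import pipeline

chapter = open("chapter.txt").read()  # hypothetical input text

# Summarization with a pre-trained sequence-to-sequence model.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(chapter[:3000], max_length=120, min_length=40)[0]["summary_text"]
print(summary)

# Question answering over the same text with a pre-trained QA model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
answer = qa(question="Who is the main character?", context=chapter[:3000])
print(answer["answer"])
```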

AI has made tremendous breakthroughs, but it has largely been used by the internet companies, the cloud service providers and internet services. What we announced at GTC a few weeks ago, and then again at Computex, is a brand new platform for AI in the enterprise: Nvidia-Certified systems running a software stack we call Nvidia AI Enterprise. The software stack makes it possible to achieve world-class capabilities in AI with a bunch of tools and pre-trained AI models. A pre-trained AI model is like a new college grad. They got a bunch of education. They’re trained. But you have to adapt them to your job, your profession, your industry. They’re pre-trained and really smart. They’re smart at image recognition, at language understanding, and so on.

We have this Nvidia AI Enterprise that sits on top of a body of work that we collaborated on with VMware. That sits on top of Nvidia Certified servers from the world’s leading computer makers, many of them in Taiwan, all over the world, and these are high-volume servers that incorporate our Ampere generation data center GPUs and our Mellanox BlueField DPUs. This whole stack gives you a cloud native–it’s like having an AI cloud, but it’s in your company. It comes with a bunch of tools and capabilities for you to be able to adapt it.

How would you use it? Health care would use it for image recognition in radiology, for example. Retail will use it for automatic checkout. Warehouses and logistics would use it for moving products and tracking inventory automatically. Cities would use it to monitor traffic. Airports would use it so that if someone lost their baggage, it could instantly be found. There are all kinds of applications for AI in enterprises. I expect enterprise AI, what some people call the industrial edge, will be the largest opportunity of all. It’ll be the largest AI opportunity.

With the overall trend, what all of these announcements show is that Nvidia accelerated computing is gaining momentum. Our company grew a lot last year, as many of you know. This last quarter we had a record quarter across all our product lines. We expect the next quarter to be another great quarter, and the second half to show great growth as well. It’s very clear that the world of computing is changing, that accelerated computing is making a contribution, and one of the most important applications is AI.

The metaverse

Above: BMW Group is using Nvidia’s Omniverse to build a digital factory that will mirror a real-world place.

Image Credit: Nvidia

Question: I wonder about your latest thoughts on the metaverse and how we’re making progress toward that. Do you see steps happening in the process of creating the metaverse?

Huang: You’ve been talking about the metaverse for some time, and you’ve had interest in this area for a long time. I believe we’re right on the cusp of it. The metaverse, as you know, for all of you who are learning about it and hearing about it, it’s a virtual world that connects to the world that we live in. It’s a virtual world that is shared by a lot of people. It has real design. It has a real economy. You have a real avatar. That avatar belongs to you and is you. It could be a photoreal avatar of you, or a character.

In these metaverses, you’ll spend time with your friends. You’ll communicate, for example. We could be, in the future, in a metaverse right now. It will be a communications metaverse. It won’t be flat. It’ll be 3D. We’ll be able to almost feel like we’re there with each other. It’s how we do time travel. It’s how we travel to far places at the speed of light. It could simulate the future. There will be many types of metaverses, and video games are one of them, for example. Fortnite will eventually evolve into a form of metaverse, or some derivative of it. World of Warcraft, you can imagine, will someday evolve into a form of metaverse. There will be video game versions.

There will be AR versions, where the art that you have is digital art. You own it using an NFT. You’ll display that beautiful art, that’s one of a kind, and it’s completely digital. You’ll have your glasses on or your phone. You can see that it’s sitting right there, perfectly lit, and it belongs to you. We’ll see this overlay, a metaverse overlay if you will, onto our physical world.

In the world of industry, the example I was giving earlier, this building exists fully in virtual reality. We designed it completely digitally. We’re going to build it out so that there will be a digital twin of this very physical building in VR. We’ll be able to simulate everything and train our robots in it. We can simulate how best to distribute the air conditioning to reduce the energy consumption, or design certain shapeshifting mechanisms that block sunlight while letting in as much light as possible. We can simulate all of that in our digital twin, our building metaverse, before we deploy anything here in the physical world. We’ll be able to go in and out of it using VR and AR.

Those are all pieces that have to come together. One of the most important technologies that we have to build, for several of them–in the case of consumers, one of the important technologies is AR, and it’s coming along. AR is important. VR is becoming more accessible and easier to use. It’s coming along. In the case of the industrial metaverse, one of the most important technologies is physically based, physically simulated VR environments. An object that you design in the metaverse, if you drop it to the ground, it’ll fall to the ground, because it obeys the laws of physics. The lighting condition will be exactly as we see. Materials will be simulated physically.

These things are essential components of it, and that’s the reason why we invented the Nvidia Omniverse. If you haven’t had a chance to look at it, it’s so important. It’s one of our most important bodies of work. It combines almost everything that Nvidia has ever built. Omniverse is now in open beta. It’s being tested by 400 companies around the world. It’s used at BMW to create a digital factory. It’s used by WPP, the world’s largest advertising agency. It’s used by large simulation architects. Bentley, the world’s largest designer of large infrastructure, they just announced that they’ll use Omniverse to create digital twins. Omniverse is very important work, and it’s worth taking a look at.

Chinese market

Above: Nvidia GeForce RTX 3080 Ti graphics card.

Image Credit: Nvidia

Question: You mentioned the opportunities ahead of Nvidia. The recent trend in China is that a lot of GPU startups have emerged in the last one or two years. They’ve received billions in funding from VCs. China has a lot of reasons to develop its own Nvidia in the next few years. Are you concerned that your Chinese customers are hoping to develop a rival to you in this market?

Huang: We’ve had competition, intense competition, from companies that are gigantic, since the founding of our company. What we need to do is make sure we continue to run very fast. Our company is able to invest, over a couple of years, which is one generation, $10 billion to do one thing, after investing in it for 30 years. We have a great deal of expertise and scale. We have the ability to invest greatly. We care deeply about this marketplace. We’re going to continue to run very fast. Our company’s position, of course, is not certain. We have to respect all of the competition, take them seriously, and recognize that there are many places where you could contribute to AI. We just have to keep on running hard.

However, here’s my prediction. Every data center and every server will be accelerated. The GPU is the ideal accelerator for these general purpose applications. There will be hundreds of millions of data centers. Not just 100 data centers or 1,000 data centers, but 100 million. The data centers will be in retail stores, in 5G base stations, in warehouses, in schools and banks and airports. They’ll be everywhere. Street corners. They will all be data centers. The market opportunity is quite large. This is the largest market opportunity the IT industry has ever seen. I can understand why it inspires so many competitors. We just need to continue to do our best work and run as fast as we can.

Question: Are you also worried about the government interfering in this space?

Huang: I believe that we add value to the marketplace. Nvidia’s position in China, and our contribution to China, is good. It has helped the internet companies, helped many startups, helped researchers developing AI. It’s wonderful for the gaming business and the design business. We make a lot of contributions to the IT ecosystem in China. I think the government recognizes that. My sense is that we’re welcome in China and we’ll continue to work hard to deserve to be welcome in China, and every other country for that matter. We’ll do that.

China’s game makers

Above: Nvidia’s GeForce RTX 3050 will power new laptops.

Image Credit: Nvidia

Question: We’ve seen a few keynotes about games, and we’ve seen more and more Chinese games, games developed by Chinese companies. How do you regard Chinese developers? What does Nvidia plan to do to support the Chinese gaming ecosystem?

Huang: We do several things that developers love. The first thing is that our installed base is very big. If you’re a developer and you develop on Nvidia’s platform, all of our platforms, all of our GeForce GPUs, are compatible. We work so hard to make sure that all of the software is high quality. We maintain and continue to update the software, to keep tuning every single GPU for every game. Every GPU, every game, we’re constantly tuning. We have a large group of engineers constantly studying and looking for ways to improve. We use our platform called GeForce Experience to update the software for the gamer.

So the first thing is that our installed base is very large. Our software quality is very good. But very importantly, one of the things that content developers, game developers, love is our expertise in computer graphics, working with them to bring beautiful graphics to their games. We’ve invented so many algorithms. We invented programmable shading, as you know. Almost 20 years ago, we invented the programmable pixel and vertex shaders in the GPU. We invented RTX. We teach people how to use programmable shading to create special effects, how to use RTX to create raytracing and ambient occlusion and global illumination, really beautiful computer graphics. We have a lot of expertise and a lot of technology that we can use, working with game developers to incorporate that into their games so that they’re as beautiful as possible.

When it’s done, we have fantastic marketing. We have such a large reach, we can help the developers promote their games all over the world. Many of the Chinese developers would like to reach the rest of the world, because their games are now triple-A quality, and they should be able to go all over the world. There are several reasons why game developers enjoy working with us, and those are the reasons.

Nvidia’s Grace Arm CPU

Above: Nvidia’s Grace CPU for datacenters is named after Grace Hopper.

Image Credit: Nvidia

Question: At GTC you announced Grace, which seems like a big project. An ARM CPU is hard to implement. Do you think ARM can overtake the x86 processor in the server market in the future?

Huang: First of all, I think the future world is very diversified. It will be x86. It will be ARM. It will be big CPUs, small CPUs, edge CPUs, data center CPUs, supercomputing CPUs, enterprise computing CPUs, lots of CPUs. I think the world is very diversified. There is no one answer.

Our strategy is one where we’ll continue to support the x86 CPUs in the markets we serve. We don’t serve every market. We serve high-performance computing. We serve AI. We serve computer graphics. We serve the markets that we serve. For the markets that we serve, not every CPU is perfect, but some CPUs are quite ideal. Depending on the market, and depending on the application, the computing requirements, we will use the right CPU.

Sometimes the right CPU is Intel x86. For example, we have 140 laptops. The vast majority of them are Intel CPUs. We have DGX systems. We need a lot of PCI Express. It was great to use the AMD CPU. In the case of 5G base stations, Marvell’s CPU is ideal. They’re based on ARM. Cloud hyperscale, Ampere Computing’s Altra CPU is excellent. Graviton 2 is excellent. It’s fantastic. We support those. In Japan, Fujitsu’s CPU is incredible for supercomputing. We’ll support that. Different types of CPUs are designed for different applications.

The CPU we designed is unlike any CPU that has been designed before. No CPU has ever been able to achieve the level of memory bandwidth and memory capacity that we have designed for. It is designed for big data analytics. It’s designed for the state of the art in AI. There are two primary AI models that we are very interested in advancing, because they’re so important. The first one is the recommender system. It’s the most valuable piece of software, approach to software, that the world has ever known. It drives all the internet companies, all the internet services. The recommender system is very important, incredibly important science. Grace is designed for that. The second is natural language understanding, which requires a lot of memory, a lot of data, to train a very smart AI for conversational AI, answering questions, making recommendations, and so on.

These two models are probably, in my estimation, the most valuable software in the world today. They require a very large machine. We decided that we would design something just for those types of applications, where big AI is necessary. Meanwhile, there are so many different markets and edges and enterprises and this and that. We’ll support the CPUs that are right for them. I believe the future is about diversity. I believe the future is about variability and customization and those kinds of things. ARM is a great strategy for us, and x86 will remain a great strategy for us.
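
For context on why the recommender workload is so memory-hungry: at its core it is a lookup into enormous user and item embedding tables followed by a dot product, and those tables are what demand the memory capacity and bandwidth Huang describes. Below is a minimal, purely illustrative PyTorch sketch of the idea, not Nvidia’s or anyone’s production system.

```python
# Minimal matrix-factorization-style recommender, purely illustrative.
# In production systems the embedding tables can run to terabytes, which is
# why memory capacity and bandwidth dominate, as Huang describes.
import torch

NUM_USERS, NUM_ITEMS, DIM = 10_000, 50_000, 64  # toy sizes, assumed

user_emb = torch.nn.Embedding(NUM_USERS, DIM)
item_emb = torch.nn.Embedding(NUM_ITEMS, DIM)

def score(user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
    """Predicted affinity = dot product of user and item embeddings."""
    return (user_emb(user_ids) * item_emb(item_ids)).sum(dim=-1)

def recommend(user_id: int, top_k: int = 5) -> torch.Tensor:
    """Score one user against every item and return the top-k item ids."""
    u = user_emb(torch.tensor([user_id]))        # (1, DIM)
    all_scores = item_emb.weight @ u.squeeze(0)  # (NUM_ITEMS,)
    return torch.topk(all_scores, top_k).indices

print(recommend(user_id=42))
```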

Arm deal

Above: Simon Segars is CEO of Arm.

Image Credit: Arm

Question: You recently had the earnings call where you talked a bit about the ARM deal, and Simon Segars’ keynote mentioned it as well, saying that he’s looking forward to the deal, combining their ecosystem with all the AI capabilities of Nvidia. Is there any update about the next steps for you guys?

Huang: We’re going through the regulatory approval. It takes about 18 months. The process typically goes U.S., then the EC, and then China last. That’s the typical journey. Mellanox took about 18 months, or close to it. I expect this one to take about 18 months. That makes it early next year, or late this year.

I’m confident about the transaction. The regulators are asking: Is this good for competition? Is it pro-competitive? Does it bring innovation to the market? Does it give customers more offerings and more choice? You can see that on first principles, because our companies are completely complementary–they build CPUs, we build GPUs and DPUs. They don’t build GPUs. Our companies are complementary, and so by nature we’ll bring innovations that come as a result of coming together, offering complementary things. It’s like ketchup and mustard coming together. It’s good for innovation.

Question: You mentioned that the acquisition will increase competition. Can you explain which areas you see for future competition? We see that AMD and also other players are starting to compete in GPUs, CPUs, and data centers.

Huang: First of all, it’s pro-competitive because it brings customers more choice. If we combine Nvidia and ARM, ARM’s R&D scale will be much larger. As you know, ARM is a big company. It’s not a small company. But Nvidia is much bigger. Our R&D budget is many times larger than ARM’s. Our combination will give them more R&D scale. It will give them technology that they don’t have the ability to build themselves, or the scale to build themselves, like all of the AI expertise that we have. We can bring those capabilities to ARM and to its market.

As a result of that, we will offer ARM customers more technology choice, better technology, more advanced technology. That ultimately is great for competition, because it allows ARM’s licensees to create even better products, more vibrant products, better leading-edge technology, which in the end market will give the end market more choice. That’s ultimately the fundamental reason for competition. It’s customer choice. More vibrant innovation, more R&D scale, more R&D expertise brings customers more choice. That, I think, is at the core of it.

For us, it brings us a very large ecosystem of developers, which matters to Nvidia because we’re an accelerated computing company; developers drive our business. And so with 15 million more developers — we have more than 30 million developers today — those 15 million developers will develop new software that ultimately will create value for our company. Our technology, through their channel, creates value for their company. The combination is a win-win.

Semiconductor shortage

Above: Jensen Huang of Nvidia stands in a virtual environment.

Image Credit: Nvidia

Question: I’m interested in your personal thoughts on the–we’ve had all the supply chain constraints on one hand, and then on the other hand a demand surplus when it comes to the crypto world. What’s your feeling? Is it like you’re making Ferraris and people are just parking them in the garage revving the engine for the sake of revving it? Do you see an end to proof of work blockchain in the future that might help resolve that issue? What are your thoughts on the push-pull in that space?

Huang: The reason why Ethereum chose our GPUs is because it’s the largest network of distributed supercomputers in the world. It’s programmable. When Bitcoin first came out, it used our GPU. When Ethereum came out it used our GPU. When other cryptocurrencies came out in the beginning, they established their credibility and their viability and integrity with proof of work using algorithms that run on our GPUs. It’s ideal. It’s the most energy efficient method, the most performant method, the fastest method, and has the benefit of very large distributed networks. That’s the origins of it.

Am I excited about proof of stake? The answer’s yes. I believe that the demand for Ethereum has reached such a high level that it would be nice for either somebody to come up with an ASIC that does it, or for there to be another method. Ethereum has established itself. It has the opportunity now to implement a second generation that carries on from the platform approach and all of the services that are built on top of it. It’s legitimate. It’s established. There’s a lot of credibility. It works well. A lot of people depend on it for DeFi and other things. This is a great time for proof of stake to come.

Now, as we go toward that transition, it’s now established that Ethereum is going to be quite valuable. There’s a future where the processing of these transactions can be a lot faster, and because there are so many people built on top of it now, Ethereum is going to be valuable. In the meantime there will be a lot of coins mined. That’s why we created this new product called CMP. CMP is right here. It looks like this. This is what a CMP looks like. It has no display connectors, as you can probably see.

The CMP is something we learned from the last generation. What we learned is that, first of all–CMP does not yield to GeForce. It’s not a GeForce put into a different box. It does not yield to our data center. It does not yield to our workstations. It doesn’t yield to any of our product lines. It has enough functionality that you can use it for crypto mining.

The $150 million we sold last quarter and the $400 million we’re projecting to sell this quarter essentially increased our company’s supply by half a billion dollars. That was supply we otherwise couldn’t use, and it let us divert good-yielding supply to GeForce gamers, to workstations and such. The first thing is that CMP effectively increases our supply. CMP also has the added benefit of not being able to be resold secondhand to GeForce customers, because it doesn’t play games. These are things we learned from the last cycle, and hopefully we can take some pressure off of the GeForce gaming side, getting more GeForce supply to gamers.

Above: Perlmutter, the largest NVIDIA A100-powered system in the world.

Image Credit: Nvidia

Question: There’s a shortage problem in the semiconductor market as a whole. The price of GPU products is getting higher. What do you think it will take to stabilize that price?

Huang: Our situation is very different than other people’s situations, as you can imagine. Nvidia doesn’t make commodity components. We’re not in the DRAM business or the flash business or the CPU business. Our products are not commodity-oriented. It’s very specific, for specific applications. In the case of GeForce, for example, we haven’t raised our price. Our price is basically the same. We have an MSRP. The channel end market prices are higher because demand is so strong.

Our strategy is to alleviate, to reduce the high demand that is caused by crypto mining, and create a special product, the CMP, directly for the crypto miners. If the crypto miners can buy, directly from us, a large volume of GPUs, and they don’t yield to GeForce, so they cannot be used for GeForce, but they can be used for crypto mining, it will discourage them from buying from the open market.

The second reason is we introduced new GeForce configurations that reduce the hash rate for crypto mining. We reduced the performance of our GPU on purpose so that if you would like to buy a GPU for gaming, you can. If you’d like to buy a GPU for crypto mining, either you can buy the CMP version, or if you really would like to use the GeForce to do it, unfortunately the performance will be reduced. This allows us to save our GPUs for the gamers, and hopefully, as a result, the pricing will slowly come down.

In terms of supply, it’s the case that the world’s technology industry has reshaped itself. As you know, cloud computing is growing very fast. In the cloud, the data centers are so big. The chips can be very powerful. That’s why die size, chip size continues to grow. The amount of leading-edge process it consumes is growing. Also, smartphones are using state of the art technology. The leading-edge process consumption used to see some distribution, but now the distribution is heavily skewed toward the leading edge. Technology is moving faster and faster.

The shape of the semiconductor industry changed because of these dynamics. In our case, we have demand that exceeds our supply. That’s for sure. However, as you saw from our last quarter’s performance, we have enough supply to grow significantly year over year. We have enough supply to grow in Q2 as we guided. We have enough supply to grow in the second half. However, I do wish we had more supply. We have enough supply to grow and grow very nicely. We’re very thankful for all of our supply chain and our partners supporting us. But the world is going to be reshaped because of cloud computing, because of the way that computing is going.

Question: When do you think the ongoing chip shortage problem could be solved?

Huang: It just depends on degree and for whom. As you know, we grew tremendously year over year. We announced a great quarter last year. Record quarter for GeForce, for workstations, for data centers. Although demand was even higher than that, we had enough supply to grow quite nicely year over year. We’ll grow in Q2. We’ll grow in the second half. We have supply to do that.

However, there are several dynamics that I think are foundational to our growth. RTX has reset computer graphics. Everyone who has a GTX is looking to upgrade to RTX. RTX is going to reset workstation graphics. There are 45 million designers and creators in the world, and growing. They used to use GTX, but now obviously everyone wants to move to RTX so they can do raytracing in real time. We have this pent-up demand because we reset and reinvented computer graphics. That’s going to drive our demand for some time. It will be several years of pent-up demand that needs to re-upgrade.

In the data center it’s because of AI, because of accelerated computing. You need it for AI and deep learning. We now add to it what I believe will be the long term biggest AI market, which is enterprise industries. Health care is going to be large. Manufacturing, transportation. These are the largest industries in the world. Even agriculture. Retail. Warehouses and logistics. These are giant industries, and they will all be based on AI to achieve productivity and capabilities for their customers.

Now we have that new platform that we just announced at Computex. We have many years of very exciting growth ahead of us. We’ll just keep working with our supply chain to inform them about the changing world of IT, so that they can be better prepared for the demand that’s coming in the future. But I believe that the areas that we’re in, the markets that we’re in, because we have very specific reasons, will have rich demand for some time to come.

AMD competition

Above: AI algorithms were developed on NVIDIA DGX servers at a U.S. Postal Service Engineering facility.

Image Credit: Nvidia

Question: I see that AMD just announced bringing their RDNA 2 to ARM-based SOCs, collaborating with Samsung to bring raytracing and VR features to Android-based devices. Will there be some further plan from Nvidia to bring RTX technology to consumer devices with ARM-based CPUs?

Huang: Maybe. You know that we build lots of ARM SOCs. We build ARM SOCs for robotics, for the Nintendo Switch, for our self-driving cars. We’re very good at building ARM SOCs. The ARM consumer market, I believe, especially for PCs and raytracing games–raytracing games are quite large, to be honest. The data set is quite large. There will be a time for it. When the time is right we might consider it. But in the meantime we use our SOCs for autonomous vehicles, autonomous machines, robots, and for Android devices we bring the best games using GeForce Now.

As you know, GeForce Now has more than 10 million gamers on it now. It’s in 70 countries. We’re about to bring it to the southern hemisphere. I’m excited about that. It has 1,000 games, 300 publishers, and it streams in Taiwan. I hope you’re using it in Taiwan. That’s how we’d like to reach Android devices, Chrome devices, iOS devices, MacOS devices, Linux devices, all kinds of devices, whether it’s on TV or a mobile device. For us, right now, that’s the best strategy.

Moore’s Law and die size

Above: Jensen Huang of Nvidia holds the world’s largest graphics card.

Image Credit: Nvidia

Question: I wanted to ask you about die size. Obviously with Moore’s Law, it seems we have the choice of using Moore’s Law to either shrink the die size or pack more transistors in. In the next few generations, the next three years or so, do you see die sizes shrinking, or do you think they’ll stay stable, or even rise again?

Huang: Since the beginning of time, transistor time, die sizes have grown and grown. There’s no question die sizes are increasing. Because technology cycles are increasing in pace, new products are being introduced every year. There’s no time to cost-reduce into smaller die sizes. If you look at the trend, it’s unquestionably up and to the right. If you look at the application space that we see, talking very specifically about us, if you look at our die sizes, we’re always at the reticle limits now. The reticle limits are pretty spectacular. We can’t fit another transistor. That’s why we have to use multi-chip packaging, of course. We created NVLink to put a bunch of them together. There are all kinds of strategies to increase the effective die size.

One of the important things is that cloud data centers–so much of the computing experience you have on your phone is because of computers in the cloud. The cloud is a much bigger place. The data centers are larger. The electricity is more abundant. The cooling system is better. The die size can be very large. Die size is going to continue to grow, even as transistors continue to shrink.

Building fabs?

Question: It’s expensive to spin up fabs, but in light of the prolonged silicon crunch, is that on the horizon for Nvidia to consider, spinning up a fab for yourself?

Huang: No. Boy, that’s the shortest answer I’ve had all night. It’s the only answer I know, completely. The reason for that, you know there’s a difference between a kitchen and a restaurant. There’s a difference between a fab and a foundry. I can spin up a fab, no doubt, just like I can spin up a kitchen, but it won’t be a good restaurant. You can spin up a fab, but it won’t be a good foundry.

A foundry is a service-oriented business that combines service, agility, technology, capacity, courage, intuition about the future. It’s a lot of stuff. The business is not easy. What TSMC does for a living is not easy. It’s not going to get any easier, and it’s not getting easier. It’s getting harder. There are so many people who are so good at what they do. There’s no reason for us to go repeating that. We should encourage them to develop the necessary capacity for our platform’s benefit.

Meanwhile, they now realize that the leading-edge consumption, leading-edge wafer consumption, the shape has changed because of the way the computing industry is evolving. They see the opportunity in front of them. They’re racing as fast as they can to increase capacity. I don’t think there’s anything I can do, that a fabless semiconductor company can do, that can possibly catch up to any of them. So the answer is no.

Lightspeed Studio

Above: Nvidia’s Clara AI for COVID-19 diagnosis from CT scans

Image Credit: Nvidia

Question: I wanted to ask a process question about Lightspeed Studio. Nvidia, a couple of years ago, spun up an internal development house to work on remastering older titles to help promote raytracing and the expansion of raytracing, but it’s been a couple of years since we heard about that studio. Do you have any updates about their future pipeline?

Huang: I love that question. Thank you for that. Lightspeed Studio is an Nvidia studio where we work on remastering classics, or we develop demo art that is really ground-breaking. The Lightspeed Studio guys did RTX Quake, of course. They did RTX Minecraft. If not for Lightspeed Studio, Minecraft RTX wouldn’t have happened. Recently they created Marbles, Marbles RTX, which has been downloaded and re-crafted into a whole bunch of marble games. They’ve been working on Omniverse. Lightspeed Studio has been working on Omniverse and the technologies associated with that, creating demos for that. Whenever you see our self-driving car simulating in a photorealistic, physically based city, that work is also Lightspeed Studio.

Lightspeed Studio is almost like Nvidia’s special forces. They go off and work on amazing things the world has never seen before. That’s their mission, to do what has been impossible before. They’re the Industrial Light and Magic, if you will, of realtime computer graphics.

DPUs

Above: The Nvidia BlueField-2 DPU.

Image Credit: Nvidia

Question: On the DPU side, could you give a quick narrative–now that you’ve announced BlueField 2 and you can buy these things in the market, people are starting to get them a bit more. A lot of the announcements, especially the Red Hat and IBM announcements with Morpheus, and the firewall announcements before, have been focused on the network side of DPUs. We know that DPUs and GPUs will combine in the future. But what is the road map looking like right now with market interest in DPUs?

Huang: BlueField is going to be a home run. This year BlueField 2 is being tested, and software developers are integrating it and developing software all over the place. Cloud service providers, we announced a bunch of computer makers that are taking BlueField to the market. We’ve announced a bunch of IT companies and software companies developing on BlueField.

There’s a fundamental reason why BlueField needs to exist. Because of security, because of software-defined data centers, you have to take the application plane, the application itself, and separate it from the operating system. You have to separate it from the software-defined network and storage. You have to separate it from the security services and the virtualization. You have to air gap them, because otherwise–every single data center in the future is going to be cloud native. You can’t protect it from the perimeter anymore. All of the intrusion software is coming in right from the cloud and entering into the middle of the data center, into every single computer. You have to make sure that every single server is completely secure. The way to do that is to separate the application, which could be malware, could be intrusion, from the control plane, so it doesn’t wander through the rest of the data center.

Now, once you separate it, you have a whole bunch of software you have to accelerate. Once you’ve separated the networking software down to BlueField, the storage software, the security service, and all the virtualization stack, that air gapping is going to cause a lot of computation to show up on BlueField. That’s why BlueField has to be so powerful. It has to be so good at processing the operating system of the world’s data center infrastructures.

Why are we going to start incorporating more AI into BlueField, into the GPU, and why do we want BlueField connected to our GPUs? The reason for that is because, if I can go backward, our GPUs will be in the data center, and every single data center node will be a CPU plus a GPU for compute, and then it will be a BlueField with Tensor core processing, basically a GPU, for the AI necessary for realtime cybersecurity. Every single packet, every single application, will be monitored in real time in the future. Every data center will be using AI in real time to study everything. You’re not just going to secure a firewall at the edge of the data center. That’s way yesterday. The future is about zero trust, cloud native, high-performance computing data centers.
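
A very simplified sketch of the “AI watching every packet” pattern Huang describes: extract features per network flow, then score each flow with an anomaly model. This uses scikit-learn purely as an illustrative stand-in; Nvidia’s actual BlueField and Morpheus pipeline is GPU-accelerated and far more sophisticated.

```python
# Simplified illustration of per-flow anomaly scoring, the pattern behind
# "AI monitoring every packet." scikit-learn is a stand-in here; the real
# BlueField/Morpheus pipeline is GPU-accelerated and very different.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical flow features: [bytes, packets, duration_s, distinct_ports]
normal_flows = np.random.default_rng(0).normal(
    loc=[50_000, 40, 2.0, 3], scale=[10_000, 10, 0.5, 1], size=(5_000, 4)
)

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

def is_anomalous(flow: np.ndarray) -> bool:
    """Return True if the flow looks anomalous and should be flagged."""
    return detector.predict(flow.reshape(1, -1))[0] == -1

suspicious = np.array([5_000_000, 20_000, 1.0, 900])  # e.g. a port-scan-like flow
print(is_anomalous(suspicious))
```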

All the way out on the edge, you’ll have something very powerful, but it’s going to be on one chip–essentially an edge data center on one chip. Imagine a BlueField 4 which is really strong in security and networking and such. It has powerful ARM CPUs, data center scale CPUs, and of course our GPUs. That’s essentially a data center on one chip. We’ll put that on the edge. Retail stores, hospitals, banks, 5G base stations, you name it. That’s going to be what’s called the industrial edge AI.

However you want to think about it, the combination of BlueField and GPUs is going to be quite important, and as a result, you’ll see–where today, we have tens of millions of servers in data centers, in the future you’ll see hundreds of millions of server-class computers spread out all over the world. That’s the future. It’ll be cloud native and secure. It’ll be accelerated.

Limiting hash rates to thwart miners

Above: Nvidia’s RTX 3060 Ti is excellent.

Image Credit: GamesBeat

Question: Do you plan to limit hash rates in the future, and do you plan to release multiple versions of your products in the future, with and without reduced hash rates?

Huang: That second question, I actually don’t know the answer. I can’t tell you that I know the future. There’s a reason why we reduced hash rates. We want to steer. We want to protect the GeForce supply for gamers. Meanwhile, we created CMP for the crypto community. The combination of the two will make it possible for the price of GeForce to come down to more affordable levels. All of our gamers that want to have RTX can get access to it.

In the future, I believe–crypto mining will not go away. I believe that cryptocurrency is here to stay. It’s a legitimate way that people want to exchange value. You can argue about whether it’s a store of value, but you can’t argue about value exchange. More important, Ethereum and other forms like it in the future are excellent distributed blockchain methods for securing transactions. You need that blockchain to have some fundamental value, and that fundamental value could be mined. Cryptocurrency is going to be here to stay. Ethereum might not be as hot as it is now. In a year’s time it may cool down some. But I think crypto mining is here to stay.

My intuition is that we will have CMPs and we’ll have GeForce. Hopefully we can serve the crypto miners with CMP. I also hope that crypto miners can buy–when mining becomes quite large, then they can create special bases. Or when it becomes super large, like Ethereum, they can move to proof of stake. It will be up and down, up and down, but hopefully never too big.

We’ll see how it turns out. But I think our current strategy is a good one. It’s very well-received. For us it increases, effectively, the capacity of our company, which we welcome. I’ll keep that question in mind. When I have a better answer I’ll let you know.

The Omniverse

Above: WPP is using Omniverse to build ads remotely.

Image Credit: Nvidia

Question: Omniverse feels like it could become the basis of future digital twin technology. Currently Nvidia is incorporating mainly graphics and simulation technology into Omniverse. But how far can Omniverse expand the concept, into areas such as chemistry or sound waves?

Huang: It’s hard to say about chemical technology. With sonic waves, sonic waves are propagation-based like raytracing, and we can use similar techniques to that. Of course there’s a lot more refraction, and sound can reverberate around corners. But that’s very similar to global illumination as well. Raytracing technology could be an excellent accelerator for sonic wave propagation. Surely we can use raytracing for microwave propagation, or even millimeter wave propagation, such as 5G.

We could, in the future, use raytracing to simulate, using Omniverse, traffic going through a city, and adapt the 5G radio, in real time, using AI to optimize the strength of the millimeter wave radios to the right antennas, with cars and people moving around them. Simulate the whole geometry of the city. Incredible energy savings, incredible data rate throughput improvement.
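
To ground the radio example, the per-ray quantity such a simulation ultimately computes is a link budget: transmit power minus free-space path loss and obstruction losses along the ray. Here is a toy sketch of that calculation with assumed numbers; real ray-traced RF models add reflection, diffraction, and material-dependent effects.

```python
# Toy line-of-sight link budget: the kind of quantity a ray-traced RF
# simulation would compute per ray. Purely illustrative; real propagation
# models add reflection, diffraction, and material-dependent losses.
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Friis free-space path loss in dB."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

def received_power_dbm(tx_power_dbm: float, distance_m: float, freq_hz: float,
                       walls_crossed: int, loss_per_wall_db: float = 6.0) -> float:
    """Received power for one ray: transmit power minus path and wall losses."""
    return (tx_power_dbm
            - free_space_path_loss_db(distance_m, freq_hz)
            - walls_crossed * loss_per_wall_db)

# A 28 GHz millimeter-wave link at 100 m passing through one wall (assumed numbers).
print(f"{received_power_dbm(30.0, 100.0, 28e9, walls_crossed=1):.1f} dBm")
```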

In the case of Omniverse, back to that again, let me make a couple of predictions. This is very important. I believe that there will be a larger market, a larger industry, more designers and creators, designing digital things in virtual reality and metaverses than there will be designing things in the physical world. Today, most of the designers are designing cars and buildings and things like that. Purses and shoes. All of those things will be many times larger, maybe 100 times larger, in the metaverse than in our universe. Number two, the economy in the metaverse, the economy of Omniverse, will be larger than the economy in the physical world. Digital currency, cryptocurrency, could be used in the world of metaverses.

The question is, how do we create such a thing? How do you create a world, a virtual world, that is so realistic that you’re willing to build something for that virtual world? If it looks like a cartoon, why bother? If it looks beautiful and it’s exquisite, it’s worthy of an artist dedicating a lot of time to create a beautiful building or a beautiful product that’s only available in the digital world–you build a car that’s only available in the digital world. You can only buy it and drive it in the digital world. A piece of art you can only buy and enjoy in the digital world.

Above: Nvidia Omniverse

Image Credit: Nvidia

I believe that several things have to happen. Number one, there needs to be an engine, and this is what Omniverse is created to do, for the metaverse that is photorealistic. It has the ability to render images that are very high fidelity. Number two, it has to obey the laws of physics. It has to obey the laws of particle physics, of gravity, of electromagnetism, of electromagnetic waves, such as light, radio waves. It has to obey the laws of pressure and sound. All of those things have to be obeyed. If we can create such an engine, where the laws of physics are obeyed and it’s photorealistic, then people are willing to create something very beautiful and put it into Omniverse.

Last, it has to be completely open. That’s why we selected Universal Scene Description (USD), the language that Pixar invented. We dedicated a lot of resources to make it so that it has the ability to be dynamic, so that physics can happen through USD, so that AI agents can go inside and out, so that these AI agents can come out through AR. We can go into Omniverse using VR, like a wormhole. And finally, Omniverse has to be scalable and in the cloud.
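
For readers who haven’t seen it, Universal Scene Description is an open-source scene format and runtime from Pixar, and its Python bindings are publicly available. A minimal example of authoring a USD scene with a time-sampled attribute follows; this is plain USD, not the Omniverse-specific physics or live-sync extensions Huang mentions.

```python
# Minimal USD authoring with Pixar's open-source pxr bindings
# (available, for example, via the usd-core package on PyPI).
# Plain USD only; no Omniverse-specific extensions are shown here.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_omniverse.usda")

# A transform with a sphere under it.
UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Ball")

# USD attributes can carry time samples, which is part of what makes
# shared, dynamic scenes possible.
radius = sphere.GetRadiusAttr()
radius.Set(0.5, 1.0)   # radius 0.5 at frame 1
radius.Set(1.0, 48.0)  # radius 1.0 at frame 48

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```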

We have created an engine that is photoreal, obeys the laws of physics, rendering physically based materials, supports AI, and has wormholes that can go in and out using open standards. That’s Omniverse. It’s a giant body of work. We have some of the world’s best engineers and scientists working on it. We’ve been working on it for three years. This is going to be one of our most important bodies of work.

Some final thoughts. The computer industry is in the process of being completely reshaped. AI is one of the most powerful forces the computer industry has ever known. Imagine a computer that can write software by itself. What kind of software could it write? Accelerated computing is the path that people have recognized is a wonderful path forward as Moore’s Law in CPUs by itself has come to an end.

In the future, computers are going to continue to be small. PCs will do great. Phones will continue to be better. However, one of the most important areas in computing is going to be data centers. Not only is it big, but the way we program a data center has fundamentally changed. Can you imagine that one engineer could write a piece of software that runs across the entire data center and every computer is busy? And it’s supporting and serving millions of people at the same time. Data center scale computing has arrived, and it’s now the unit of computing. Not just the PC, but the entire data center.

Last, I believe that the confluence, the convergence of cloud native computing, AI, accelerated computing, and now finally the last piece of the puzzle, private 5G or industrial 5G, is going to make it possible for us to put computers everywhere. They’ll be in far-flung places. Broom closets and attics at retail stores. They’ll be everywhere, and they’ll be managed by one pane of glass. That one pane of glass will orchestrate all of these computers while they process data and process AI applications and make the right decisions on the spot.

Several of these dynamics are very important to the future of computing. We’re doing our best to contribute to that.
