Tuesday, April 1, 2025

Nvidia CEO Jensen Huang reinforces the company’s role in the ‘evolving’ AI trade

Nvidia (NVDA) CEO Jensen Huang took to the company’s fourth quarter earnings call on Wednesday, looking to reaffirm the chip giant’s place in the AI trade — and calm Wall Street jitters around the technology’s future growth.

Shares of Nvidia were off more than 7% on the year going into Wednesday’s print as investors and analysts raised questions about Big Tech’s continued AI spending. The stock sank 8% by the close of trading on Thursday as those doubts persisted.

The fears: That the rise of DeepSeek’s AI models meant developers didn’t need to use pricey chips like Nvidia’s Blackwell GPUs — and that custom chips developed by Nvidia customers like Amazon (AMZN) and Google (GOOG, GOOGL) could threaten the company’s long-term health.

Huang held off on making opening remarks during Nvidia’s call. Instead, he answered analysts’ questions throughout and closed with comments explaining how models like DeepSeek’s will require even more power than earlier models.

When DeepSeek debuted its R1 model in January, it sent AI stocks into a tailspin.

That’s because the company says it developed the software, which rivals OpenAI’s platform, using Nvidia’s H20 chips. Those processors are far less powerful than the AI titan’s Blackwell chips, leading investors to question whether Nvidia was facing an existential crisis. After all, if companies could create AI platforms using more affordable chips, why would they need to spend billions on Nvidia’s high-end processors?

Huang, however, explained that because DeepSeek’s model, and others like it, provided better responses when using more powerful AI chips, Nvidia will continue to benefit from their use.

“The more the model thinks the smarter the answer. Models like OpenAI, Grok 3, DeepSeek-R1 are reasoning models that apply inference time scaling,” Huang said. “Reasoning models can consume 100x more compute. Future reasoning models can consume much more compute.”

Huang also explained that models like DeepSeek’s are driving demand for inferencing, the process of running AI applications. Training AI models requires a massive amount of computing power. But as inferencing becomes the main use case for AI systems, Wall Street investors have questioned whether Nvidia’s customers will opt for cheaper, less powerful chips.

Huang, however, contends that DeepSeek’s models, and those like them, illustrate that inferencing will require plenty of computing power in its own right.

In addition to addressing DeepSeek, Huang also hit on the impact of ASICs on the industry — and what it could mean for Nvidia. ASICs, or application-specific integrated circuits, are chips designed, as the name suggests, for specific applications. Google used its Tensor Processing Unit, which is an ASIC, to train its Gemini AI platform.
