- Nvidia reports fourth-quarter financial results on Wednesday after the bell.
- Investors are wary of any signs that Nvidia’s most important customers might be tightening their belts after years of big capital expenditures.
- This is particularly concerning in the wake of recent AI breakthroughs out of China that suggested billions of dollars' worth of Nvidia GPUs aren't needed to train and use cutting-edge AI.
Nvidia reports fourth-quarter financial results on Wednesday after the bell.
It’s expected to put the finishing touches on one of the most remarkable years ever for a large company. Analysts polled by FactSet expect $38 billion in sales for the quarter ending in January, which would be a 72% increase from the same period a year earlier.
The January quarter will cap off the second straight fiscal year in which Nvidia’s sales more than doubled. It’s a breathtaking streak driven by the fact that Nvidia’s data center graphics processing units, or GPUs, are essential hardware for building and deploying artificial intelligence services like OpenAI’s ChatGPT. In the past two years, Nvidia stock has risen 478%, at times making it the most valuable U.S. company, with a market cap over $3 trillion.
But Nvidia’s stock has slowed in recent months as investors question where the chip company can go from here.
It’s trading at roughly the same price it did last October, and investors are wary of any signs that Nvidia’s most important customers might be tightening their belts after years of big capital expenditures. This is particularly concerning in the wake of recent AI breakthroughs out of China.
Much of Nvidia’s sales go to a handful of companies building massive server farms, usually to rent out to other companies. These cloud companies are typically called “hyperscalers.” Last February, Nvidia said a single customer accounted for 19% of its total revenue in fiscal 2024.
Morgan Stanley analysts estimated this month that Microsoft will account for nearly 35% of spending in 2025 on Blackwell, Nvidia’s latest AI chip. Google is at 32.2%, Oracle at 7.4% and Amazon at 6.2%.
This is why any sign that Microsoft or its rivals might pull back spending plans can shake Nvidia stock.
Last week, TD Cowen analysts said that they’d learned that Microsoft had canceled leases with private data center operators, slowed its process of negotiating new leases and shifted planned spending from international data centers to U.S. facilities.
The report raised fears about the sustainability of AI infrastructure growth, which could mean less demand for Nvidia’s chips. TD Cowen’s Michael Elias said his team’s findings point to “a potential oversupply position” for Microsoft. Shares of Nvidia fell 4% on Friday.
Microsoft pushed back Monday, saying it still planned to spend $80 billion on infrastructure in 2025.
“While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions. This allows us to invest and allocate resources to growth areas for our future,” a spokesperson told CNBC.
Over the last month, most of Nvidia’s key customers touted large investments. Alphabet is targeting $75 billion in capital expenditures this year, Meta will spend as much as $65 billion and Amazon is aiming to spend $100 billion.
Analysts say about half of AI infrastructure capital expenditures ends up with Nvidia. Many hyperscalers dabble in AMD’s GPUs and are developing their own AI chips to lessen their dependence on Nvidia, but the company holds the majority of the market for cutting-edge AI chips.
So far, these chips have been used primarily to train cutting-edge AI models, a process that can cost hundreds of millions of dollars. After the AI is developed by companies like OpenAI, Google and Anthropic, warehouses full of Nvidia GPUs are required to serve those models to customers. That’s why Nvidia projects its revenue to continue growing.
Another challenge for Nvidia is last month’s emergence of Chinese startup DeepSeek, which released an efficient, “distilled” AI model. Its performance was high enough to suggest that billions of dollars’ worth of Nvidia GPUs aren’t needed to train and use cutting-edge AI. The release temporarily sank Nvidia’s stock, wiping out almost $600 billion of its market cap.
Nvidia CEO Jensen Huang will have an opportunity on Wednesday to explain why AI will continue to require more GPU capacity even after last year’s massive buildout.
Recently, Huang has spoken about the “scaling law,” an observation from OpenAI in 2020 that AI models get better the more data and compute are used when creating them.
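The article doesn’t name the underlying research, but the 2020 OpenAI observation Huang refers to is commonly summarized as a power law: a model’s error falls smoothly as training compute grows. A rough sketch of the relationship (constants are empirical fits, not figures from this article) is:

$$L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}$$

where $L$ is the model’s loss (lower is better), $C$ is the compute used in training, and $C_c$ and $\alpha_C$ are empirically fitted constants. Because $\alpha_C$ is small, each fixed improvement in loss requires multiplying compute, which is the dynamic that has driven ever-larger GPU purchases.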
Huang said that DeepSeek’s R1 model points to a new wrinkle in the scaling law that Nvidia calls “Test Time Scaling.” Huang has argued that the next major path to AI improvement is applying more GPUs to the process of deploying AI, or inference. That allows chatbots to “reason,” or generate a lot of data in the process of thinking through a problem.
AI models are trained only a few times during development and fine-tuning, but once deployed they can be called millions of times per month, so using more compute at inference would require more Nvidia chips in customers’ hands.
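The one-time-training versus ongoing-inference economics described above can be illustrated with a toy calculation. All figures here are hypothetical placeholders chosen purely to show the shape of the argument, not estimates of any real model's costs:

```python
# Toy comparison of one-time training compute vs. recurring inference compute.
# TRAIN_COST_GPU_HOURS and INFER_COST_GPU_HOURS are hypothetical numbers
# used only to illustrate why inference demand can dwarf training cost at scale.

TRAIN_COST_GPU_HOURS = 5_000_000   # one-time cost to train a model (hypothetical)
INFER_COST_GPU_HOURS = 0.001       # GPU-hours consumed per user query (hypothetical)

def monthly_inference_hours(queries_per_month: int) -> float:
    """GPU-hours spent serving the model each month at a given query volume."""
    return queries_per_month * INFER_COST_GPU_HOURS

for queries in (1_000_000, 100_000_000, 10_000_000_000):
    monthly = monthly_inference_hours(queries)
    months_to_match_training = TRAIN_COST_GPU_HOURS / monthly
    print(f"{queries:>14,} queries/mo -> {monthly:>12,.0f} GPU-hrs/mo; "
          f"matches training cost in {months_to_match_training:,.1f} months")
```

At low usage, training dominates; at large query volumes, monthly inference compute overtakes the entire training budget within weeks, which is the argument Huang is making for continued GPU demand.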
“The market responded to R1 as in, ‘oh my gosh, AI is finished,’ that AI doesn’t need to do any more computing anymore,” Huang said in a pre-taped interview last week. “It’s exactly the opposite.”