There’s plenty of speculation about whether we’re in an AI bubble, but NVIDIA CEO Jensen Huang has an interesting test for determining whether we really are.
Speaking at the World Economic Forum in Davos, Huang offered a practical, market-based approach to evaluating whether the AI investment frenzy represents a genuine technological shift or speculative excess. Rather than pointing to valuations or venture capital flows, the NVIDIA chief executive highlighted a simple supply-and-demand metric that anyone can verify: try to rent a GPU.

“One good test on the AI bubble is to recognize that NVIDIA has now millions of NVIDIA GPUs in the cloud, in every cloud. We’re used everywhere,” Huang said. “And if you try to rent an NVIDIA GPU these days, it’s so incredibly hard and the spot price of GPU rentals is going up—not just the latest generation, but two-generation-old GPUs. The spot price of rentals are going up.”
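Readers curious to run a rough version of this check themselves can look at public spot-price data. The sketch below is a minimal illustration, assuming AWS credentials and the boto3 library are set up; the instance type, region, and 30-day window are arbitrary examples chosen for illustration, not figures from Huang’s remarks.

```python
# Rough illustration of the "try to rent a GPU" test: pull recent spot-price
# history for one GPU instance type from AWS and average it by day to see
# whether rental prices are trending up. Assumes configured AWS credentials.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

resp = ec2.describe_spot_price_history(
    InstanceTypes=["p3.2xlarge"],          # example older-generation (V100) GPU instance
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(days=30),
)

# Group observed spot prices by day and print the daily average.
daily = {}
for entry in resp["SpotPriceHistory"]:
    day = entry["Timestamp"].date()
    daily.setdefault(day, []).append(float(entry["SpotPrice"]))

for day in sorted(daily):
    prices = daily[day]
    print(day, f"avg ${sum(prices) / len(prices):.3f}/hr over {len(prices)} samples")
```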
According to Huang, this pricing pressure reflects fundamental demand from an expanding universe of AI companies and traditional enterprises reallocating their research budgets. He pointed to pharmaceutical giant Eli Lilly as a case study of this broader transformation.
“Three years ago most of their R&D budget, all of their R&D budget, was probably wet labs,” Huang noted. “Notice the big AI supercomputer that they’ve invested in, the big AI lab. Increasingly that R&D budget’s going to shift towards AI.”
Huang acknowledged that the scale of investment might superficially resemble a bubble. “The AI bubble comes about because the investments are large, and the investments are large because we have to build the infrastructure necessary for all of the layers of AI above it,” he explained.
The implications of Huang’s test are significant. If we were truly in a speculative bubble disconnected from real demand, one would expect excess capacity and falling prices for compute resources. That is what happened during the dot-com era, when telecom companies laid far more fiber-optic cable than the market could use. With AI, by contrast, the persistent scarcity and rising rental costs for even older GPU generations suggest genuine, broad-based adoption. Companies across industries—from drug discovery to financial services to manufacturing—are actively deploying AI applications that require substantial computing power, not merely experimenting with prototypes.
This infrastructure buildout extends beyond cloud providers. Major technology companies have announced tens of billions of dollars in capital expenditures for AI data centers. Microsoft, Google, Amazon, and Meta have all significantly increased their infrastructure spending, and their plans show no signs of slowing down. Meanwhile, enterprises are racing to secure GPU capacity through long-term contracts, often paying premiums for guaranteed access. The constraint isn’t capital or enthusiasm—it’s the physical availability of advanced chips and the data centers to house them, suggesting that, at least for now, the current AI wave is built on tangible demand rather than speculative hope.