Surging Rental Costs Of Older GPUs Signal Robust AI Demand

There are concerns that AI might be in a bubble, but rental prices for even older GPU variants appear to provide some evidence to the contrary.

New data tracked by Silicon Data and Bloomberg shows that neocloud rental prices for both A100 and H100 GPUs surged sharply at the start of 2026. According to a chart published by a16z, the H100 RT Index climbed to $2.41 per hour by late February, while the A100 RT Index reached $1.38, with both marking sharp recoveries to multi-month highs. The trend is a strong signal that demand for AI compute remains robust, even for hardware that is no longer the newest generation on the market.

A Sustained Recovery, Not a Blip

What makes the data particularly striking is the context in which the surge is occurring. Both the A100 and H100 had seen their rental prices soften through much of late 2025, with the H100 index dipping close to its floor around $1.94–$2.00 during October and November. The recovery that began in December and accelerated through January and February 2026 suggests a structural demand pickup, not a short-term anomaly.

The A100, which NVIDIA launched in 2020, is by semiconductor standards a fairly mature product. Its pricing holding firm — and indeed rising — while newer chips are available in the market tells a clear story: enterprises and AI developers are consuming every viable GPU they can get their hands on. The economics of AI training and inference are hungry enough that older silicon remains commercially valuable.

User Demand Is Matching Infrastructure Appetite

The GPU pricing surge doesn’t exist in a vacuum. It mirrors a broader pattern of accelerating AI platform growth across consumer and enterprise products. Google Gemini’s web traffic grew 643% year-over-year in February 2026, while ChatGPT’s grew 37% over the same period, according to SimilarWeb data. Grok and Claude also posted triple-digit growth. These numbers reflect millions of additional inference calls every day — each one requiring GPU cycles to complete.

The implication is direct: as AI model usage continues to grow at a rapid pace, the underlying demand for compute does too. This demand is not just theoretical or investment-driven — it is operational. Businesses are deploying AI in production environments, and they need chips to run it.

Why Older GPUs Still Matter

The persistence of demand for A100s in particular reveals something important about how the AI infrastructure market works in practice. While NVIDIA’s H100 and the newer Blackwell-architecture GPUs command the highest performance benchmarks, not every workload needs cutting-edge silicon. Many fine-tuning tasks, smaller inference jobs, and cost-sensitive deployments can run efficiently on A100s — and at a lower cost per hour than newer alternatives, they represent an attractive option for budget-conscious operators.

This has kept the secondary and rental market for A100s alive and active well past the point at which one might have expected the hardware to be commoditized into irrelevance. The price floor has held, and the recent surge suggests that available supply is being absorbed faster than new capacity is coming online.

The Bubble Debate in Context

The GPU pricing data is one more data point pushing back against the narrative that AI investment and adoption have run ahead of real-world utility. Critics of the AI boom have pointed to high capital expenditure by hyperscalers, uncertain monetization timelines, and the volatility in AI startup valuations as signs of a potential bubble. Those concerns are not without merit, and distinguishing real trends from hype remains an important exercise for any investor or operator in this space.

But pricing in a commodity rental market is hard to fake. When enterprises are willing to pay $2.41 per hour for an H100 and $1.38 for an A100 in a competitive cloud market, that reflects genuine economic pressure — real workloads, real budgets, real demand. Speculation can inflate equity valuations; it does not so easily inflate the price of GPU-hours. The latest chart offers a grounded view of where AI compute demand actually stands. And by that measure, the picture looks less like a bubble deflating and more like an industry that continues to outpace available supply.
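To put those hourly rates in perspective, here is a back-of-the-envelope annualization. The $2.41 and $1.38 figures come from the index data quoted above; the assumption of continuous, fully utilized rental is mine, so treat the results as a rough upper bound on per-GPU spend, not a vendor quote.

```python
# Illustrative annualized rental cost at the quoted neocloud rates.
# Rates are from the article; 100% utilization is an assumption for
# a simple upper-bound estimate.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_cost(hourly_rate: float, utilization: float = 1.0) -> float:
    """Annual rental spend for one GPU at a given hourly rate and utilization."""
    return hourly_rate * HOURS_PER_YEAR * utilization

h100_yearly = annual_cost(2.41)  # ~$21,112 per GPU-year
a100_yearly = annual_cost(1.38)  # ~$12,089 per GPU-year

print(f"H100: ${h100_yearly:,.0f}/yr  A100: ${a100_yearly:,.0f}/yr")
print(f"A100 rents at {1.38 / 2.41:.0%} of the H100 rate")
```

Even at roughly 57% of the H100's hourly rate, a fully utilized A100 still represents five figures of annual spend per card, which is consistent with the article's point that renters are paying these prices for real workloads rather than speculation.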
