We’re Early In The AI Capex Cycle, Our 8 Year Old TPUs Are Seeing 100% Utilization: Google’s Amin Vahdat

There has been chatter that massive AI infrastructure investments are unsustainable and could be getting ahead of demand, but Google has indicated that this might not be the case.

Assessing the current AI infrastructure landscape in October 2025, Amin Vahdat, a senior Google executive overseeing the company's AI infrastructure, revealed that its oldest tensor processing units still in production, now seven and eight years old, are running at full capacity: a clear indicator of the insatiable demand for AI compute. Vahdat’s comments paint a picture of an industry where supply constraints, not demand weakness, remain the defining challenge.

“We’re early in the cycle, what I would say, certainly relative to the demand that we’re seeing from our internal users,” Vahdat explained. “We’ve been building TPUs for 10 years, so we have now seven generations in production for internal and external use. Our seven and eight-year-old TPUs have 100% utilization.”

The implications of this statement are significant. In the technology sector, hardware typically depreciates rapidly, with newer generations rendering older equipment obsolete. Yet Google’s experience tells a different story. “That just shows what the demand is,” Vahdat continued. “Everyone would of course prefer to be on the latest generation, but whatever they can get.”

Perhaps most revealing was Vahdat’s assessment of the projects being delayed due to capacity constraints. “This tells me that the demand is tremendous, but also who we’re turning away and the use cases that we’re turning away—it’s not like, ‘oh yeah, that’s kind of cool.’ It’s, ‘oh my gosh, we’re actually not going to invest in this and there’s no option because that’s where we are on the list.’”

Vahdat’s comments suggest that Google is having to make difficult choices about which high-priority AI projects receive compute resources, with potentially transformative applications being shelved simply because the infrastructure is not available. This isn’t a story of marginal demand for experimental features; it is mission-critical work being held back by hardware scarcity.

The broader context supports Vahdat’s assessment. Google’s TPU architecture has become increasingly central to the AI industry’s operations. The company trained its benchmark-topping Gemini 3 Pro model exclusively on TPUs, demonstrating the chips’ capability for frontier AI development. More significantly, Anthropic recently signed a deal with Google for access to up to 1 million TPUs for its own AI development work, a massive external validation of both the demand for specialized AI hardware and Google’s position as a major infrastructure provider.

These developments suggest that rather than approaching a plateau, AI infrastructure investment may indeed be in its early innings. If even legacy hardware from nearly a decade ago commands full utilization, the industry appears far from the point of oversupply that some analysts have warned about. For companies betting billions on AI infrastructure buildouts, Vahdat’s perspective offers reassurance that demand remains robust—and perhaps more importantly, that the use cases being constrained by current capacity represent genuinely valuable applications rather than speculative experimentation.
