Google first deployed TPUs in its data centers back in 2015, and that early start seems to have stood it in good stead as it navigates the AI revolution alongside other tech companies.
According to data from Epoch AI, Google leads all companies in cumulative AI compute capacity, measured in H100-equivalents, holding roughly 5 million units across a combination of Nvidia GPUs and its own TPUs. The TPU stack alone accounts for the majority of that figure, a testament to more than a decade of in-house chip development. That lead is now paying dividends: even Google's eight-year-old TPUs are running at 100% utilization, a sign of insatiable internal demand rather than slack in the system.
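As a rough illustration of what "H100-equivalents" means: one plausible way to aggregate a heterogeneous fleet is to weight each chip by its raw throughput relative to an H100. The sketch below assumes a simple dense-FLOP/s normalization and uses a hypothetical fleet; Epoch AI's actual methodology is more involved, so treat this as a toy model of the metric, not its definition.

```python
# Toy model of an "H100-equivalents" aggregate.
# Assumption: each chip is weighted by its dense BF16 throughput relative
# to an H100. This is one plausible normalization, not Epoch AI's actual
# method, which may also account for precision, interconnect, utilization, etc.

H100_FLOPS = 989e12  # H100 SXM dense BF16 throughput, ~989 TFLOP/s

# Hypothetical fleet: (chip name, deployed count, dense BF16 FLOP/s per chip)
fleet = [
    ("H100",    1_000_000, 989e12),
    ("A100",    2_000_000, 312e12),  # A100 dense BF16 ~312 TFLOP/s
    ("TPU v5e",   500_000, 197e12),  # TPU v5e peak BF16 ~197 TFLOP/s
]

h100_equivalents = sum(count * flops / H100_FLOPS for _, count, flops in fleet)
print(f"{h100_equivalents:,.0f} H100-equivalents")  # ~1,730,536
```

Under this weighting, two million A100s count for only about 631,000 H100-equivalents, which is why raw chip counts and H100-equivalent figures can diverge sharply.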

Microsoft comes in second at approximately 3.4 million H100-equivalents, all in Nvidia GPUs — a direct reflection of its deep partnership with OpenAI and the compute requirements that come with it. Amazon sits third at around 2.5 million units, a notable portion of which is its own Trainium chips rather than Nvidia silicon. That Amazon has meaningfully diversified away from Nvidia is strategically significant, especially as cloud providers seek to reduce dependence on a single supplier.
Meta ranks fourth at roughly 2.3 million H100-equivalents, almost entirely Nvidia GPUs supplemented by some AMD silicon, underscoring how much the social media giant has bet on third-party hardware for its open-source AI ambitions. Oracle and xAI trail further behind, at approximately 1.1 million and 500,000 units respectively.
Chinese companies, taken together, hold around 1.1 million H100-equivalents, a large share of which comes from Huawei Ascend chips, a pivot forced by US export restrictions on advanced Nvidia GPUs. That China has managed to accumulate meaningful compute capacity despite those restrictions is worth watching closely.
The “Other” category, which aggregates the rest of the market, actually tops the chart at over 5 million units, almost all Nvidia GPUs. It is a reminder that the hyperscaler names get the headlines, but a vast and diffuse ecosystem of cloud customers, research institutions, and mid-tier companies collectively holds the largest pool of deployed AI compute.
The broader takeaway from this chart is structural: Google’s early and sustained bet on custom silicon has given it a hardware moat that pure Nvidia buyers cannot easily replicate. Anthropic’s deal to access one million Google TPUs is one signal of this; even OpenAI has quietly begun using Google TPUs for parts of its inference workload — a striking dynamic given that the two companies are direct competitors in the model race. Meanwhile, Nvidia has pushed back on TPU buzz, arguing its GPUs offer greater versatility and fungibility than application-specific chips — a reasonable point, but one that doesn’t erase Google’s compounding head start.
The AI chip race is, at its core, a proxy war for who controls the infrastructure layer of the AI economy. Right now, Google leads — not just in model quality, but in the hardware that makes those models possible.