Even as AI models keep getting better, companies are spending ever larger sums to train and run them.
The numbers coming out of this earnings season make that abundantly clear. Amazon, Google (Alphabet), Microsoft, and Meta — the four largest hyperscalers — have collectively guided their 2026 capital expenditures to roughly $715 billion, up more than 70% from the already-record $410 billion they spent in 2025. To put that in context, it is roughly as much as the US government spends annually on Medicare.

The Breakdown By Company
Amazon leads the pack with $200 billion in projected capex for 2026, up from around $125 billion the year before — a 60% jump in a single year. Most of this goes toward AWS infrastructure, data centers, and its homegrown Trainium AI chips, which have crossed a $20 billion annual revenue run rate growing at triple-digit percentages year-over-year.
Alphabet is nearly tied with Amazon, guiding to $180–190 billion in 2026 capex. CEO Sundar Pichai’s stated reason was blunt: “We are compute constrained in the near term.” Google Cloud grew 63% in Q1 2026 alone, and Pichai confirmed that cloud revenue would have been even higher had it been able to meet demand. Alphabet also issued a rare 100-year “century bond” — the first by a tech company since Motorola in 1997 — as part of a $32 billion debt offering to help fund this build-out.
Microsoft is tracking toward $190 billion in fiscal 2026 capex, well above the $152 billion average analyst estimate heading into earnings. CFO Amy Hood attributed $25 billion of the overage to rising memory chip and component costs, and told investors the company expects to remain capacity-constrained through at least the end of the year.
Meta, despite not being a cloud provider in the traditional sense, raised its full-year 2026 capex guidance by $10 billion to a range of $125–145 billion. The company cited higher component pricing, land scarcity, and competition for skilled workers to build data centers. Its priority is unambiguous: when Meta CFO Susan Li was asked about buybacks on the earnings call, she replied that the company’s highest priority is “investing our resources to position ourselves as a leader in AI.”
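The headline aggregate can be cross-checked against the per-company guidance above. A minimal sketch in Python; collapsing the Alphabet and Meta ranges to their midpoints is an assumption, since both companies guide to ranges rather than point estimates:

```python
# Per-company 2026 capex guidance quoted above, in $B.
# Ranges (Alphabet, Meta) are collapsed to midpoints -- an assumption.
guidance_2026 = {
    "Amazon": 200,
    "Alphabet": (180 + 190) / 2,   # guided range: $180-190B
    "Microsoft": 190,
    "Meta": (125 + 145) / 2,       # guided range: $125-145B
}

total_2026 = sum(guidance_2026.values())   # 710, near the ~$715B headline
growth_vs_2025 = total_2026 / 410 - 1      # vs. the $410B spent in 2025

print(f"2026 total: ${total_2026:.0f}B")        # 2026 total: $710B
print(f"growth vs 2025: {growth_vs_2025:.0%}")  # growth vs 2025: 73%
```

The midpoint sum lands at $710 billion, consistent with the roughly $715 billion aggregate and the "more than 70%" growth over 2025.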
Q1 Spending Alone Topped The Manhattan Project
The pace of deployment is staggering even quarter by quarter. In Q1 2026, Alphabet, Amazon, and Microsoft, the three largest hyperscalers, collectively spent $112 billion on infrastructure. That is more than three times the inflation-adjusted cost of the Manhattan Project, compressed into 90 days.
Amazon shelled out $45.17 billion in Q1 capex, Alphabet $35.67 billion (more than doubling year-over-year), and Microsoft $30.88 billion (up 84% year-over-year). Microsoft’s AI revenue crossed an annual run rate of $37 billion in the same quarter, up 123% year-over-year.
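The quarterly arithmetic follows from the three figures above. The Manhattan Project's inflation-adjusted cost is taken here as roughly $33 billion, which is an assumption for this sketch (published estimates vary):

```python
# Q1 2026 capex for the three companies, in $B (figures quoted above).
q1_capex = {"Amazon": 45.17, "Alphabet": 35.67, "Microsoft": 30.88}
q1_total = sum(q1_capex.values())   # 111.72, the ~$112B quarterly total

# Assumed inflation-adjusted Manhattan Project cost, in $B.
MANHATTAN_PROJECT = 33
multiple = q1_total / MANHATTAN_PROJECT   # ~3.4x, i.e. "more than three times"

print(f"Q1 total: ${q1_total:.2f}B ({multiple:.1f}x the Manhattan Project)")
```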
The ROI Question That Won’t Go Away
The obvious question — one that investors have been pressing on for two consecutive earnings cycles — is whether this spending will ultimately pay off. The free cash flow picture is difficult. Amazon is projected to turn free cash flow negative this year, with Morgan Stanley estimating a deficit of around $17 billion and Bank of America at $28 billion. Alphabet’s free cash flow is expected to drop by nearly 90% to $8.2 billion, from $73.3 billion in 2025. Microsoft fares better, but Barclays still sees a 28% slide before a recovery in 2027.
To bridge the gap, the hyperscalers have turned to the debt markets in unprecedented fashion. Bank of America forecasts hyperscaler debt issuance will hit $175 billion in 2026 — more than six times the $28 billion annual average of the prior five years.
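Both headline ratios in the two paragraphs above fall directly out of the quoted figures, as a quick sketch shows:

```python
# Alphabet free cash flow, in $B: 2025 actual vs. 2026 estimate quoted above.
fcf_2025, fcf_2026 = 73.3, 8.2
fcf_decline = 1 - fcf_2026 / fcf_2025   # ~0.888, i.e. the "nearly 90%" drop

# Hyperscaler debt issuance, in $B: 2026 forecast vs. prior five-year average.
debt_2026, debt_avg = 175, 28
debt_multiple = debt_2026 / debt_avg    # 6.25, i.e. "more than six times"

print(f"Alphabet FCF decline: {fcf_decline:.0%}")       # 89%
print(f"debt issuance multiple: {debt_multiple:.2f}x")  # 6.25x
```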
The bull case, articulated by the hyperscalers themselves, is that demand is not just growing — it is structurally underserved. Google Cloud’s backlog of contracted data center rental agreements has grown to over $460 billion. Microsoft has an $80 billion backlog of Azure orders it cannot fulfill due to power constraints. As Eric Schmidt has warned, AI’s network effects tend to concentrate gains among a small number of players — and the hyperscalers are clearly betting that being one of those players is worth virtually any price of admission.
The Ripple Effects
The spending is reshaping markets well beyond the hyperscalers themselves. Nvidia is the most direct beneficiary: roughly 60% of AI infrastructure spend flows to chips and accelerators, and Nvidia controls 92% of the GPU market. Goldman Sachs, for its part, estimates roughly $180 billion in GPU and accelerator purchases out of an estimated $450 billion in total AI infrastructure spend in 2026 alone, a somewhat lower share of about 40%.
The demand is also making AI software companies richer by the day. Anthropic’s annualized run rate has surged to $44 billion, up from $9 billion at the end of 2025, in part because enterprises are consuming tokens at a pace that SemiAnalysis describes as “arguably only in its early innings.” The company’s gross margins on inference have climbed from 38% to over 70% over the same period, a sign that the economics of AI delivery are improving even as the infrastructure bill grows.
Power and land constraints, not money or demand, are increasingly the binding limits on deployment speed. Modern AI racks draw more than 100 kW each, straining electrical infrastructure designed for a different era. “Speed to power,” how fast a data center can come online with stable electricity, has become the defining competitive variable in the infrastructure race.
What Comes Next
Morgan Stanley managing director Brian Nowak has projected Alphabet alone could spend up to $250 billion in 2027. Goldman Sachs estimates total hyperscaler capex from 2025 through 2027 will reach $1.15 trillion — more than double everything spent from 2022 to 2024. The five hyperscalers have plans to add roughly $2 trillion in AI-related assets to their balance sheets by 2030.
The bear thesis — that this is a bubble primed to burst — hinges on whether AI revenues can scale fast enough to justify infrastructure that is already consuming more capital than these companies generate in free cash flow. The bull thesis is that the constraint is supply, not demand, and that whoever builds the most capacity fastest will capture a disproportionate share of the AI economy.
For now, the spending continues to ratchet up. As Larry Page reportedly put it: “I’m willing to go bankrupt rather than lose this race.” Whether that sentiment reflects wisdom or hubris is the $715 billion question of 2026.