Amazon was already the biggest investor in Anthropic, and it now has a foot in the OpenAI door as well.
Amazon has announced a sweeping multi-year strategic partnership with OpenAI, accompanied by a $50 billion investment — $15 billion upfront, with an additional $35 billion to follow once certain conditions are met.

The deal spans several pillars, making it one of the most significant AI infrastructure agreements in recent memory. At its core, Amazon Web Services and OpenAI will jointly develop a Stateful Runtime Environment powered by OpenAI’s models, to be made available through Amazon Bedrock. Unlike conventional API calls that treat each request in isolation, stateful environments allow AI agents to retain context across sessions, access memory, interact with software tools, and manage compute resources — essentially enabling models to take on sustained, multi-step work rather than one-off tasks. The Stateful Runtime Environment is expected to launch within the next few months.
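The contrast the article draws between isolated API calls and a stateful environment can be sketched in a toy Python example. To be clear, the class and function names below are hypothetical and purely illustrative; they are not OpenAI or AWS APIs, and a real runtime would invoke a model rather than the simple tool registry used here.

```python
# Illustrative sketch only: a toy "stateful session" contrasted with
# stateless one-off calls. All names here are hypothetical, not real APIs.

class StatefulSession:
    """Toy session that retains memory and tool results across steps."""

    def __init__(self):
        self.memory = []                          # context retained across calls
        self.tools = {"add": lambda a, b: a + b}  # example tool registry

    def run_step(self, instruction, *args):
        # A real runtime would call a model here; this sketch just invokes
        # a registered tool (if any) and records the step, so the effect
        # of accumulated state is visible.
        result = None
        if instruction in self.tools:
            result = self.tools[instruction](*args)
        self.memory.append((instruction, args, result))
        return result

    def recall(self):
        # Later steps can read everything accumulated so far.
        return list(self.memory)


def stateless_call(instruction, *args):
    # Each call starts from scratch: nothing survives between requests.
    tools = {"add": lambda a, b: a + b}
    return tools.get(instruction, lambda *_: None)(*args)


session = StatefulSession()
session.run_step("add", 2, 3)       # step 1: tool call, result remembered
session.run_step("note", "done")    # step 2: no tool, still remembered
print(len(session.recall()))        # -> 2: both steps are retained

print(stateless_call("add", 2, 3))  # -> 5, but nothing is remembered after
```

The point of the sketch is the `memory` list: multi-step work becomes possible because step 2 can see what step 1 did, whereas each `stateless_call` discards everything when it returns.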
AWS will also become the exclusive third-party cloud distribution provider for OpenAI Frontier, the company's most advanced enterprise platform. Frontier is designed to let organizations build and manage coordinated teams of AI agents operating across real business systems, with governance and enterprise-grade security built in. For enterprises moving from AI experimentation into full production deployment, the Frontier platform on AWS is positioned as a turnkey on-ramp.
On the compute side, the partnership significantly expands an existing relationship. OpenAI and AWS are extending their prior $38 billion multi-year compute agreement by an additional $100 billion over eight years. As part of that commitment, OpenAI will consume approximately two gigawatts of capacity on Trainium, AWS's custom AI silicon, to support demand for the Stateful Runtime Environment, Frontier, and other advanced workloads. The agreement covers both the current Trainium3 chips and next-generation Trainium4 hardware, which AWS expects to begin delivering in 2027 with significantly improved compute performance, memory bandwidth, and high-bandwidth memory capacity.
Rounding out the deal, Amazon and OpenAI will collaborate on customized models that Amazon developers can use to power the company’s own customer-facing products and services. These tailored models will sit alongside Amazon’s existing Nova model family, giving internal teams additional tools to build AI-powered experiences at scale.
“OpenAI and Amazon share a belief that AI should show up in ways that are practical and genuinely useful for people,” said Sam Altman, OpenAI’s co-founder and CEO. Andy Jassy, Amazon’s President and CEO, echoed the sentiment, pointing to strong developer demand for OpenAI-powered services on AWS and expressing enthusiasm about OpenAI’s commitment to Trainium as a sign of a durable, long-term relationship.
The announcement signals a notable shift in the competitive landscape. Amazon had previously staked its AI investment strategy heavily on Anthropic, the safety-focused lab it has poured billions into. By now deepening ties with OpenAI as well, Amazon appears to be hedging across the frontier AI ecosystem while simultaneously cementing AWS as the infrastructure backbone of choice for the industry’s leading model developers. For OpenAI, which has long relied on Microsoft Azure, the AWS partnership opens a significant new distribution channel and provides substantial compute capacity to meet surging enterprise demand.