OpenAI has thus far depended on other providers for the chips to train its models, but it's now taking steps towards becoming self-sufficient in that regard.
OpenAI will reportedly start mass production of its own chips in partnership with chipmaker Broadcom next year. Broadcom's CEO referred to a new customer that has placed a $10 billion order for chips. While neither company confirmed the news, the new partner is believed to be OpenAI. Broadcom's shares rose 9 percent on the news, and are up 30 percent this year.

Thus far, OpenAI has primarily used NVIDIA's GPUs through Microsoft's cloud infrastructure to train its models. More recently, it has also begun using Google's TPU chips for inference to meet customer demand. Only a handful of companies make their own AI chips: NVIDIA is the market leader with its GPUs, and AMD also makes high-powered chips. Google has its own TPUs, which it uses internally to train and run its models but hasn't yet made broadly available commercially. Amazon has its Trainium chips, while Microsoft and IBM also have some custom chips.
These chips are the building blocks of all AI applications. Not only are they used to train AI models, but they're also used to run those models against user queries once training is complete. As such, they're a big part of the AI value chain. The demand for AI chips has made NVIDIA, the biggest player in the space, the most valuable company in the world.
It would be strategically advantageous for any AI company to have its own chips. Google appears to be the clear leader in this regard — it now not only has world-class models, but also has its TPU chips, which have been around for several years. OpenAI, Anthropic and xAI have similarly powerful models, but don't own the chips used to train and run them. OpenAI is the biggest startup in the AI space with a valuation of $500 billion, and appears to be the first of these to plug the gap by creating chips of its own. It remains to be seen how the OpenAI-Broadcom chips fare, but OpenAI is signaling its intention to control an ever-greater part of its AI supply chain.