Elon Musk continues to think about combining the strengths of his many companies in interesting ways.
In a recent conversation on the All-In Podcast, Musk outlined an ambitious vision that would transform Tesla’s growing vehicle fleet into what could become the world’s largest distributed AI inference network. The proposal, which Chamath Palihapitiya described as “insane” before acknowledging that the math actually works, would leverage the computational power sitting idle in millions of parked Teslas to create a massive inference datacenter capable of processing AI queries at unprecedented scale.

The conversation began with Palihapitiya presenting the concept: “You said we could connect all the Teslas and allow them in downtime to actually offer up inference and you can stream them all together. I think the math is like, it could actually be like a hundred gigawatts. Is that right?”
Musk confirmed the feasibility, explaining the underlying economics and infrastructure: “If ultimately this Tesla fleet that is a hundred million vehicles, which I think we probably will get to at some point, a hundred million vehicle fleet, and they have mostly state-of-the-art inference computers in them that each are a kilowatt of inference computer, and they have built-in power and cooling, and connect to the Wi-fi. And at the end you’d have a hundred gigawatts of inference compute.”
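The arithmetic behind the hundred-gigawatt figure is straightforward. Here is a minimal back-of-envelope sketch using only the numbers from Musk’s quote (a fleet of 100 million vehicles, each carrying roughly one kilowatt of inference hardware); the figures are his projections, not current fleet data:

```python
# Back-of-envelope check of the figures quoted on the podcast.
# Assumptions come straight from Musk's quote: 100 million vehicles,
# ~1 kW of inference compute per vehicle. Purely illustrative.

fleet_size = 100_000_000          # projected Tesla fleet size (vehicles)
compute_per_vehicle_kw = 1.0      # "each are a kilowatt of inference computer"

total_kw = fleet_size * compute_per_vehicle_kw
total_gw = total_kw / 1_000_000   # 1 GW = 1,000,000 kW

print(f"Aggregate inference compute: {total_gw:,.0f} GW")
# -> Aggregate inference compute: 100 GW
```

In other words, the “hundred gigawatts” is simply the fleet size multiplied by per-vehicle compute; the claim stands or falls on whether those two inputs are ever reached in practice.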
The implications of this proposal are staggering. A hundred gigawatts of distributed inference compute would substantially augment xAI’s existing AI infrastructure, effectively turning the Tesla fleet into a distributed datacenter. There would be challenges, though: running inference on parked cars would draw power from their batteries, so Tesla would need to compensate owners in some way for the electricity used.
But if these issues can be ironed out, and Tesla’s fleet reaches the scale that Musk envisions, it would be yet another example of Musk’s strategy of creating synergies across his portfolio of companies. Just as he has proposed building AI datacenters in space by combining SpaceX’s launch capabilities with xAI’s computational needs, and proposed using SpaceX rockets to send copies of Grokipedia into space, he is now exploring how Tesla’s automotive infrastructure could serve xAI’s growing demand for inference capacity. The approach would address multiple challenges at once: it would monetize idle computational assets, reduce the need for dedicated datacenter construction, and leverage the power and cooling systems already built into the vehicles. Significant technical hurdles remain, however, including latency, consistent availability, and the security implications of distributed compute at this scale. If realized, this distributed inference network could fundamentally reshape how we think about AI infrastructure, turning every Tesla on the road into a node in a planetary-scale supercomputer.