Space datacenters would have been dismissed as science fiction only a few years ago, but they’re quickly becoming a reality.
Starcloud, an NVIDIA-backed startup building AI datacenters in orbit, has achieved a significant milestone: training the first large language model in space. The company successfully used an NVIDIA H100 GPU aboard its Starcloud-1 satellite to train Andrej Karpathy’s nanoGPT model on the complete works of Shakespeare, marking a breakthrough moment for both the space industry and artificial intelligence.
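For readers unfamiliar with this style of model: character-level language modeling means learning which character tends to follow which, from raw text. The sketch below is purely illustrative, not Starcloud’s flight code or nanoGPT itself; it shows the simplest possible version of the idea, a bigram character model built from a single Shakespeare line (nanoGPT’s own tutorial starts from a similar bigram baseline before moving to a transformer).

```python
from collections import Counter, defaultdict

# Illustrative only: a tiny character-level bigram model on a Shakespeare
# snippet. The actual mission trained a transformer (nanoGPT) on an H100.
text = "to be, or not to be, that is the question"

# Count how often each character follows each other character.
bigrams = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(ch):
    """Return the most frequently observed character after `ch`."""
    return bigrams[ch].most_common(1)[0][0]

print(most_likely_next("q"))  # in this snippet, 'q' is always followed by 'u'
```

A real transformer replaces these raw counts with learned attention over long contexts, but the training objective is the same: predict the next token.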

“We have just used the NVIDIA H100 onboard Starcloud-1 to train the first LLM in space. Getting the first H100 to work in space required a lot of innovation and hard work from the incredible Starcloud team,” said Adi Oltean, co-founder and Chief Engineer at Starcloud, in announcing the achievement. The team also ran inference on a preloaded Gemma model and plans to test additional AI models in the future.
The successful training mission represents more than just a technical feat—it’s a proof of concept for Starcloud’s ambitious vision of moving computing infrastructure off Earth. “This is a significant first step toward moving almost all computing off Earth to reduce the burden on our energy supplies and take advantage of abundant solar energy in space,” Oltean noted.
Solving Datacenter Challenges from Orbit
The achievement comes months after Starcloud deployed its first satellite with a datacenter-class GPU into space. The company was founded on the premise that space offers fundamental advantages over terrestrial datacenters, which consume enormous amounts of electricity, require extensive cooling infrastructure, and face growing opposition from local communities concerned about their environmental impact.
According to Starcloud co-founder and CEO Philip Johnston, satellites in sun-synchronous orbit have access to constant, uninterrupted solar power, which the company harnesses through large deployable solar arrays. Space also provides what Johnston calls an “infinite heat sink”—the vacuum of space enables passive radiative cooling without water or energy-intensive systems.
These advantages translate into significant cost savings. Starcloud projects that energy costs in orbit will be roughly one-tenth of those for land-based alternatives, even after accounting for launch expenses. The company also estimates a tenfold reduction in carbon-dioxide emissions over a datacenter’s lifetime compared with Earth-based facilities powered by traditional energy sources.
A Glimpse of the Future
The successful AI training mission validates Starcloud’s technological approach and brings the company closer to its long-term goals. Johnston envisions that within a decade, nearly all new datacenters will be built in outer space. By the early 2030s, Starcloud plans to deploy gigawatt-scale orbital datacenters using solar arrays spanning several kilometers.
The company isn’t alone in this vision. Amazon founder Jeff Bezos recently predicted that datacenters would be operating in space within 20 years. Elon Musk has an even more aggressive timeline, predicting that within five years the lowest-cost place to run AI compute will be in space. With Starcloud’s H100 GPU already training AI models in orbit, that future may arrive sooner than expected.
For now, the successful Shakespeare training mission demonstrates that space-based AI computing has moved from concept to reality—a development that could reshape both the datacenter industry and humanity’s relationship with computing infrastructure.