SpaceX & Cursor Collaborate To Build ‘World’s Best Coding Model’, SpaceX To Have Right To Buy Cursor For $60 Billion

Two companies looking to catch up in the AI coding race have joined forces in an interesting new deal.

SpaceX — which now owns xAI — and Cursor have announced a partnership to build what they’re calling the “world’s best coding and knowledge work AI.” The announcement, posted by SpaceX on X, says the two companies are “working closely together,” combining Cursor’s distribution among professional developers with SpaceX’s Colossus supercomputer, which the company claims has a million H100-equivalent GPUs. Cursor CEO Michael Truell confirmed the deal on his own account, calling it “a meaningful step on our path to build the best place to code with AI” and specifically citing plans to scale up Composer, Cursor’s proprietary coding model.

The financial terms are striking. SpaceX gets the right to acquire Cursor later this year for $60 billion — a figure that would roughly double Cursor’s current ~$30 billion valuation. If the collaboration doesn’t yield a deal, Cursor has agreed to pay SpaceX $10 billion for use of its compute.

Why Both Sides Need This

The partnership is, at its core, a defensive maneuver from two players who’ve been losing ground.

xAI’s situation is dire. All 11 of its non-Elon Musk co-founders have now left the company, with the exodus accelerating after SpaceX’s February 2026 acquisition of xAI. Musk himself admitted publicly that xAI “was not built right first time around” and called out its coding tools specifically for failing to compete — saying the company needed to “essentially catch up and exceed our competitors on coding.” Somewhat embarrassingly, xAI’s own teams had reportedly been using Anthropic’s Claude via Cursor to write code before Anthropic cut off their access.

Cursor’s pressures are different but equally real. For much of 2025, it was the breakout product of the vibe coding wave, writing a billion lines of code a day at its peak. But Anthropic’s Claude Code has since reached a $2.5 billion annual run rate with over 300,000 business customers, and OpenAI’s Codex has seen usage surge 10x in a matter of weeks. The structural problem is that Cursor effectively pays retail prices for the same models that Anthropic and OpenAI supply to their own tools at wholesale. And the category Cursor built its business on, the AI IDE, is itself being disrupted, with Claude Code and Codex moving into terminal-native and OS-level agent territory that the IDE can’t easily follow.

Cursor’s answer has been Composer, the proprietary model it has been building since 2025 to reduce its dependence on Anthropic and OpenAI. That’s exactly where SpaceX’s Colossus fits in: training at scale requires compute that Cursor doesn’t have and SpaceX does.

The Deal’s Logic

For xAI, this is a smart use of stranded assets. Colossus is one of the largest AI training clusters in the world, but without a functioning research team to run experiments, those GPUs aren’t building competitive models. Cursor brings the technical talent and, crucially, the distribution — 67% of Fortune 500 companies use the product, and it has direct access to the professional developers whose usage patterns are training signal gold.

The acquisition option is also structured intelligently. If the collaboration produces a genuinely competitive coding model, buying Cursor for $60 billion would give SpaceX/xAI the product, the distribution, and the talent in one move, at a valuation that reflects Cursor’s current trajectory rather than a distressed-sale discount. If the model underperforms and no acquisition happens, the $10 billion payment acts as a floor: SpaceX monetizes its compute regardless of outcome.
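The two-branch structure described above can be sketched with the article’s figures. This is a toy illustration of the payoff logic only; the branching and the valuation inputs are assumptions for the example, not disclosed deal mechanics:

```python
# Toy model of SpaceX's two outcomes under the deal, using the figures
# reported in the article ($60B acquisition right, $10B compute payment).
# Purely illustrative; not actual deal terms.

ACQUISITION_PRICE = 60.0  # billions: fixed price of the right to buy Cursor
COMPUTE_PAYMENT = 10.0    # billions: Cursor's payment if no acquisition happens

def spacex_outcome(acquires: bool, cursor_value_at_close: float) -> float:
    """Return SpaceX's rough payoff in billions of dollars.

    acquires: whether SpaceX exercises its right to buy Cursor.
    cursor_value_at_close: hypothetical value of Cursor at that point (billions).
    """
    if acquires:
        # SpaceX pays $60B and receives a company worth cursor_value_at_close.
        return cursor_value_at_close - ACQUISITION_PRICE
    # No acquisition: SpaceX still collects the $10B compute payment.
    return COMPUTE_PAYMENT

# Exercising only beats walking away if Cursor ends up worth more than $70B,
# since not exercising already guarantees $10B.
print(spacex_outcome(True, 90.0))   # exercise with Cursor worth $90B -> 30.0
print(spacex_outcome(False, 50.0))  # walk away -> 10.0
```

The point the sketch makes concrete is that SpaceX's downside is capped at a guaranteed $10 billion, while its upside scales with how valuable the collaboration makes Cursor.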

For Cursor, the calculus is similar. Partnering with a compute-rich counterparty to train Composer more aggressively is cheaper and faster than raising another round purely for GPU access. And the $60 billion acquisition price, if it comes to pass, would hand Cursor’s founders and investors a clean exit at a premium. If Cursor’s product intuition and developer relationships can be combined with serious training runs on Colossus, the result could be a model that competes on the benchmarks that matter.
