Recursive Raises $650 Million At $4.65 Billion Valuation To Create Self-Improving AI

Brand-new startups continue to raise massive rounds to stake out emerging niches of AI.

Recursive, a stealth-mode AI lab, has emerged from the shadows with an audacious pitch: build an AI that improves itself — endlessly, and without human intervention. To back that bet, the company has raised $650 million at a $4.65 billion valuation in a round led by GV (Google Ventures) and Greycroft, with participation from AMD Ventures and NVIDIA.

The round is part of a broader surge in mega AI funding that has defined the past two years, as investors pour capital into teams chasing the next frontier of intelligence — beyond scaling laws, beyond human-in-the-loop development.

The Founding Team

Recursive was founded by former team leaders from OpenAI, Google DeepMind, Meta AI, Salesforce AI, and Uber AI. Richard Socher, one of the most-cited researchers in AI history, anchors the founding team. He previously served as Chief Scientist and EVP at Salesforce, where he built and led the company’s entire AI research and product stack. Before Salesforce, he founded MetaMind, which Salesforce acquired. He is widely credited with bringing deep learning into natural language processing and with pioneering word vector representations that underpin today’s large language models.

Tim Rocktäschel, co-founder and research lead, is a professor of AI at University College London and former Director and Principal Scientist at Google DeepMind, where he ran the Open-Endedness research group. His team won the ICML 2024 Best Paper Award for Genie, an interactive world model capable of generating playable environments from a single image.

The co-founders are joined by former OpenAI researchers Jeff Clune, Josh Tobin, and Tim Shi. Clune is a pioneer in evolutionary algorithms and open-ended AI systems; his Darwin Gödel Machine work at Sakana AI demonstrated that AI agents could autonomously rewrite their own code to improve benchmark performance. Tobin built OpenAI’s robotics capabilities and later co-founded Gantry, an ML monitoring startup.

In total, Recursive has over 25 people and is growing fast. The team includes researchers in open-ended algorithms, quality diversity algorithms, AI-generating algorithms, self-improving coding agents, automated red teaming, prompt engineering, world models, vision transformers, and retrieval-augmented generation.

What Makes Recursive Different

Most AI labs today are racing to build bigger, better foundation models — and that race requires armies of human researchers to design experiments, curate data, run evaluations, and decide what to work on next. Recursive’s thesis is that this human bottleneck is the real speed limit on AI progress.

The company’s approach draws a direct parallel to evolution: just as Darwinian processes produced intelligence through an open-ended archive of interestingly different discoveries — building from replicating molecules to sight, language, and science — Recursive wants to replicate this dynamic in software. The system would grow its own archive of innovations, with each discovery enabling the next, in a loop that has no ceiling.
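The archive dynamic described here has a well-known minimal form in the open-endedness literature: quality-diversity search, where an archive retains the best solution found in each behavioral niche and new candidates mutate prior discoveries, so each entry seeds the next. A toy sketch of that loop (illustrative only — this is not Recursive's system, and the objective and parameters are hypothetical):

```python
import random

def fitness(x):
    # Quality of a candidate: a toy objective peaking at x = 0.5.
    return 1.0 - abs(x - 0.5)

def niche(x, bins=10):
    # Behavior descriptor: which tenth of [0, 1] the candidate occupies.
    return min(int(x * bins), bins - 1)

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # niche id -> (fitness, candidate)
    for _ in range(iterations):
        if archive:
            # Mutate an existing elite: each new candidate builds on a
            # prior discovery held in the archive.
            _, parent = rng.choice(list(archive.values()))
            x = min(max(parent + rng.gauss(0, 0.1), 0.0), 1.0)
        else:
            x = rng.random()
        key = niche(x)
        f = fitness(x)
        # Keep the candidate if its niche is empty or it beats the incumbent.
        if key not in archive or f > archive[key][0]:
            archive[key] = (f, x)
    return archive

archive = map_elites()
print(len(archive))  # number of distinct niches the search has filled
```

Even on this toy objective, the loop fills niches far from the fitness peak — the archive values being different, not just being better, which is the property Recursive's pitch aims to scale up.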

Socher has called this the “third and perhaps final stage of neural networks” — a system that automates evaluation, data selection, training, post-training, and even research direction itself. Where existing labs like OpenAI, Anthropic, and Google DeepMind have explored aspects of automated research, none has organized an entire company around recursive self-improvement as its core commercial thesis.

This stands in contrast to the most valuable AI startups today, which are primarily focused on building and deploying foundation models — not on automating the scientific process of creating them.


The Competitive Landscape

Recursive is entering a crowded but differentiated field. AMI Labs, founded by Yann LeCun, is pursuing world models. Ineffable Intelligence, founded by DeepMind’s David Silver, is focused on reinforcement learning. Safe Superintelligence, Ilya Sutskever’s company, is targeting safety-first paths to superintelligence.

What separates Recursive is scope. The others are each building toward a particular destination; Recursive wants to automate the building process itself. The company plans to first focus on the science of AI — creating AI that improves AI — and then turn that playbook toward every other scientific discipline.

Early precursors exist. Google DeepMind’s AlphaEvolve has demonstrated LLM-based agents designing and optimizing algorithms through evolutionary search. Sakana AI’s Darwin Gödel Machine, developed in part by Recursive co-founder Jeff Clune, showed self-rewriting agents improving coding benchmarks. But these are narrow, domain-specific demonstrations. Recursive is swinging for the full pipeline.


Safety Front and Center

Recursive is entering territory that AI safety researchers flag as among the most consequential in the field. OpenAI, Anthropic, and Google all cite automated AI research in their safety frameworks alongside risks like cybersecurity and chemical weapons.

The company says safety will be a priority throughout — ensuring the system helps humanity flourish by maximizing benefits while reducing risks. Given that Big Tech is projected to spend $655 billion on AI infrastructure in 2026 alone, the stakes of getting this right are hard to overstate.

The team includes researchers with backgrounds in automated red teaming and capability discovery — a deliberate attempt to build safety expertise into the foundation, not bolt it on later.


What Comes Next

Recursive has offices in San Francisco and London, and is actively hiring across research and engineering. The company has no publicly launched product yet, but is targeting a public launch in mid-2026.

The funding gives the team runway to secure large compute clusters and run what they’re calling their first “Level 1” autonomous training run. If the thesis holds, that run won’t just produce a model — it will produce a model that knows how to make a better model.
