Andrej Karpathy Suggests Fewer People Leave Anthropic To Start Companies Because Its Shares Are Less Liquid

Anthropic has been relatively insulated from the recent churn in the tech space, in which researchers rapidly switch labs and spin off new companies—but there may be an unexpected reason why.

A recent exchange on X has shed light on what may be the real driver behind Anthropic’s notably low employee turnover compared to its competitors in the AI race. The conversation began when an X user observed a striking pattern in the industry: “There are dozens or perhaps a couple hundred ex-OpenAI, xAI, Google DeepMind researchers founding companies in the current climate. There are, as far as I know, zero people leaving to found startups out of Anthropic. Really makes you think.”

The response from xAI researcher Liangchen Luo was blunt: “The simplest answer: the liquidity of Anthropic options is the worst among those frontier labs.”

This explanation caught the attention of Andrej Karpathy, former director of AI at Tesla and a respected voice in the artificial intelligence community. Karpathy’s response—a bullseye emoji followed by pointed commentary—suggested Luo had hit upon something fundamental. “It’s interesting how large of a fraction of people don’t see the dominant first order term that drives behavior of people and companies,” Karpathy wrote. “You can construct a powerful world model just by 1) understanding the system and 2) assuming there is only this single term.”

The Churn Differential

The contrast in employee retention between Anthropic and its competitors has become increasingly apparent throughout 2024 and 2025. While OpenAI, Google DeepMind, and xAI have all experienced significant talent drain—whether to competing labs or to new startup ventures—Anthropic has remained remarkably stable.

This difference became especially stark during Meta’s aggressive talent acquisition campaign in mid-2025. When Mark Zuckerberg launched his “Superintelligence Labs” initiative, Meta successfully poached at least eight researchers from OpenAI, two from Google DeepMind, and one from Anthropic. The disparity was notable: OpenAI lost multiple high-profile researchers including Jiahui Yu, Shuchao Bi, Shengjia Zhao, and Hongyu Ren, while Anthropic remained largely unscathed.

OpenAI’s chief research officer Mark Chen described the exodus as feeling like “someone has broken into our home and stolen something,” and the company was forced to give employees a week off to recover from the turmoil. Meanwhile, Anthropic quietly continued its work with minimal disruption.

The Culture Narrative

The conventional explanation for Anthropic’s low churn rate has centered on the company’s distinctive culture and mission. Founded by former OpenAI executives Dario Amodei and Daniela Amodei in 2021, Anthropic has positioned itself as the safety-first AI lab, emphasizing responsible AI development and alignment research. The company’s commitment to building AI systems that are “helpful, harmless, and honest” has resonated with researchers who prioritize ethical considerations alongside technical advancement.

This mission-driven approach has been cited frequently as a key differentiator in the talent wars. Many in the industry have assumed that Anthropic employees simply don’t want to give up working at a company where safety and ethics are paramount, choosing to stay rather than chase potentially lucrative opportunities elsewhere or at startups with less rigorous safety commitments.

The Financial Reality

However, Karpathy’s endorsement of the liquidity explanation suggests that the story may be more prosaic—and more financial—than the cultural narrative implies. The liquidity of equity compensation is a critical factor in any tech employee’s calculus, particularly at late-stage private companies with high valuations.

As of July 2025, Anthropic was valued at approximately $64.69 billion; by September 2025, it had secured a $13 billion Series F financing at a $183 billion valuation. For employees holding stock options, these astronomical valuations are meaningless without a path to liquidity—a way to actually convert those paper gains into cash.

OpenAI, by contrast, has been more aggressive in providing liquidity options. The company has explored a $500 billion valuation via a secondary share sale for current and former employees, signaling a clear path for employees to cash out their equity. Similarly, xAI—founded by Elon Musk—has offered employees potential liquidity through its various funding rounds. xAI merged with social media company X in an all-stock deal valuing xAI at about $80 billion and X at $33 billion, creating additional pathways for employees to realize value from their equity.

Why Liquidity Matters for Entrepreneurship

The connection between equity liquidity and entrepreneurial activity is straightforward: starting a company requires capital, and for many researchers, their primary source of wealth is equity compensation from their current employer. If that equity is illiquid—locked up with no clear path to convert it to cash—then leaving to start a company becomes financially risky or even impossible.

An OpenAI researcher who can sell a portion of their vested options through a secondary market transaction has the financial runway to take the leap into entrepreneurship. An Anthropic researcher with equally valuable options on paper, but no mechanism to actually sell them, faces a very different calculation—they might have the wealth required to take the entrepreneurial plunge, but it exists only in theory.

This dynamic creates what economists call a “golden handcuffs” effect, but with an important twist: it’s not just about unvested equity that would be lost by leaving. Even fully vested options are worthless if there’s no buyer, and Anthropic’s more restrictive approach to liquidity means that employees face a stark choice: stay and hope for an eventual IPO or acquisition, or leave and potentially forfeit millions in unrealized gains.
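The arithmetic behind this "golden handcuffs" dynamic can be made concrete with a toy sketch. The figures and the 20% secondary-sale cap below are hypothetical illustrations, not reported terms from any lab:

```python
# Toy illustration with hypothetical numbers: vested "paper wealth" vs. cash
# an employee can actually access to fund a startup.
def accessible_cash(vested_value: float, secondary_fraction: float,
                    has_secondary_market: bool) -> float:
    """Cash an employee could raise by selling vested equity."""
    # With no secondary market, vested equity cannot be sold at any price.
    if not has_secondary_market:
        return 0.0
    return vested_value * secondary_fraction

# Two researchers with identical $5M in vested paper value:
with_tender = accessible_cash(5_000_000, 0.20, True)    # tender offer allows selling 20%
without_tender = accessible_cash(5_000_000, 0.20, False)  # no buyer exists

print(with_tender)     # 1000000.0 of real startup runway
print(without_tender)  # 0.0, wealth exists only on paper
```

The point of the sketch is that the entrepreneurship decision hinges on `accessible_cash`, not `vested_value`: two employees with identical paper wealth face entirely different risk calculations.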

The Implications

If Karpathy is right—and his track record and reputation suggest his view should be taken seriously—then the implications for the AI industry are significant. It suggests that Anthropic’s stability may have less to do with its admirable commitment to AI safety and more to do with the structure of its equity compensation. While the company’s culture and mission certainly matter, they may be secondary to basic financial incentives.

This doesn’t diminish Anthropic’s achievements or its genuine focus on safety research. The company has seen its annualized revenue surge from $1 billion to over $5 billion, and it serves more than 300,000 business clients. Its Claude models are widely respected for their capabilities and thoughtful approach to AI alignment. But it does suggest that understanding the AI talent landscape requires looking beyond mission statements to the financial mechanics that shape individual decisions.

For Anthropic, this explanation is something of a double-edged sword. On one hand, low employee turnover is valuable—it maintains institutional knowledge, preserves research momentum, and avoids the disruption that comes with talent loss. On the other hand, if employees are staying primarily because they can’t access their equity value, that could breed resentment and potentially create a retention problem in the future if liquidity expectations aren’t met.

The broader lesson, as Karpathy suggests, is about the power of simple economic incentives to explain complex organizational behaviors. In the high-stakes world of frontier AI research, where mission-driven narratives and ambitious visions of AGI dominate the discourse, it’s easy to forget that for individual employees making career decisions, practical financial considerations often trump ideological commitments. Understanding this “dominant first order term,” as Karpathy puts it, may be key to making sense of the AI industry’s evolving talent dynamics.

As the AI race intensifies and valuations continue to climb—with OpenAI at $500 billion, Anthropic at $183 billion, and competition fierce across the board—the question of equity liquidity may become increasingly central to how these companies compete for talent. Culture matters, mission matters, but according to one of the industry’s most respected observers, liquidity may matter most of all.

Posted in AI