Silicon Valley Is “Retarded” If It Thinks AI Won’t Be Nationalized: Palantir CEO Alex Karp

AI companies are already clashing with the US government, as the recent Anthropic controversy shows, and those tensions could escalate from here.

Palantir co-founder and CEO Alex Karp has delivered one of the bluntest warnings yet to Silicon Valley’s AI elite: ignore the politics of job displacement, and you risk losing control of your technology altogether. Speaking at a16z’s American Dynamism Summit, Karp argued that if AI companies hollow out the white-collar class while simultaneously shortchanging the military, they are walking straight into nationalization.


“Now you get to Silicon Valley,” Karp said. “My one message — and again, I don’t want to get into specific people or name names — is this: if Silicon Valley believes we are going to take away everyone’s white-collar job, meaning primarily Democratic-leaning people whom I grew up with, highly educated people who went to elite schools or went to schools that are almost elite, who vote for one party, and you’re going to screw the military — if you don’t think that’s going to lead to nationalization of our technology, you’re retarded.”

He didn’t stop there — Karp turned the insult into something almost philosophical: “And you might be particularly retarded because you have a 160 IQ. But this is where that path is going.”

The remark lands in a very specific context. Karp’s company, Palantir, has built its entire business model around close partnerships with the US military and intelligence community. He is not a detached observer warning from the sidelines — he is perhaps the AI executive most directly invested in the question of how Silicon Valley and the American state coexist. His argument, stripped of its provocative language, is essentially a political economy thesis: democratic societies will not indefinitely tolerate a powerful private industry that displaces educated voters while also refusing to serve national security interests. At some point, governments act.

Karp’s warning takes on an uncanny timeliness given what has unfolded in Washington in the past week. Anthropic publicly refused to allow its models to be used for mass domestic surveillance and fully autonomous weapons systems, and it paid a steep price. The Trump administration designated Anthropic a “supply chain risk” to national security, ordered every federal agency to stop using its technology, and gave the company six months to facilitate a transition to another provider. President Trump personally called the company “radical left” and “woke.” Secretary of War Pete Hegseth accused Anthropic’s CEO of seeking “veto power over the operational decisions of the United States military.”

The confrontation did not end there. Just hours after banning Anthropic, the US government announced a new partnership with OpenAI to deploy its models on classified military networks — on terms that, ironically, included the very same restrictions on domestic surveillance and autonomous weapons that Anthropic had insisted upon. The episode illustrated, in almost theatrical fashion, exactly the kind of government pressure Karp is warning about: the state does not need to formally nationalize an AI company to effectively determine what it can and cannot do, who it can and cannot serve, and whether it survives in its current form.

The Anthropic saga is not an isolated incident but part of a broader pattern. Across Silicon Valley, AI companies are rapidly becoming enmeshed in national security contracting, and with that comes political exposure that few in the industry appear fully prepared for. OpenAI CEO Sam Altman — no ally of Anthropic’s — initially warned that the Pentagon should not be threatening AI companies with the Defense Production Act, before his own company stepped in to fill the gap Anthropic left. Defense tech figures like Palmer Luckey, meanwhile, argued that Anthropic’s position was fundamentally untenable in a democracy, raising harder questions about whether private AI companies should be able to override elected governments on matters of military policy at all.

Karp’s warning speaks to both sides of this tension. The political arithmetic he describes is not complicated: displacing millions of educated, politically active workers while refusing to serve the military is a combination that few governments — regardless of party — would tolerate indefinitely. Whether Silicon Valley’s AI elite chooses to hear him is another matter. In Karp’s telling, the ones most likely to miss the warning are exactly those who believe their intelligence puts them above political reality. That, in his view, is precisely the problem.
