There is plenty of concern about how AI could ultimately turn on and eliminate humans, but the more likely scenario may be far more insidious than that.
David Sacks, appointed as the White House AI and crypto czar by President Trump, has offered a stark warning about artificial intelligence that diverges sharply from Hollywood narratives. Speaking recently, Sacks argued that the greatest threat from AI isn’t physical annihilation but rather the subtle manipulation of truth and the systematic control of information—a vision more aligned with George Orwell’s dystopian masterpiece than James Cameron’s science fiction blockbuster.

“I almost feel like the term ‘woke AI’ is insufficient to explain what’s going on because it somehow trivializes it,” Sacks said. “What we’re really talking about is Orwellian AI. We’re talking about AI that lies to you, that distorts answers, that rewrites history in real time to serve a current political agenda of the people who are in power. It’s very Orwellian.”
Sacks pointed to recent developments as evidence of this trajectory. “We were definitely on that path before President Trump’s election. It was part of the Biden EO. We saw it happen in the release of that first Gemini model. That was not an accident. Those distorted outputs came from somewhere,” he said, referring to the Biden administration’s executive order on AI.
The venture capitalist and technology executive, known for his role in building PayPal and later investing in companies like Facebook and Airbnb, offered a provocative reframing of AI risk. “To me, this is the biggest risk of AI actually. It was not described by James Cameron. It was described by George Orwell. In my view, it’s not the Terminator, it’s 1984.”
Sacks elaborated on his concerns about how AI systems could become instruments of control. “As AI eats the internet and becomes the main way that we interact and get our information online, it’ll be used by the people in power to control the information we receive. It’ll contain ideological bias. Essentially it’ll censor us all. That trust and safety apparatus that was created for social media will be ported over to this new world of AI.”
Beyond content control, Sacks warned about surveillance implications. “On top of that, you’ve got the surveillance issues where AI’s gonna know everything about you. It’s gonna be your kind of personal assistant. And so it’s kind of the perfect tool for the government to monitor and control you. And to me that is by far the biggest risk of AI. And that’s the thing we should be working towards preventing.”
He concluded with a critique of current regulatory approaches: “The problem is a lot of these regulations that are being whipped up by these fearmongering techniques—they’re actually empowering the government to engage in this type of control that I think we should all be very afraid of actually.”
Sacks’s comments reflect a broader debate within the tech industry about AI governance and bias. His reference to Google’s Gemini model alludes to the February 2024 controversy when the AI image generator produced historically inaccurate depictions—including generating images of diverse Nazi soldiers and US founding fathers—leading Google to temporarily suspend the feature. The incident sparked intense discussion about overcorrection in AI training and the risks of embedding ideological preferences into foundational models.
The concerns Sacks raises resonate with ongoing debates about content moderation, algorithmic transparency, and government involvement in AI development. Critics of the Biden administration’s October 2023 Executive Order on AI argued it prioritized “AI safety” frameworks that could enable regulatory capture and censorship under the guise of preventing harm. Meanwhile, advocates for AI safety regulations argue that without government oversight, powerful AI systems could be deployed irresponsibly by private actors.
As AI systems increasingly mediate access to information—from search engines to chatbots to personalized news feeds—the question of who controls these systems and what values they encode becomes paramount. Sacks’s warning suggests that the battle over AI isn’t merely technical but fundamentally political: a struggle over who gets to shape the reality that AI systems present to billions of users worldwide.