The AI wars are as much about technology as about rival companies looking to mark their territory.
Google Antigravity, the company’s AI-powered IDE launched in November 2025, has moved to block a significant portion of its user base following what it describes as widespread malicious misuse of its backend infrastructure. The crackdown has caught OpenClaw, the viral open-source AI agent created by Peter Steinberger, squarely in its crosshairs, and it comes just days after Anthropic moved against the same tool.

Varun Mohan, who leads Antigravity at Google, announced the restrictions in a public post, citing a “massive increase in malicious usage” that had “tremendously degraded the quality of service” for legitimate users.
“We’ve been seeing a massive increase in malicious usage of the Antigravity backend that has tremendously degraded the quality of service for our users. We needed to find a path to quickly shut off access to these users that are not using the product as intended. We understand that a subset of these users were not aware that this was against our ToS and will get a path for them to come back on but we have limited capacity and want to be fair to our actual users,” he said on X.
Mohan was careful to clarify that only Antigravity access has been affected, leaving other Google and Google AI services untouched, and promised a path back for users who violated terms unknowingly. Still, the speed and breadth of the ban left many developers frustrated.
Steinberger, who joined OpenAI after creating OpenClaw, was blunt in his reaction. “Pretty draconian from Google,” he wrote. “Be careful out there if you use Antigravity. I guess I’ll remove support.” He pointedly contrasted Google’s approach with Anthropic’s, noting that Anthropic “pings me and is nice about issues. Google just… bans?” Anthropic had earlier asked Steinberger to change Clawdbot’s name because it was a play on its AI product, Claude.
A Pattern Emerging Across Labs
What makes the Antigravity ban particularly notable is that it didn’t happen in isolation. Just three days earlier, on February 20, 2026, Anthropic had updated its terms of service to explicitly prohibit the use of OAuth tokens from Claude Free, Pro, or Max accounts in third-party tools like OpenClaw. Anthropic cited “token arbitrage” and security risks, pointing to widespread unauthorized use of consumer subscriptions to power high-throughput autonomous agents — precisely the kind of workload OpenClaw is designed to run.
The timing of both actions, coming in quick succession from two of the industry’s biggest AI labs, has not gone unnoticed. OpenClaw was recently acquired by OpenAI, and many in the developer community are drawing a straight line between that acquisition and the sudden hostility from rival labs. The theory is straightforward: why help a competitor’s flagship product thrive on your infrastructure?
This would not be the first time such a dynamic has played out. When Windsurf, the AI coding assistant, was rumored to be in acquisition talks with OpenAI, Anthropic moved to cut off its models’ access to the product, a decision that raised uncomfortable questions about how AI labs use model access as a competitive lever.
What Is OpenClaw?
OpenClaw, formerly known as Clawdbot and Moltbot, is a self-hosted, open-source AI agent that runs locally on a user’s machine and connects to messaging platforms like WhatsApp and Slack. Unlike conventional AI assistants, it doesn’t just respond to prompts; it executes real-world actions, browsing the web, managing files, sending emails, and scheduling tasks, all via a “heartbeat” system that keeps it running around the clock.
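OpenClaw’s internals aren’t documented here, but the “heartbeat” pattern described above is straightforward to sketch. The TypeScript below is a minimal, hypothetical illustration; the names (`runAgentTick`, `HEARTBEAT_MS`) and the one-minute interval are assumptions, not OpenClaw’s actual code.

```typescript
// Hypothetical sketch of a "heartbeat" loop for a long-running local agent.
// Nothing here is OpenClaw's real implementation; names and the interval
// are illustrative assumptions.

const HEARTBEAT_MS = 60_000; // assumed wake-up interval: once a minute

async function runAgentTick(): Promise<void> {
  // On each tick, a real agent would check message inboxes, due scheduled
  // tasks, and pending skill invocations, then act on whatever is ready
  // (browse, send email, manage files, and so on).
}

async function heartbeat(): Promise<void> {
  // A sequential loop instead of setInterval, so a slow tick can never
  // overlap the next one.
  while (true) {
    try {
      await runAgentTick();
    } catch (err) {
      // An always-on agent must survive individual tick failures.
      console.error("tick failed:", err);
    }
    await new Promise((resolve) => setTimeout(resolve, HEARTBEAT_MS));
  }
}

void heartbeat();
```

The key design point is that the agent acts on its own schedule rather than waiting for a prompt, which is also what makes it capable of the sustained, autonomous backend traffic at issue in these crackdowns.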
The project went viral for its ambition to function as a “digital employee,” powered by customizable AgentSkills add-ons that can handle everything from calendar management to, famously, car price negotiations. Because it runs locally, it keeps user data on-device — a privacy-first design that attracted a large and technically sophisticated user base. Security researchers, however, have flagged risks: an agent with deep system access and the ability to install third-party skills is a significant attack surface if misconfigured.
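The article doesn’t specify what an AgentSkill looks like in code, but a plugin system of this kind typically reduces to a small interface the agent can discover and invoke. The sketch below is purely hypothetical; the `AgentSkill` shape and the example skill are assumptions for illustration.

```typescript
// Hypothetical shape of an AgentSkills-style add-on. The real OpenClaw
// skill API is not shown in this article; this interface and example
// are assumptions for illustration only.

interface AgentSkill {
  name: string;
  // Surfaced to the model so it can decide when the skill applies.
  description: string;
  run(input: string): Promise<string>;
}

// Example: a trivial reminder skill standing in for calendar management.
const remindSkill: AgentSkill = {
  name: "remind",
  description: "Schedule a one-off reminder, e.g. 'in 10 minutes: stretch'",
  async run(input: string): Promise<string> {
    return `Reminder scheduled: ${input}`;
  },
};

// A registry the agent could consult when routing an incoming message.
const skills = new Map<string, AgentSkill>([[remindSkill.name, remindSkill]]);
```

A design like this also makes the security concern above concrete: any third-party skill runs with the same system access as the agent itself.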
It is that same power — the ability to route high-volume, autonomous requests through AI backends — that appears to have made it a target for infrastructure crackdowns.
The Bigger Picture
For developers who have built workflows around OpenClaw, the message from the past week is stark: the AI lab ecosystem is fracturing along competitive lines, and third-party tools that sit in the middle are increasingly vulnerable. Whether you use Anthropic’s Claude or Google’s Gemini-powered Antigravity backend, the terms of access can change overnight.
Steinberger’s comment about Anthropic’s comparatively communicative approach may offer a small silver lining, but with both labs now having acted against OpenClaw within days of each other, the practical effect for users is the same regardless of style. The open-source agent that promised to give anyone a 24/7 AI employee is finding that, after its OpenAI acquisition, its relationships with the major AI infrastructure providers are anything but guaranteed.