OpenClaw users are discovering that a growing number of AI model providers are restricting how their subscriptions can be used by agents.
Z.AI, the Chinese AI company whose GLM models power a range of coding tools including Claude Code, Kilo Code, Cline, and OpenClaw, has updated its usage policy to crack down on subscribers who use their GLM Coding Plan subscriptions for non-coding purposes. The change is drawing attention in developer circles — and a pointed response from OpenClaw creator Peter Steinberger.
What Changed
Z.AI’s GLM Coding Plan is marketed as a subsidized subscription for AI-assisted development. The updated policy makes explicit what was previously implicit: the plan is for coding workflows only. If Z.AI’s systems detect use outside those scenarios, subscribers now face aggressive temporary throttling. Three or more violations can result in a permanent account ban. Users have been reporting a wave of 1302 and 1303 rate limit errors tied directly to the crackdown.
The policy warning shown to affected users is blunt: “Violating the Usage Rules three or more times will result in an account ban.”
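For developers hitting these limits, the practical consequence is that clients need to treat throttling as an expected response rather than a hard failure. The sketch below shows one common way to do that: retrying with exponential backoff when a throttle code comes back. Note that the error codes and the response shape are assumptions based on the user reports above, not a documented Z.AI API contract, and `fake_request` is a stand-in stub, not a real client.

```python
import random
import time

# Rate-limit codes users have reported (1302, 1303); treating them
# as a retryable set is an assumption, not documented API behavior.
THROTTLE_CODES = {1302, 1303}

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a request while it returns a throttle code,
    sleeping with exponential backoff plus jitter between attempts."""
    for attempt in range(max_retries):
        response = request_fn()
        if response.get("error_code") not in THROTTLE_CODES:
            return response
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError("still throttled after retries")

# Demo with a stub that is throttled twice, then succeeds.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        return {"error_code": 1302}
    return {"error_code": None, "text": "ok"}

result = call_with_backoff(fake_request, base_delay=0.01)
print(result["text"])
```

Backoff only smooths over temporary throttling, of course; under the new policy, repeated violations escalate toward a ban regardless of how gracefully the client retries.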

Steinberger Weighs In
Peter Steinberger, who built OpenClaw before joining OpenAI to lead personal agents development, shared the policy update on X with a terse observation: “Interesting shift. These highly subsidized subs are out there to get your code to improve their models. If you use AI for things useful to you, but not code, you are not valuable to them.”
Steinberger seems to be suggesting that coding plan subscriptions are priced aggressively because they serve a dual purpose for AI labs: delivering a useful tool to developers while generating the kind of high-quality, structured code interactions that improve model training. Non-coding usage — browsing, email triage, general Q&A — still consumes compute but delivers far less valuable training signal.
A Familiar Pattern
Z.AI’s move follows a wave of similar restrictions from larger players. Anthropic moved first, implementing client fingerprinting to block OpenClaw and third-party tools from using Claude Pro and Max subscriptions, citing unsustainable resource consumption and the inability to optimize for third-party harnesses the way it does for its own tools. Google’s Antigravity followed days later, citing “massive increases in malicious usage” degrading service quality for legitimate users.
Z.AI’s enforcement takes a different approach — rather than blocking specific tools outright, it targets the nature of the usage itself. The effect for OpenClaw users is similar: a flat-rate subscription that once powered a wide range of autonomous agent tasks is now actively policed for scope.
The Economics Behind It
The logic is straightforward and increasingly shared across the industry. Subsidized coding subscriptions make economic sense only when usage generates returns — either through model improvement or by keeping developers inside a lab’s ecosystem. An OpenClaw instance using a GLM coding plan to manage a calendar or draft emails produces neither. It simply consumes compute at a loss.
Steinberger’s framing on X captures what many developers had quietly understood: the cheap access was never really about the user’s convenience. It was about what the user’s activity was worth to the provider. When that alignment breaks down, the terms change.
What distinguishes Z.AI’s approach is the graduated enforcement model — throttling before banning, with a stated path for appeals. That is a softer landing than the near-immediate hard bans seen elsewhere, but the destination is the same: AI labs are drawing lines around what their subsidized plans are actually for, and autonomous general-purpose agent use is increasingly on the wrong side of those lines.
For OpenClaw users who built workflows on cheap coding plan access, the window is narrowing across every major provider.