The ink was barely dry on Cloudflare’s announcement that it had rebuilt the most popular web framework in existence using AI — in one week, for $1,100 — but Vercel’s CEO has fired back.
On X, Guillermo Rauch — chief executive of Vercel, the company that created and maintains Next.js, the framework Cloudflare just reimplemented — announced that his team had identified seven security vulnerabilities in vinext, Cloudflare’s new AI-generated project: two rated critical, two high, two medium, and one low. He noted that the vulnerabilities had been responsibly disclosed to Cloudflare, and offered the help of Vercel’s security and framework teams. Then he dropped the phrase that will define this story: “vibe-coded framework.”
Just hours earlier, Rauch had shared a guide on X titled “Migrate to Vercel from Cloudflare.”
What “Vibe Coding” Means — and Why It’s a Loaded Term
“Vibe coding” is a term that emerged in early 2025 to describe the practice of building software by describing what you want in natural language and letting an AI write the code — without deeply reviewing or fully understanding what it produces. The original framing was playful and honest: it’s fast, it feels productive, and it’s great for prototypes. It is also, increasingly, well-documented as a security liability.
Research published in January 2026 by security startup Tenzai found that applications built by five major AI coding agents — Cursor, Claude Code, OpenAI Codex, Replit, and Devin — all contained significant vulnerabilities. The agents performed well on well-known vulnerability classes like SQL injection and cross-site scripting, but poorly on authorization logic and business logic. Common flaws included susceptibility to server-side request forgery, broken authentication flows, and missing security headers — the kinds of subtle mistakes that don’t show up immediately but create real exposure in production.
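To make one of those classes concrete: server-side request forgery typically appears when a server fetches a user-supplied URL verbatim, letting callers reach internal services through it. The sketch below is purely illustrative (the `isSafeOutboundUrl` helper is hypothetical, not from the Tenzai study or any framework) and shows the minimal shape of a guard:

```typescript
// Illustrative SSRF guard (hypothetical helper, not from any cited codebase).
// The vulnerable pattern looks like:
//   app.get("/preview", (req, res) => fetch(req.query.url).then(...))
// which lets a caller make the server request internal endpoints.

function isSafeOutboundUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable absolute URL at all
  }
  // Only plain web protocols; no file:, gopher:, etc.
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;

  // Block obvious internal targets. Real deployments also need to check
  // the resolved IP after DNS lookup to stop rebinding tricks.
  const privatePatterns = [
    /^localhost$/i,
    /^127\./,
    /^10\./,
    /^192\.168\./,
    /^172\.(1[6-9]|2\d|3[01])\./,
    /^169\.254\./, // link-local, incl. cloud metadata at 169.254.169.254
  ];
  return !privatePatterns.some((p) => p.test(url.hostname));
}
```

The point is not this specific blocklist — it is that the check has to exist at all, and allowlists of known-good hosts are stronger than blocklists like this one.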
By labeling vinext a “vibe-coded framework,” Rauch is invoking all of that baggage, and applying it not to a weekend side project, but to infrastructure that Cloudflare is actively encouraging customers to run in production.
It’s a carefully chosen attack.
What Cloudflare Actually Built — and How
To understand whether the criticism is fair, it’s worth revisiting what Cloudflare actually did and how it did it. vinext is not a chatbot or an internal dashboard. It’s a reimplementation of Next.js — one of the most complex and widely used frameworks in web development — built on top of Vite, a different underlying build tool. The goal was to solve a real and longstanding problem: Next.js is tightly coupled to Vercel’s own infrastructure, making it painful to deploy on competing platforms like Cloudflare Workers.
One engineer directed AI through more than 800 coding sessions. The project has 1,700+ automated unit tests, 380 end-to-end browser tests, full TypeScript type checking, and continuous integration running every check on every code change. Many of the tests were ported directly from Next.js’s own test suite. The framework covers 94% of the Next.js API surface. It is already running in production on CIO.gov.
That is not what most people picture when they hear “vibe coding.” Rauch’s framing glosses over the structured, test-driven process Cloudflare used — and the meaningful distinction between AI-assisted engineering with rigorous guardrails and unreviewed AI output shipped directly to users.
That said, the vulnerabilities are real. Seven is not a trivial number for a framework not yet a week old, and the two critical-rated flaws in particular demand a straightforward response from Cloudflare. An experimental label in a README does not protect production users.
A Corporate Rivalry With Deep Roots
It would be naive to read this episode purely as a public-interest security disclosure. The conflict between Cloudflare and Vercel has been building for years, rooted in a structural tension: Vercel makes money by being the best — and often the only fully-featured — place to deploy Next.js. Cloudflare, as one of the world’s largest network infrastructure providers, wants developers building on its platform instead.
This is not the first time the two companies have clashed over Next.js security. Last year, a vulnerability in Next.js allowed attackers to bypass authentication checks enforced in its middleware layer. Cloudflare stepped in with a security fix after what it described as inadequate communication from Vercel, which led to the two companies’ CEOs exchanging barbs publicly.
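The structural lesson from that incident is that authentication enforced only at a middleware layer is a single point of failure: if the middleware can be skipped, every route behind it is exposed. A minimal defense-in-depth sketch, assuming a hypothetical `requireSession` helper (this is not Vercel’s or Cloudflare’s actual code), re-checks the session inside the handler:

```typescript
// Hypothetical sketch: a middleware-style gate plus an in-handler re-check,
// so a bypassed middleware layer does not expose the route outright.

type Session = { userId: string };

// Stand-in for real session verification (e.g. cookie or JWT validation).
function requireSession(headers: Map<string, string>): Session | null {
  const token = headers.get("authorization");
  return token === "Bearer valid-token" ? { userId: "u1" } : null;
}

// Middleware layer: the first line of defense.
function middleware(headers: Map<string, string>): "next" | "redirect" {
  return requireSession(headers) ? "next" : "redirect";
}

// Route handler: verifies the session itself instead of trusting that
// middleware ran — which is what saves you if middleware is bypassed.
function protectedHandler(
  headers: Map<string, string>,
): { status: number; body: string } {
  const session = requireSession(headers);
  if (!session) return { status: 401, body: "unauthorized" };
  return { status: 200, body: `hello ${session.userId}` };
}
```

The duplication is deliberate: the handler-level check costs one function call and turns a middleware bypass from a full compromise into a non-event.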
vinext, then, is not just a technical project. It is a direct challenge to Vercel’s business model. If developers can run a drop-in Next.js replacement on Cloudflare Workers with better build times, smaller bundles, and tighter platform integration, that is revenue leaving Vercel. The “Migrate to Vercel from Cloudflare” guide published alongside Rauch’s security post makes the strategic intent explicit: turn a security disclosure into a customer acquisition opportunity.
Whether or not that’s appropriate behavior — using a genuine security finding as competitive ammunition — is a judgment call. What’s undeniable is that both moves are happening at once, and the timing is not a coincidence.
The Real Question AI Has Now Created
The exchange between these two companies is a preview of a debate that is going to become much larger. AI is compressing the time and cost required to build complex software dramatically. A one-week, $1,100 framework is the beginning of that story, not the end. As those barriers fall, the question of what standards AI-generated code should be held to — and who is responsible when it fails — becomes urgent.
Traditional software development offers no guarantees either. Human engineers introduce security vulnerabilities constantly; the history of the web is a long catalog of authentication bypasses, injection flaws, and logic errors written by professionals with years of experience. Indeed, one developer has said they reported the same bug in Next.js itself two years before Vercel flagged it in vinext. The difference, critics argue, is that AI makes it possible to ship large amounts of unreviewed code very quickly, and that speed can outrun the security review process.
The deeper problem with vibe coding is not that AI is uniquely bad at writing secure code — it’s that it lets applications be built and deployed by people who may not recognize the vulnerabilities they’re shipping. Cloudflare’s engineer is not in that category. But the term, and the criticism that comes with it, will follow AI-built software for a long time regardless.
For Cloudflare, the path forward is clear enough: fix the vulnerabilities, be transparent about the timeline, and lean into the fact that the responsible disclosure process worked as intended. For the broader industry, the message is more complicated: AI can build faster than ever, security review has to keep pace, and when it doesn’t, there will always be a competitor ready to point it out — with a migration guide already loaded and waiting.