GPT-5.2 In Cursor Built An Entire Browser In 1 Week Of Uninterrupted Use: Cursor CEO Michael Truell

It’s barely been a year since the term ‘vibe coding’ was coined, but people are now vibe coding entire browsers.

In a development that seems to push the boundaries of what AI-assisted programming can achieve, Cursor CEO Michael Truell announced that his team used GPT-5.2 to build a functional web browser from scratch in just one week of uninterrupted operation. The result is a staggering 3 million-plus lines of code spanning thousands of files, all generated through the AI-powered coding platform.

The browser represents a significant technical achievement in autonomous AI development. Rather than wrapping an existing engine, the project implemented a rendering engine from scratch in Rust, complete with HTML parsing, CSS cascade, layout algorithms, text shaping, painting, and even a custom JavaScript virtual machine. According to Truell, the browser “kind of works”: it remains far from parity with established engines like WebKit or Chromium, but the team was “astonished that simple websites render quickly and largely correctly.”
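To make that scope concrete, here is a minimal, hypothetical sketch in Rust of the pipeline stages such an engine strings together: parse HTML into a DOM, resolve styles, lay out boxes, then paint. The types and function bodies are illustrative stand-ins, not the Cursor project's actual code, and a real engine adds text shaping, compositing, and the JavaScript VM on top of this skeleton.

```rust
// Illustrative sketch only: hypothetical types and stubs, not the project's code.

struct DomNode {
    tag: String,
    children: Vec<DomNode>,
}

struct Style {
    font_size_px: f32,
}

struct LayoutBox {
    width: f32,
    height: f32,
}

// Stage 1: HTML parsing: turn markup into a DOM tree (heavily stubbed here).
fn parse_html(_input: &str) -> DomNode {
    DomNode {
        tag: "body".to_string(),
        children: vec![DomNode { tag: "p".to_string(), children: Vec::new() }],
    }
}

// Stage 2: CSS cascade: resolve a computed style for a node (stubbed).
fn compute_style(_node: &DomNode) -> Style {
    Style { font_size_px: 16.0 }
}

// Stage 3: layout: assign boxes; real engines implement block and inline
// flow, floats, flexbox, and many more layout modes.
fn layout(node: &DomNode, style: &Style, viewport_width: f32) -> LayoutBox {
    // One line of text per node, as a crude placeholder for real layout.
    let line_count = 1 + node.children.len();
    LayoutBox {
        width: viewport_width,
        height: style.font_size_px * line_count as f32,
    }
}

// Stage 4: paint: emit draw commands for a rasterizer (here, just a log line).
fn paint(node: &DomNode, bbox: &LayoutBox) {
    println!("paint <{}> as {} x {} box", node.tag, bbox.width, bbox.height);
}

fn main() {
    let dom = parse_html("<body><p>hello</p></body>");
    let style = compute_style(&dom);
    let bbox = layout(&dom, &style, 800.0);
    paint(&dom, &bbox);
}
```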

The experiment demonstrates both the potential and current limitations of AI-driven software development. Creating a web browser is among the most complex software engineering challenges, requiring deep understanding of web standards, performance optimization, memory management, and countless edge cases. That an AI system could produce even a partially functional browser in a week suggests we’re entering new territory in automated code generation.

The project used OpenAI’s GPT-5.2 model, which is now available in Cursor. That the system ran “uninterrupted for one week” indicates a level of autonomous operation well beyond simple code completion, pointing toward AI agents capable of sustained, complex software development tasks. Benchmarks are beginning to measure exactly this: the METR Time Horizons benchmark estimates the length of task, measured in the time it takes a human to complete it, that an AI can finish with a 50% success rate. Developed by the non-profit research institute METR (Model Evaluation and Threat Research), the approach provides a practical, real-world metric for AI capability grounded in “man-hours” rather than abstract test scores. By this measure, producing even a partially functional browser in a week represents a significant compression of human effort; a comparable build would typically take a team of experienced engineers months or years.
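As a rough illustration of the time-horizon idea, METR's published methodology fits a curve of model success probability against task length measured in human time and reads off the length at which that probability crosses 50%. The Rust sketch below shows the shape of that calculation on made-up task outcomes; the numbers and the simple gradient-descent fit are hypothetical stand-ins, not METR's dataset or code.

```rust
// Hypothetical numbers only: not METR's dataset or implementation.

fn sigmoid(z: f64) -> f64 {
    1.0 / (1.0 + (-z).exp())
}

fn main() {
    // (minutes a human needs for the task, whether the model succeeded)
    let tasks: [(f64, bool); 8] = [
        (2.0, true), (5.0, true), (10.0, true), (30.0, true),
        (60.0, false), (120.0, true), (240.0, false), (480.0, false),
    ];

    // Fit p(success) = sigmoid(a + b * ln(minutes)) with plain gradient descent.
    let (mut a, mut b) = (0.0_f64, 0.0_f64);
    let lr = 0.05;
    for _ in 0..50_000 {
        let (mut grad_a, mut grad_b) = (0.0, 0.0);
        for &(minutes, success) in &tasks {
            let x = minutes.ln();
            let err = sigmoid(a + b * x) - if success { 1.0 } else { 0.0 };
            grad_a += err;
            grad_b += err * x;
        }
        a -= lr * grad_a / tasks.len() as f64;
        b -= lr * grad_b / tasks.len() as f64;
    }

    // The 50% time horizon is where a + b * ln(t) = 0, i.e. t = exp(-a / b).
    let horizon_minutes = (-a / b).exp();
    println!("estimated 50% time horizon: {:.0} minutes", horizon_minutes);
}
```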

However, Truell’s candid acknowledgment that the browser “still has issues” and lacks feature parity with mature browsers underscores the gap between AI-generated code and production-ready software. Web browsers are the product of decades of refinement by teams of expert engineers, and while AI can now scaffold the basic architecture, the polish and reliability required for real-world use remain human domains for now. Whether this represents the future of programming or simply an impressive but ultimately impractical demonstration remains to be seen. What’s certain is that the line between human-written and AI-generated software continues to blur, and experiments like this one are mapping the contours of what’s possible in this rapidly evolving landscape.
