Claude Code has been writing nearly 100% of its own code, but yesterday's leak of its codebase had nothing to do with AI.
A routine update to Claude Code on the public npm registry accidentally included a 59.8 MB JavaScript source map file — an internal debugging artifact that pointed directly to a downloadable archive of the tool’s full source code. Within hours, the ~512,000-line TypeScript codebase was mirrored across GitHub and dissected by thousands of developers. The irony was hard to miss: a product written entirely by AI had been exposed by a very human mistake.
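For readers unfamiliar with the mechanism: a JavaScript source map is just a JSON file, and its optional `sourcesContent` array can embed the complete original text of every file that went into a bundle. Shipping one publicly can therefore amount to shipping the source itself. A minimal sketch of the recovery (the file names and contents here are illustrative, not Anthropic's actual artifact):

```python
import json

# A source map is JSON. The optional "sourcesContent" field carries the
# full original source of every bundled file, index-aligned with the
# "sources" array of file paths. (Example map is invented for illustration.)
source_map = json.loads("""
{
  "version": 3,
  "sources": ["src/cli.ts", "src/agent.ts"],
  "sourcesContent": ["// original TypeScript of cli.ts ...",
                     "// original TypeScript of agent.ts ..."],
  "mappings": "AAAA"
}
""")

# Recover every original file the map embeds.
recovered = dict(zip(source_map["sources"],
                     source_map.get("sourcesContent", [])))

for path, text in recovered.items():
    print(f"{path}: {len(text)} chars of original source")
```

Tools that do exactly this against published `.map` files are widely available, which is why a 59.8 MB map in a public npm package is effectively a source release.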

Boris Cherny, who created Claude Code at Anthropic, addressed the incident directly on X. “It was human error,” he said. “Our deploy process has a few manual steps, and we didn’t do one of the steps correctly. We have landed a few improvements and are digging in to add more sanity checks. Like with any other incident, the counter-intuitive answer is to solve the problem by finding ways to go faster, rather than introducing more process. In this case more automation & Claude checking the results.”
“This was a release packaging issue caused by human error, not a security breach. No sensitive customer data or credentials were involved or exposed,” he added.
Cherny was also pointed in his response about accountability. “Mistakes happen. As a team, the important thing is to recognize it’s never an individual’s fault — it’s the process, the culture, or the infra. In this case, there was a manual deploy step that should have been better automated. Our team has made a few improvements to the automation for next time, a couple more on the way.”
When asked whether anyone was fired over the incident, Cherny was unequivocal:
“No one was fired. It was an honest mistake, it happens.”
The philosophy — blame the system, not the person — is fairly standard in mature engineering cultures, and Cherny’s response leans into it cleanly. His proposed fix is also characteristically Anthropic: more automation, and Claude itself checking the results. The same AI whose codebase was just exposed will now be part of the safeguard against future exposure.
The leak itself had strategic consequences beyond embarrassment. The source code exposed unreleased feature flags, internal model codenames — including “Capybara” for a Claude 4.6 variant and the forthcoming “Numbat” — and details of an autonomous background agent mode called KAIROS. It also revealed an “Undercover Mode” designed to prevent Anthropic’s AI from leaving fingerprints on public open-source repositories: a system to stop Claude from leaking internal information, shipped inside a release that leaked internal information.
The timing made things worse. This was Anthropic’s second accidental exposure in a week, following a leaked draft blog post about its upcoming model. The company is also operating at a pace that strains its infrastructure. A user on X asked Cherny whether “human error” might also explain Claude’s frequent downtime lately. “Working around the clock to fix this,” Cherny replied. “It’s crazy to grow this fast, we’re scaling up every service as quickly as possible. Thanks for bearing with us. Improvements landing daily and more on the way,” he added.
Anthropic is reportedly running at a $19 billion annualized revenue run rate as of early 2026. Claude Code alone crossed $500 million in annualized revenue and saw usage grow more than tenfold in its first year. That kind of hypergrowth puts pressure on every layer of the stack, and manual deploy steps become liabilities fast.
Cherny’s broader point — that the answer to incidents is more speed through better automation, not more bureaucratic process — is a bet on the same tool that caused the problem in the first place. Given that Claude Code now writes nearly 100% of its own code, that’s either the most logical solution available or the most on-brand one.