Amazon Requires Senior Engineers To Sign Off On AI-Assisted Changes Made By Junior And Mid-level Engineers After AI-Related Outage

Even as AI is being deployed widely across startups and corporations, some companies are deciding that there needs to be more oversight of AI-assisted coding.

Amazon has internally announced that junior and mid-level engineers will now need senior engineers to sign off on any changes made with the assistance of AI coding tools, the Financial Times reports. The move follows an outage that was reportedly caused in part by one of the company's own AI coding tools.

According to the report, Amazon's ecommerce business summoned a large group of engineers to a meeting to conduct a "deep dive" into a spate of recent outages. A briefing note for the meeting listed "high blast radius," "Gen-AI assisted changes," and "novel GenAI usage for which best practices and safeguards are not yet fully established" among the contributing factors. Dave Treadwell, a senior vice-president at the company, reportedly told engineers that the availability of the site and related infrastructure "has not been good recently," and that the new sign-off requirement for junior and mid-level engineers on AI-assisted changes would be mandatory going forward.

A Pattern of Incidents, Not a One-Off

The ecommerce outage review didn't happen in a vacuum. It comes on the heels of incidents at Amazon Web Services, where at least two production outages have been linked to AI coding tools.

The most significant of these occurred in mid-December 2025, when AWS engineers allowed the company’s Kiro AI coding assistant — an agentic tool capable of taking autonomous actions on behalf of users — to make changes to a live system. Kiro determined that the best course of action was to delete and recreate the entire environment, triggering a 13-hour service interruption affecting AWS Cost Explorer in one of the company’s mainland China regions. A second, separate incident has also been linked to AI tool errors, this time involving Amazon Q Developer, though Amazon maintained that it did not affect any customer-facing AWS services.

Those outages were significant enough to prompt some AWS employees to raise internal doubts about the pace at which the company was rolling out AI coding assistants — particularly agentic ones capable of acting with minimal human oversight. One senior AWS employee described both incidents to the Financial Times as “small but entirely foreseeable.”

Amazon Pushes Back — But Quietly Tightens Controls

Amazon has not conceded that the AI tools themselves were at fault. The company characterized the December AWS incident as a “coincidence that AI tools were involved,” arguing that the same issues could have occurred with any developer tool or manual action. “In both instances, this was user error, not AI error,” the company said, noting that the engineer involved had “broader permissions than expected — a user access control issue, not an AI autonomy issue.”

By default, Kiro requests human authorization before taking any action. In this case, however, the AI was treated as an extension of the operator and inherited that engineer’s elevated permissions, bypassing the standard two-person sign-off requirement. The result was a 13-hour disruption.
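The failure mode described above — an agent inheriting its operator's elevated credentials rather than running under its own narrowly scoped role — can be illustrated with a minimal sketch. This is a hypothetical model for exposition only: the function name, permission strings, and sign-off logic are illustrative assumptions, not Amazon's actual access-control system.

```python
# Hypothetical sketch: an AI agent that inherits its operator's elevated
# permissions versus one granted its own scoped role. All names and
# permission strings here are illustrative, not Amazon's real model.

OPERATOR_PERMS = {"read", "write", "delete_environment"}  # elevated human role
AGENT_SCOPED_PERMS = {"read", "write"}                    # agent gets strictly less

def authorize(action: str, perms: set, require_second_signoff: bool = True) -> bool:
    """Allow an action only if the permission set covers it; destructive
    actions additionally require a second human sign-off."""
    if action not in perms:
        return False
    if action == "delete_environment" and require_second_signoff:
        # In the inherited-credentials failure mode this gate is skipped,
        # because the agent is treated as the operator themselves.
        return False  # blocked until a second engineer approves
    return True

# Agent inheriting the operator's elevated perms, with the sign-off bypassed:
print(authorize("delete_environment", OPERATOR_PERMS,
                require_second_signoff=False))   # True: destructive action proceeds

# Agent running under its own scoped role:
print(authorize("delete_environment", AGENT_SCOPED_PERMS))  # False: not permitted at all
```

The point of the sketch is that two independent safeguards failed at once: the agent held permissions it didn't need, and the two-person check was bypassed because the agent was indistinguishable from its operator.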

Amazon framed the review of website availability as “part of normal business” and said it aims for continual improvement. But the new mandatory sign-off policy for junior and mid-level engineers tells a more cautious story underneath that messaging.

The Broader Question the Industry Needs to Ask

Amazon is not alone in grappling with this tension. AI agents are now writing more than 50% of code at Coinbase, Google has said more than 30% of its code is written by AI, and companies from Salesforce to Klarna have paused software engineer hiring entirely, citing productivity gains from AI. Some, like Gumroad’s founder, have gone further still — saying they’re no longer hiring even senior software engineers because AI has replaced the need.

Meanwhile, Amazon itself has said it expects its corporate workforce to shrink over the coming years due to AI-driven efficiencies — and followed through on that with a round of 14,000 corporate layoffs, citing AI as the most transformative technology since the internet.

The irony is hard to miss. A company aggressively cutting headcount and banking on AI to do more of the work is simultaneously discovering that AI-assisted work, without proper human oversight, can bring down production systems.

The question is not simply whether AI tools make more mistakes than humans. As we noted in our earlier coverage of the AWS outages, the more important question is whether the nature of those mistakes is fundamentally harder to predict and contain. A developer typo is a known category of failure. An AI agent that confidently and autonomously decides to delete and recreate a live production environment is a different category entirely.

Amazon’s new sign-off requirement is a pragmatic short-term fix. It imposes a human check on the most likely source of unchecked AI action — the less experienced engineers who may not fully grasp the downstream consequences of what they’re approving. But it also raises a deeper question about the governance model the industry is building toward. If agentic AI tools — at least for now — need senior engineers to supervise junior engineers using them, that’s not exactly the frictionless productivity revolution that the sales pitches promise.
