The effects of the AI-driven coding boom are showing up in places one wouldn’t immediately expect.
GitHub, the world’s dominant code hosting platform, has been hit by a series of outages in recent months — and the root cause isn’t a cyberattack or a botched update. It’s the sheer weight of AI agents writing, pushing, and merging code at a scale the platform was never built to handle.

A Traffic Explosion Unlike Anything Before
The numbers are staggering. GitHub COO Kyle Daigle put it plainly in a recent post: “There were 1 billion commits in 2025. Now, it’s 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won’t.)” GitHub Actions, the platform’s automation engine, has gone from 500 million minutes per week in 2023 to 1 billion in 2025 — and has since hit 2.1 billion minutes in a single week.
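A quick back-of-the-envelope check shows how those quoted figures fit together (the linear extrapolation is the article’s framing, not an official forecast):

```python
# Sanity-check the growth figures quoted above.
weekly_commits = 275_000_000            # commits per week, early 2026
annualized = weekly_commits * 52        # naive linear extrapolation

print(f"{annualized / 1e9:.1f} billion commits/year if growth stays linear")

# GitHub Actions: 0.5B minutes/week (2023) -> 2.1B minutes in a peak week.
actions_growth = 2.1e9 / 0.5e9
print(f"Actions minutes grew {actions_growth:.1f}x over that span")
```

275 million per week does indeed annualize to roughly 14.3 billion, matching Daigle’s 14-billion figure.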
This isn’t more developers signing up. It’s AI coding agents — tools like Claude Code, Cursor, and OpenAI Codex — autonomously cloning repositories, opening pull requests, running CI pipelines, and iterating at machine speed. AI-agent pull requests alone jumped from 4 million in September 2025 to 17 million in March 2026, a more than 4x increase in six months.
GitHub website and app data from early 2026 show code pushes in the US growing 35% year-on-year — a metric that was essentially flat just two years prior.

The Infrastructure Cracks
GitHub VP of Engineering Vlad Fedorov acknowledged in an April 28 blog post that the company had begun a plan in October 2025 to increase capacity by 10X — only to realize by February 2026 that it needed to design for 30X today’s scale. The company attributes this to a “rapid change in how software is being built,” noting that “since the second half of December 2025, agentic development workflows have accelerated sharply.”
The compounding nature of the problem is significant. As Fedorov wrote, a single pull request can touch “Git storage, mergeability checks, branch protection, GitHub Actions, search, notifications, permissions, webhooks, APIs, background jobs, caches, and databases.” At scale, he noted, “small inefficiencies compound: queues deepen, cache misses become database load, indexes fall behind, retries amplify traffic.”
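Fedorov’s point about retries amplifying traffic can be sketched with a toy model (the fan-out, failure-rate, and retry numbers below are illustrative assumptions, not GitHub’s actual policy):

```python
def effective_requests(base_requests: int, fanout: int,
                       failure_rate: float, max_retries: int) -> float:
    """Toy model: each incoming request fans out to `fanout` downstream
    calls, and each failed call is retried up to `max_retries` times.
    Expected attempts per call form a geometric series in the failure rate."""
    attempts_per_call = sum(failure_rate ** k for k in range(max_retries + 1))
    return base_requests * fanout * attempts_per_call

# Healthy system: a 1% failure rate barely matters (~1% extra traffic).
print(effective_requests(1_000, fanout=10, failure_rate=0.01, max_retries=3))

# Degraded system: at 50% failures, retries alone add ~88% more downstream
# traffic -- which deepens queues and pushes the failure rate higher still.
print(effective_requests(1_000, fanout=10, failure_rate=0.5, max_retries=3))
```

This is the feedback loop behind “retries amplify traffic”: the worse the system performs, the more load the retries generate.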
February 2026 alone saw 37 separate platform incidents. In the first two days of April, GitHub logged five more as Copilot agent sessions hit resource exhaustion. Unofficial tracking put platform uptime below 90% at certain points — against an Enterprise SLA that promises 99.9%.
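To put those uptime figures in context, standard SLA arithmetic (independent of GitHub’s actual measurements) shows the gap between 99.9% and sub-90% availability:

```python
def allowed_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    """Minutes of downtime a given uptime percentage permits per period."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(f"99.9% SLA: {allowed_downtime_minutes(99.9):.0f} minutes/month allowed")
print(f"90.0% uptime: {allowed_downtime_minutes(90.0) / 60:.0f} hours/month down")
```

A 99.9% SLA permits about 43 minutes of downtime per 30-day month; 90% uptime means roughly 72 hours — three full days.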

The Architectural Mismatch
The deeper issue is structural. AI coding agents interact with GitHub fundamentally differently from human developers. They don’t browse the UI — they hammer the API and CLI continuously, around the clock, across thousands of repositories simultaneously. A human developer on a free GitHub account generates a handful of commits a day. An AI agent on the same account can produce hundreds of commits, dozens of PRs, and thousands of Actions minutes in an afternoon.
GitHub’s pricing model hasn’t caught up. The platform’s generous free-tier API limits and Actions minutes were designed for human developers and occasional CI bots — not fleets of autonomous agents. Agent-specific rate limits and dedicated paid tiers appear inevitable; the question is whether they arrive proactively or only after more outages force the issue.
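The mismatch is easy to quantify. GitHub’s standard REST API limit for an authenticated user is 5,000 requests per hour; the agent and human request rates below are illustrative assumptions, not measured figures:

```python
API_LIMIT_PER_HOUR = 5_000   # GitHub's documented limit for authenticated users

# Illustrative assumptions about typical hourly API usage:
human_requests_per_hour = 50       # occasional pushes, PR reviews, CI checks
agent_requests_per_hour = 2_000    # an agent polling, diffing, and pushing in a loop

print(f"A human uses {human_requests_per_hour / API_LIMIT_PER_HOUR:.0%} of the hourly budget")
print(f"One agent uses {agent_requests_per_hour / API_LIMIT_PER_HOUR:.0%}; only "
      f"{API_LIMIT_PER_HOUR // agent_requests_per_hour} such agents fit on one account")
```

Under these assumptions, a budget that comfortably serves dozens of humans is saturated by a handful of agents — which is exactly why agent-specific tiers look inevitable.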
GitHub has also been simultaneously executing a migration to Azure, with a target of 50% of all traffic on Azure by July 2026. Running a major infrastructure migration alongside an AI-driven traffic explosion leaves, as one analysis put it, an infrastructure team “stretched thin.”

What GitHub Is Doing
Fedorov outlined a multi-pronged response: moving webhooks out of MySQL, redesigning the user session cache, reworking authentication flows to reduce database load, isolating critical services like Git and GitHub Actions from other workloads, and migrating performance-sensitive code from the Ruby monolith into Go. The company is also pursuing a multi-cloud path, describing it as necessary to “achieve the level of resilience, low latency, and flexibility that will be needed in the future.”
The platform has also updated its status page to include availability numbers and committed to logging all incidents — large and small — to give users clearer signals when something is wrong on GitHub’s end.
Fedorov stated the priority order plainly: “availability first, then capacity, then new features.”

The Bigger Picture
GitHub has become the central nervous system of AI-driven software development. Every major coding agent routes its output through the platform. When it goes down, it doesn’t just slow developers — it stops entire AI-powered pipelines cold. The irony, as observers have noted, is that Git was designed from the start to be decentralized; the ecosystem built on top of it has become anything but.
GitHub’s outages are a leading indicator of a broader infrastructure reckoning. As AI agents write more and more code across the industry, the platforms that underpin software development — from version control to CI/CD to package registries — will need to rebuild themselves for a world where the “user” is often not a human at all.