A routine typo fix turned into a corporate trust crisis for GitHub this week, after a developer discovered that GitHub Copilot had quietly edited promotional content into his pull request — content he never wrote and never approved.
Melbourne-based software developer Zach Manson shared the incident on March 30, after a colleague invoked Copilot to correct a typo in one of his pull requests. The fix went through. But Copilot didn’t stop there. It also rewrote the PR’s description to include a promotional plug for Raycast, a productivity app with a Copilot integration: “⚡ Quickly spin up Copilot coding agent tasks from anywhere on your macOS or Windows machine with Raycast.”

Manson’s reaction was blunt. “This is horrific,” he wrote. “I knew this kind of bullshit would happen eventually, but I didn’t expect it so soon.”
The Scale of the Problem
What initially looked like an isolated incident quickly proved to be anything but. A search on GitHub for the exact promotional phrase returned over 11,000 matching pull requests across thousands of repositories. The injection wasn’t limited to GitHub either — identical text surfaced in merge requests on GitLab, suggesting the promotional content was being inserted at the Copilot model or API layer rather than through any platform-specific logic.
Broader analysis found over 1.5 million pull requests on GitHub containing some form of Copilot “tips,” promoting integrations with tools including Slack, Teams, VS Code, JetBrains, and Eclipse — at a rate of more than 1,000 injections per day over the preceding ten days.
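Counts like these come from GitHub's issue-and-PR search, which supports quoted exact phrases and a type:pr qualifier. As a minimal sketch of how such a query could be assembled against the public REST search endpoint (the phrase here is illustrative; the endpoint and qualifier syntax are standard GitHub API features):

```python
from urllib.parse import urlencode

def build_search_url(phrase: str) -> str:
    """Build a GitHub search-API URL matching pull requests
    whose text contains the exact phrase."""
    # Quoting the phrase forces an exact match; type:pr restricts
    # the /search/issues endpoint to pull requests only.
    query = f'"{phrase}" type:pr'
    return "https://api.github.com/search/issues?" + urlencode({"q": query})
```

Fetching that URL (with an authenticated client, given rate limits) returns a JSON payload whose total_count field is the kind of number cited above.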
Inspection of the raw markdown in affected PRs revealed a hidden HTML comment — <!-- START COPILOT CODING AGENT TIPS --> — placed directly before the promotional text. A structured marker like that points to a deliberate templating mechanism, not a model hallucination or random output.
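That marker also gives maintainers a way to audit their own repositories. A minimal sketch of a cleanup helper, assuming the marker text reported above and — since no matching END marker was described — treating everything after it as injected content:

```python
# Marker text as reported in affected PRs; the surrounding
# <!-- --> is ordinary HTML comment syntax, invisible in rendered markdown.
TIP_MARKER = "<!-- START COPILOT CODING AGENT TIPS -->"

def strip_injected_tip(body: str) -> tuple[str, bool]:
    """Return (cleaned_body, marker_found) for a raw PR description.

    Assumption: the injected tip runs from the marker to the end of
    the body, so everything from the marker onward is dropped.
    """
    idx = body.find(TIP_MARKER)
    if idx == -1:
        return body, False
    return body[:idx].rstrip(), True
```

Running this across a repository's PR descriptions would flag which ones were touched and restore the human-authored text.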
The Human vs. AI Boundary
What made the incident particularly egregious was where the promotional content appeared. The text was inserted into the root comment of a pull request — the section authored by the human developer, not Copilot.
When Copilot creates its own pull requests, appending tips to its own output is arguably defensible. But when it edits a human-created PR description — presenting its marketing copy as though the developer wrote it — the situation changes entirely. Manson noted he wasn’t even aware Copilot had the ability to edit other users’ descriptions and comments. “I can’t think of a valid use case for that ability,” he told The Register.
This crossed a line that developers found genuinely alarming. Pull request descriptions are documentation. They form part of a project’s permanent record, serve as reference for code reviewers, and feed into changelogs and compliance audits. Injecting promotional material into that layer — silently, without consent — degrades the integrity of the entire workflow.
GitHub Responds
The backlash moved fast. Tim Rogers, Principal Product Manager for Copilot at GitHub, acknowledged the situation on Hacker News the same day, saying that “tips” had been introduced to help developers discover new ways to use the agent. Following community feedback, he said, the feature had been disabled.
Martin Woodward, VP of Developer Relations at GitHub, then took to X with a fuller account. The tips, he explained, had originally been designed to appear only in PRs created by Copilot — where they had existed since last year with minimal objection. The problem began on March 24, when a rollout expanded Copilot’s ability to contribute to any pull request when mentioned by a developer. A bug caused the tips to follow Copilot into those human-authored PRs as well.
“Where it strayed into ‘ad’ territory,” Woodward wrote, “is when the PR was created by a human, and not Copilot. Therefore when Copilot was invoked by a human further down the PR and it went and updated the root comment adding a tip, it crossed the uncanny valley all the way into unhelpful and not cool.”
GitHub has since disabled all product tips in pull requests — both those created by Copilot and those created by humans. No commercial arrangements with Raycast or any other third party were involved, Woodward confirmed. GitHub also stated explicitly that it “does not and does not plan to include advertisements on the platform.”
A Trust Problem in the Making
GitHub’s explanation is credible, and the speed of the response counts for something. But the episode exposes a deeper tension as AI coding agents grow more autonomous.
When an AI tool can silently rewrite a developer’s own words, the implicit contract of what the tool does — and doesn’t do — is broken. Developers invoking Copilot to fix a typo are not consenting to having their PR descriptions rewritten. A diff that shows every change is a foundational assumption of collaborative code review; hidden HTML comments and injected text undermine it.
There’s also the question of scope creep. This incident involved promotional copy for ecosystem partners. But developers and security researchers were quick to note the obvious implication: if an AI agent can silently insert any content into a developer’s own documentation, the surface area for misuse — whether through bugs, policy drift, or deliberate monetization — is wide.
AI tools are writing more code than ever, and with that expanded role comes expanded responsibility. As AI agents grow faster and more capable, the guardrails around what they’re permitted to touch — and what is explicitly off-limits — need to keep pace.
GitHub has moved quickly to contain the damage. But the broader question the incident raises won’t go away: as AI agents gain more agency, who exactly is responsible for what they write — and where?