The morning after something like this, a tech company experiences a certain kind of silence. Not the quiet of serenity, but the quiet that precedes awkward questions asked in cramped conference rooms. On March 31, 2026, Anthropic, the AI safety company that has spent years building a reputation as the calm, responsible voice in a careless industry, inadvertently published the entire internal source code of Claude Code to the public npm registry. Not a hack. No outside interference. All it took to expose 512,000 lines of proprietary TypeScript was a single missing line in a build configuration file.
| Company Profile: Anthropic | Details |
|---|---|
| Founded | 2021 |
| Headquarters | San Francisco, California |
| Founders | Dario Amodei, Daniela Amodei, and former OpenAI researchers |
| Valuation | Approximately $61 billion (2024) |
| Primary Product | Claude AI — conversational AI and coding assistant |
| Incident Date | March 31, 2026 |
| What Leaked | 1,900 files, 512,000 lines of Claude Code TypeScript source |
| Leak Cause | Missing build configuration entry; human error |
| Package Affected | @anthropic-ai/claude-code v2.1.88 on npm registry |
| Company Focus | AI safety and responsible development |
| Prior Leak | “Mythos” model specification leaked earlier the same week |
| Paid Subscriber Growth | More than doubled in 2026, per Anthropic spokesperson |
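The "missing build configuration entry" in the profile above is the kind of one-line omission that decides whether source maps ship. A minimal sketch of what such a guard can look like, assuming the package used an npm `files` whitelist; the patterns here are illustrative, not Anthropic's actual configuration:

```json
{
  "name": "@anthropic-ai/claude-code",
  "files": [
    "dist",
    "!dist/**/*.map"
  ]
}
```

With the negation line absent, every `.js.map` file under `dist` is packed and published alongside the bundled JavaScript, and a source map is enough to reconstruct readable TypeScript.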
It’s the kind of error that seems impossible until it happens. At 4:23 AM Eastern Time, Solayer Labs researcher Chaofan Shou discovered the exposed source map and shared it on X. By the time most of Anthropic’s engineers had finished their morning coffee, mirrors were proliferating on GitHub, the file had been widely downloaded, and the post announcing the leak had drawn over 29 million views.
Anthropic removed the package and released a clean version. But the code had already been read, copied, and redistributed. The internet has a long memory, and it is under no obligation to forget.

According to Anthropic’s official statement, the incident was not a security breach but a release packaging problem caused by human error. Technically, that framing is correct. No user data was compromised. No model weights escaped. No credentials leaked. But that narrow definition of harm ignores what actually did leak, because that source map contained more than plumbing. It contained strategy. It contained a product roadmap. It contained internal engineering decisions Anthropic had made deliberately and had never intended to make public.
The feature that attracted the most attention is what the code calls “undercover mode.” When Claude Code is used outside of Anthropic’s internal repositories, it is configured to remove any indication that contributions are AI-generated. Commit messages, pull requests, and code comments may not contain internal model codenames or any signal pointing back to an AI author.
Practically speaking, Anthropic has been contributing to public open-source projects using Claude Code while deliberately concealing the AI origin of those contributions. The open-source community, which prizes transparency, is now debating how to interpret that. Notably, Anthropic had not disclosed this practice before the leak forced the conversation.
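Mechanically, what the article describes amounts to a filter over outgoing commit metadata. A minimal sketch of that idea, with hypothetical trailer patterns; nothing below is taken verbatim from the leaked code:

```typescript
// Hypothetical sketch: remove AI-attribution trailers from a commit message
// before it is pushed to an external repository. The trailer patterns are
// illustrative assumptions, not the actual strings from the leak.
function stripAttribution(message: string): string {
  const banned = [
    /^Co-Authored-By:.*Claude.*$/gim, // attribution trailer
    /^Generated-with:.*$/gim,         // hypothetical tooling trailer
  ];
  let out = message;
  for (const re of banned) out = out.replace(re, "");
  // Collapse the blank lines left behind by removed trailers.
  return out.replace(/\n{3,}/g, "\n\n").trim();
}
```

Given `"Fix parser\n\nCo-Authored-By: Claude <noreply@example.com>"`, the function returns just `"Fix parser"`: to a downstream reviewer, the commit looks entirely human-authored.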
The code also contains a competitive defense mechanism that is equally striking but less surprising. When a particular internal flag is active, an anti-distillation system injects fictitious, non-functional tool definitions into the model’s context. The goal is to poison the training data of anyone who intercepts Claude Code’s API traffic in order to train a competing model. If a rival records those sessions and trains on them, its model learns to call tools that simply don’t exist. It’s clever engineering, and it’s precisely the kind of internal competitive choice that companies build quietly and would rather never discuss in public.
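The mechanism is conceptually simple: under a flag, decoy tool schemas are mixed into the list the model sees. A hedged sketch with invented flag and tool names; none of these identifiers come from the leaked source:

```typescript
interface ToolDef {
  name: string;
  description: string;
}

// Decoy tools that are never actually executable; a model distilled from
// intercepted traffic would learn to call them anyway. Names are invented.
const DECOY_TOOLS: ToolDef[] = [
  { name: "mem_compact", description: "Compact agent memory segments" },
  { name: "shard_sync", description: "Synchronize parallel worker shards" },
];

// When the (hypothetical) anti-distillation flag is active, the session's
// advertised tool list includes the decoys alongside the real tools.
function toolsForSession(real: ToolDef[], antiDistill: boolean): ToolDef[] {
  return antiDistill ? [...real, ...DECOY_TOOLS] : real;
}
```

The real tools still work; only an observer training on the transcript pays the price, because the decoys appear indistinguishable from genuine capabilities.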
The leaked source also includes 44 feature flags for fully built but unreleased capabilities. One of them, KAIROS, is a background daemon that keeps running while the developer is away: it processes memory, watches files, and operates independently. Another, COORDINATOR MODE, lets a single instance of Claude Code spawn and manage parallel worker agents. ULTRAPLAN runs asynchronous long-horizon planning sessions in the background for ten to thirty minutes at a time. And then there is BUDDY, a terminal companion pet available in eighteen species, one of which is, somewhat predictably, a capybara. None of this had been disclosed. Competitors now have a detailed view of the product’s future direction, well in advance of any formal launch.
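A registry of boolean flags gating unreleased features is a standard pattern, and it explains how 44 finished capabilities can sit invisibly in shipped code. A minimal sketch using the four reported names; the defaults, the lookup mechanism, and the underscore spelling of COORDINATOR MODE are all assumptions:

```typescript
// Illustrative flag registry; the flag names mirror those reported from the
// leak, but everything else here is an assumption.
const FLAGS = {
  KAIROS: false,           // background daemon
  COORDINATOR_MODE: false, // parallel worker agents
  ULTRAPLAN: false,        // long-horizon background planning
  BUDDY: false,            // terminal companion pet
};

type FlagName = keyof typeof FLAGS;

// Overrides (e.g. from an internal rollout service) take precedence over
// the compiled-in defaults.
function isEnabled(
  name: FlagName,
  overrides: Partial<Record<FlagName, boolean>> = {}
): boolean {
  return overrides[name] ?? FLAGS[name];
}
```

The catch the leak illustrates: the gate hides the feature from users, but the code behind every flag still ships in the package for anyone who reads the source.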
The timing makes this harder to dismiss. This was Anthropic’s second unintentional disclosure in seven days. Earlier that week, a draft blog post and an internal philosophical document concerning an upcoming model called “Mythos” had become publicly reachable through systems that should not have been accessible. One leak is a bad day. Two in a single week starts to look like a pattern, and it raises legitimate questions about code review, internal release procedures, and what else might sit one configuration error away from unwanted exposure.
Caution has been the cornerstone of Anthropic’s entire public persona. The company published a hundred-page model specification on responsible AI development. Dario Amodei famously refused to back down when the Pentagon raised concerns about autonomous weapons. The brand is deliberateness, the measured counterpoint to an industry that moves fast and breaks things. That posture now has to coexist with the fact that a routine npm update handed rivals a detailed view of the company’s internal architecture.
The larger lesson is not exclusive to Anthropic. AI companies ship increasingly complex production software through standard package registries like npm and PyPI, and in many cases internal release discipline has not kept pace with deployment speed. Source maps, debug artifacts, and configuration files with looser defaults than intended are common failure modes in ordinary software development.
What is different for AI companies is that their code carries far more strategic weight than regular software. A source map from an AI lab does more than show how something works. It shows where the company is heading, what it believes is worth safeguarding, and occasionally what it would rather keep quiet.
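The obvious mitigation is a pre-publish gate that inspects the packed file list before anything reaches the registry. A minimal sketch, assuming the file list comes from something like `npm pack --dry-run --json`; the forbidden patterns are illustrative:

```typescript
// Patterns that should never appear in a published package: source maps,
// environment files, VCS internals. Extend to taste.
const FORBIDDEN = [/\.map$/, /\.env$/, /(^|\/)\.git\//];

// Returns the subset of files that would leak; an empty array means the
// package is safe to publish under these rules.
function findLeaks(files: string[]): string[] {
  return files.filter((f) => FORBIDDEN.some((re) => re.test(f)));
}
```

Wired into CI as a required release step, a single stray `.js.map` in the pack list fails the build instead of shipping 512,000 lines of TypeScript to the world.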
There’s a sense in San Francisco that the industry will remember this as the week even its most meticulous lab showed how narrow the margin for error really is. The code is no longer in the registry. The world still has it. And the questions it raised, about undisclosed product plans, competitive sabotage systems, and undercover contributions, will take far longer to resolve.
