
Anthropic appears to have accidentally revealed to the public the inner workings of one of its most popular and lucrative AI products, the Claude Code AI harness.
A 59.8 MB JavaScript source map file (.map), intended for internal debugging, was inadvertently included in version 2.1.88 of the @anthropic-ai/claude-code package published to the public npm registry this morning.
At 4:23 a.m. Eastern Time, Chaofan Shou (@Fried_rice), a Solayer Labs intern, broadcast the discovery on X (formerly Twitter). The post, which included a direct download link to a hosted copy of the file, acted as a digital flare. Within hours, the roughly 512,000-line TypeScript codebase had been mirrored on GitHub and analyzed by thousands of developers.
For Anthropic, a company currently experiencing a meteoric rise, with a reported annualized revenue run rate of $19 billion as of March 2026, the leak is more than a security breach; it is a strategic hemorrhage of intellectual property. The timing is particularly critical given the product’s speed to market.
Market data indicates that Claude Code alone has reached annualized recurring revenue (ARR) of $2.5 billion, a figure that has more than doubled since the beginning of the year.
With enterprise adoption accounting for 80% of its revenue, the leak provides competitors (from established giants to agile rivals like Cursor) with a literal blueprint for how to build a high-agency, reliable, and commercially viable AI agent.
We’ve reached out to Anthropic for an official statement on the leak and will update when we hear back.
The anatomy of agent memory
The most important takeaway for competitors lies in how Anthropic tackled "context entropy," the tendency of AI agents to become confused or hallucinate as long sessions grow in complexity.
The leaked source reveals a sophisticated three-tier memory architecture that moves away from traditional "store-everything" retrieval.
As analyzed by developers such as @himanshustwts, the architecture uses a "self-healing memory" system.
At its core is MEMORY.md, a lightweight index of pointers (~150 characters per line) that is perpetually loaded into the context. This index does not store data; it stores locations.
The project’s actual knowledge is distributed across "topic files" fetched on demand, while raw transcripts are never read into context in full; they are simply grep’d for specific identifiers.
This "strict write discipline" (the agent updates its index only after a successful file write) prevents the model from polluting its context with failed attempts.
For competitors, the blueprint is clear: build a skeptical memory. The code confirms that Anthropic’s agents are instructed to treat their own memory as a "hint," requiring the model to fact-check against the actual codebase before continuing.
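The pattern described above can be sketched in a few lines of TypeScript. This is my own minimal illustration of a pointer-index memory with strict write discipline, not Anthropic's actual code; every name here (MemoryIndex, write, resolve) is hypothetical, and an in-memory Map stands in for on-disk topic files.

```typescript
// Sketch of a pointer-index memory: the index stores *locations*, not
// content, and is updated only after a write verifiably succeeds.
// All names are illustrative, not Anthropic's actual API.

type Pointer = { topic: string; file: string; hint: string }; // ~150-char hint line

class MemoryIndex {
  private pointers: Pointer[] = [];
  // Stand-in for on-disk topic files that would be fetched on demand.
  private topicFiles = new Map<string, string>();

  // "Strict write discipline": record a pointer only if the write succeeded.
  write(topic: string, file: string, content: string, hint: string): boolean {
    const ok = content.length > 0; // stand-in for a real fs write + verification
    if (!ok) return false;        // failed attempts never pollute the index
    this.topicFiles.set(file, content);
    this.pointers.push({ topic, file, hint });
    return true;
  }

  // Treat the index as a *hint*: resolve the pointer, then re-read the
  // authoritative source instead of trusting any cached summary.
  resolve(topic: string): string | undefined {
    const ptr = this.pointers.find((p) => p.topic === topic);
    return ptr ? this.topicFiles.get(ptr.file) : undefined;
  }
}

const mem = new MemoryIndex();
mem.write("auth", "topics/auth.md", "JWT validated in middleware", "auth -> topics/auth.md");
console.log(mem.resolve("auth")); // content fetched on demand, never held in context
```

The key design choice the leak highlights is that the perpetually loaded part stays tiny (pointers only), so context cost stays flat no matter how much the project grows.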
KAIROS and the autonomous daemon
The leak also pulls back the curtain on "KAIROS," named for the ancient Greek concept of the opportune moment, a feature flag mentioned more than 150 times in the source. KAIROS represents a fundamental change in the user experience: a self-contained daemon mode.
While current AI tools are largely reactive, KAIROS allows Claude Code to operate as an always-on background agent. It manages sessions in the background and employs a process called autoDream.
In this mode, the agent performs "memory consolidation" while the user is idle. The autoDream logic fuses disparate observations, eliminates logical contradictions, and distills vague ideas into concrete facts.
This background maintenance ensures that when the user returns, the agent context is clean and highly relevant.
The use of a forked subagent to execute these tasks reveals a mature engineering approach: it prevents the main agent’s train of thought from being corrupted by its own maintenance routines.
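The consolidation step described above can be illustrated with a toy pass over observations. This is a speculative sketch of the pattern (fuse duplicates, drop contradictions), not the leaked autoDream code; the Observation shape and consolidate function are my own inventions.

```typescript
// Illustrative autoDream-style consolidation pass (my own guess at the
// pattern, not Anthropic's implementation): collapse duplicate
// observations and discard pairs that directly contradict each other.

interface Observation {
  subject: string;  // e.g. "tests"
  claim: string;    // e.g. "pass on main"
  negated: boolean; // the same claim with negated flipped is a contradiction
}

function consolidate(observations: Observation[]): Observation[] {
  const seen = new Map<string, Observation>();
  for (const obs of observations) {
    const key = `${obs.subject}::${obs.claim}`;
    const prior = seen.get(key);
    if (prior && prior.negated !== obs.negated) {
      seen.delete(key); // direct contradiction: drop both rather than guess
    } else {
      seen.set(key, obs); // duplicates collapse to a single entry
    }
  }
  return [...seen.values()];
}

const merged = consolidate([
  { subject: "build", claim: "uses esbuild", negated: false },
  { subject: "build", claim: "uses esbuild", negated: false }, // duplicate
  { subject: "tests", claim: "pass on main", negated: false },
  { subject: "tests", claim: "pass on main", negated: true },  // contradiction
]);
console.log(merged.length); // 1: duplicates fused, contradiction removed
```

Running this in a forked worker while the user is idle, as the leak suggests, keeps the main session's context untouched until the cleaned result is swapped in.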
Unreleased internal models and performance metrics
The source code provides a rare look at Anthropic’s internal model roadmap and the struggles of frontier development.
The leak confirms that Capybara is the internal codename for a Claude 4.6 variant, with Fennec mapped to Opus 4.6 and the unreleased Numbat still in testing.
Internal feedback reveals that Anthropic is already iterating on Capybara v8, but the model still faces significant hurdles. The code notes a false-claims rate of 29-30% in version 8, a marked regression from the 16.7% rate seen in version 4.
The developers also noted an "assertiveness counterweight" designed to prevent the model from becoming too aggressive in its refactorings.
For competitors, these metrics are invaluable; they provide a reference point for the current ceiling of agentic performance and highlight specific weaknesses (excessive comments, false claims) that Anthropic is still struggling to resolve.
"Undercover" Claude
Perhaps the most discussed technical detail is "covert mode." This feature reveals that Anthropic uses Claude Code to make stealth contributions to public open-source repositories.
The system message discovered in the leak explicitly warns the model: "You are operating UNDERCOVER… Your confirmation messages… SHOULD NOT contain ANY internal Anthropic information. Don’t blow your cover."
While Anthropic may use this for internal dogfooding, it provides a technical framework for any organization that wants to use AI agents for public-facing work without disclosure.
The logic ensures that no model name (such as "Tengu" or "Capybara") or AI attribution leaks into public git logs, a capability that enterprise competitors will likely see as a mandatory feature for corporate customers who value anonymity in AI-assisted development.
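A scrubbing check in this spirit is simple to sketch. The function and marker list below are my own illustration of the idea, not the leaked logic; the codename patterns are drawn from names mentioned in this article.

```typescript
// Illustrative "covert mode" check (my own sketch, not Anthropic's code):
// flag commit messages that would leak internal codenames or AI
// attribution into public git logs.

const INTERNAL_MARKERS: RegExp[] = [
  /tengu/i,                      // internal codename
  /capybara/i,                   // internal codename
  /co-authored-by:\s*claude/i,   // AI attribution trailer
  /generated with claude/i,      // AI attribution footer
];

function isCoverBlown(commitMessage: string): boolean {
  return INTERNAL_MARKERS.some((re) => re.test(commitMessage));
}

console.log(isCoverBlown("Fix flaky test in scheduler"));               // false
console.log(isCoverBlown("Refactor parser\n\nCo-Authored-By: Claude")); // true
```

In practice such a check would run as a pre-commit gate, rejecting or rewriting any message that matches before it reaches a public remote.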
The consequences are just beginning
The blueprint is now public, and it reveals that Claude Code is not just a container for a large language model but a complex, multi-threaded operating system for software engineering.
Even the hidden "Buddy" system, a Tamagotchi-style terminal pet with stats like CHAOS and SNARK, shows that Anthropic is building personality into the product to increase user stickiness.
For the broader AI market, the leak effectively levels the playing field for agent orchestration.
Competitors can now study Anthropic’s 2,500+ lines of bash validation logic and its tiered memory structures to build "Claude-like" agents with a fraction of the R&D budget.
Now that "Capybara" has left the lab, the race to build the next generation of autonomous agents just got an unplanned $2.5 billion boost in collective intelligence.
What Claude Code users and enterprise customers should do now about the alleged breach
While the source code leak is itself a serious blow to Anthropic’s intellectual property, it poses a specific and elevated security risk to you as a user.
By exposing Claude Code’s blueprints, the leak hands a roadmap to researchers and bad actors who are now actively looking for ways to bypass its guardrails and permission prompts.
Because the leak revealed the exact orchestration logic for Hooks and MCP servers, attackers can now craft malicious repositories specifically designed to trick Claude Code into running commands in the background or exfiltrating data before you ever see a trust prompt.
The most immediate danger, however, is a separate, concurrent supply-chain attack on the axios npm package, which occurred hours before the leak.
If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) containing a remote access trojan (RAT). You should immediately search your project’s lock files (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or for the dependency plain-crypto-js. If you find either, treat the host machine as fully compromised, rotate all secrets, and perform a clean reinstall of the operating system.
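A lock-file check for these indicators can be scripted. The sketch below assumes npm's package-lock v2/v3 "packages" layout and uses inline demo data; for a real scan, parse your own package-lock.json instead (yarn.lock and bun.lockb need different parsers).

```typescript
// Sketch of a lock-file scan for the compromised axios versions and the
// planted dependency reported above. Demo data is inline; in practice,
// load the real file: JSON.parse(fs.readFileSync("package-lock.json", "utf8")).

const COMPROMISED_AXIOS = new Set(["1.14.1", "0.30.4"]);
const PLANTED_DEP = "plain-crypto-js";

interface LockFile {
  packages?: Record<string, { version?: string }>;
}

function findIndicators(lock: LockFile): string[] {
  const hits: string[] = [];
  for (const [path, info] of Object.entries(lock.packages ?? {})) {
    // Keys look like "node_modules/axios" (possibly nested); take the last segment.
    const name = path.split("node_modules/").pop() ?? path;
    if (name === "axios" && info.version && COMPROMISED_AXIOS.has(info.version)) {
      hits.push(`axios@${info.version}`);
    }
    if (name === PLANTED_DEP) {
      hits.push(`${PLANTED_DEP}@${info.version ?? "?"}`);
    }
  }
  return hits;
}

const demoLock: LockFile = {
  packages: {
    "node_modules/axios": { version: "1.14.1" },
    "node_modules/left-pad": { version: "1.3.0" },
  },
};
console.log(findIndicators(demoLock)); // any hit means: treat the host as compromised
```

Any non-empty result should trigger the full response above: secrets rotated, machine wiped.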
To mitigate future risk, you should migrate away from the npm-based installation entirely. Anthropic has designated the native installer (curl -fsSL https://claude.ai/install.sh | bash) as the recommended method because it uses a standalone binary that does not depend on npm’s volatile dependency chain.
The native version also supports automatic background updates, ensuring that you receive security patches (probably version 2.1.89 or higher) as soon as they are released. If you must stay on npm, make sure you have uninstalled the leaked version 2.1.88 and pinned your installation to a verified safe version such as 2.1.86.
Finally, adopt a zero-trust stance when using Claude Code in unfamiliar environments. Avoid running the agent inside newly cloned or untrusted repositories until you have manually inspected the repository’s .claude/config.json and any custom hooks.
As a defense-in-depth measure, rotate your Anthropic API keys through the developer console and monitor their usage for anomalies. While your cloud-stored data remains secure, your local environment’s exposure has increased now that the agent’s internal defenses are public knowledge; staying on the official, natively installed update path is your best defense.





