About the Author

Lee Calcote

Lee Calcote is an innovative product and technology leader, passionate about empowering engineers and enabling organizations. As Founder of Layer5, he is at the forefront of the cloud native movement. Open source, advanced, and emerging technologies have been a consistent focus throughout Calcote’s time at SolarWinds, Seagate, Cisco, and Schneider Electric. An advisor, author, and speaker, Calcote is active in the community as a Docker Captain, Cloud Native Ambassador, and GSoC, GSoD, and LFX mentor.

Meshery

Meshery is the world's only collaborative cloud manager.

On March 31, 2026, Anthropic accidentally published the entire source code of Claude Code - its flagship AI coding agent - inside an npm package. No hack. No reverse engineering. A missing .npmignore entry shipped a 59.8 MB source map containing 512,000 lines of unobfuscated TypeScript across roughly 1,900 files. Within hours, the code was mirrored, dissected, rewritten in Python and Rust, and studied by tens of thousands of developers. A clean-room rewrite hit 50,000 GitHub stars in two hours - likely the fastest-growing repository in the platform's history. This is how it happened, what the community found inside, and what it means for the AI coding tool ecosystem.

A Source Map, a Build Config, and 512,000 Lines of TypeScript

The leak was not sophisticated. Claude Code is built on Bun, which Anthropic acquired in late 2025. Bun generates source maps by default. Someone on the release team failed to add *.map to .npmignore or configure the files field in package.json to exclude debugging artifacts. The resulting cli.js.map file - shipped with @anthropic-ai/claude-code version 2.1.88 - contained a sourcesContent JSON array holding every original TypeScript file: readable, commented, and complete.
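
Packaging allowlists make this class of mistake easy to avoid. Below is a minimal sketch of the kind of files allowlist in package.json that would have kept debugging artifacts out of the tarball; the file names and layout here are assumptions for illustration, not Anthropic's actual configuration:

```json
{
  "name": "@anthropic-ai/claude-code",
  "bin": { "claude": "cli.js" },
  "files": [
    "cli.js",
    "!**/*.map"
  ]
}
```

With a files allowlist, npm publishes only what is listed; the negated !**/*.map pattern is belt-and-suspenders in case a broader glob ever pulls source maps back in.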

Security researcher Chaofan Shou spotted the exposure at approximately 4:23 AM ET and posted a download link on X. The tweet accumulated over 21 million views. Extraction was trivial: npm pack @anthropic-ai/claude-code@2.1.88, untar the archive, and read the map. The source map also referenced a ZIP archive hosted on Anthropic's own Cloudflare R2 storage bucket, downloadable by anyone with the URL.
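
The extraction is trivial because the Source Map v3 format embeds the original files verbatim in the sourcesContent array. A minimal sketch of dumping those embedded sources back to disk, as a generic illustration rather than the tooling the researchers actually used:

```typescript
// Dump the original sources embedded in a source map's sourcesContent
// array (Source Map v3) back to disk. Generic illustration only.
import { mkdirSync, writeFileSync } from "node:fs";
import { dirname, join } from "node:path";

interface SourceMapV3 {
  sources: string[];
  sourcesContent?: (string | null)[];
}

function extractSources(map: SourceMapV3, outDir: string): number {
  let written = 0;
  map.sources.forEach((src, i) => {
    const content = map.sourcesContent?.[i];
    if (content == null) return; // this entry has no embedded source
    // Normalize relative paths like "../../src/foo.ts" into the output dir.
    const dest = join(outDir, src.replace(/^(\.\.\/)+/, ""));
    mkdirSync(dirname(dest), { recursive: true });
    writeFileSync(dest, content, "utf8");
    written++;
  });
  return written;
}
```

Pointed at a real cli.js.map, a loop like this walks every entry; for the leaked package that reportedly meant roughly 1,900 files.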

A potentially related Bun bug (oven-sh/bun#28001, filed March 11) reports source maps being served in production mode despite documentation stating they should be disabled. Anthropic's own recently acquired toolchain may have been the root cause.

What the Community Found Inside

The technical discoveries read like a product roadmap Anthropic never intended to publish. The codebase contained 44 feature flags gating over 20 unshipped capabilities, internal model codenames, and architectural decisions that sparked both admiration and controversy.

KAIROS: An Autonomous Daemon Mode

Referenced over 150 times in the source, KAIROS is an unreleased autonomous daemon mode where Claude operates as a persistent, always-on background agent. It receives periodic <tick> prompts to decide whether to act proactively, maintains append-only daily log files, and subscribes to GitHub webhooks.

KAIROS includes autoDream - a background memory consolidation process that runs as a forked subagent while the user is idle. The dream agent merges observations, removes contradictions, converts vague insights into absolute facts, and gets read-only bash access. A companion feature called ULTRAPLAN offloads complex planning to a remote cloud session running Opus 4.6 with up to 30 minutes of dedicated think time.
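
Going purely from how the reporting describes the tick mechanism, the decision layer might look something like the following sketch; every name, signature, and threshold here is invented:

```typescript
// Speculative sketch of a tick-driven daemon decision, modeled on the
// leak reports. Names, signature, and thresholds are invented.
interface TickContext {
  pendingWebhookEvents: number; // unhandled GitHub webhook deliveries
  userIdleMinutes: number;      // time since last user interaction
}

interface TickDecision {
  act: boolean;
  reason: string;
}

function decideTick(ctx: TickContext): TickDecision {
  if (ctx.pendingWebhookEvents > 0) {
    return { act: true, reason: "respond to queued webhook events" };
  }
  if (ctx.userIdleMinutes >= 30) {
    // Idle window: the reported autoDream consolidation would fork here.
    return { act: true, reason: "idle window: run memory consolidation" };
  }
  return { act: false, reason: "no trigger this tick" };
}
```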

Undercover Mode

The most controversial discovery was undercover.ts, roughly 90 lines, which injects a system prompt instructing Claude to never mention that it is an AI and to strip all Co-Authored-By attribution when contributing to external repositories. The mode activates for Anthropic employees and has no force-off switch - if the system is not confident it is operating in an internal repo, it stays undercover.
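
The attribution stripping itself is a small transformation. A hedged re-creation of the idea, not the leaked code, is just a trailer filter on the commit message:

```typescript
// Illustrative re-creation of Co-Authored-By stripping; not the leaked code.
function stripCoAuthoredBy(message: string): string {
  return message
    .split("\n")
    .filter((line) => !/^co-authored-by:/i.test(line.trim()))
    .join("\n")
    .replace(/\n+$/, ""); // tidy trailing blank lines left by removed trailers
}
```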

Defenders argued the mode primarily protects internal codenames. Critics saw systematic deception in open-source contributions. On Hacker News, one highly upvoted comment captured the opposition: if a tool is willing to conceal its own identity in commits, what else is it willing to conceal?

Anti-Distillation Mechanisms

The ANTI_DISTILLATION_CC flag triggers injection of fake tool definitions into API requests, designed to poison the training data of competitors recording API traffic. A second mechanism summarizes assistant reasoning between tool calls with cryptographic signatures, so eavesdroppers capture only summaries rather than full chain-of-thought output.
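
In outline, the fake-tool mechanism is simple. A speculative sketch follows; the decoy tool names and the flag handling are invented for illustration:

```typescript
// Speculative sketch of "fake tool" injection to poison scraped traffic.
// Tool names and flag handling are invented for illustration.
interface ToolDef {
  name: string;
  description: string;
}

const DECOY_TOOLS: ToolDef[] = [
  { name: "quantum_refactor", description: "Rewrites code via quantum annealing." },
  { name: "telepathy_review", description: "Reads the author's intent directly." },
];

function buildToolList(realTools: ToolDef[], antiDistillation: boolean): ToolDef[] {
  // A model trained on recorded traffic would learn to call tools that do
  // not exist; the real client simply never dispatches the decoys.
  return antiDistillation ? [...realTools, ...DECOY_TOOLS] : realTools;
}
```

The weakness commenters pointed out falls straight out of the design: a logging proxy that drops the decoy names before recording leaves only clean training data.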

The Hacker News thread was quick to note both mechanisms are trivially defeated by stripping fields via a proxy or using third-party API providers. One commenter joked that competitors might actually build real versions of the fake tools.

Buddy: A Tamagotchi for Your Terminal

Among the lighter discoveries was BUDDY - a Tamagotchi-style companion system with 18 species, rarity tiers ranging from common (60%) to legendary (1%), shiny variants, and stats including DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK. It was originally planned as an April 1 teaser with a full launch in May.
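
The reported tiers imply a standard weighted roll. A sketch using the two published weights (common 60%, legendary 1%); the intermediate tier names and weights are assumptions:

```typescript
// Hypothetical weighted rarity roll. Common (60%) and legendary (1%) match
// the reported tiers; the middle tiers are invented to fill the table.
const RARITY_WEIGHTS: [string, number][] = [
  ["common", 0.60],
  ["uncommon", 0.25],
  ["rare", 0.10],
  ["epic", 0.04],
  ["legendary", 0.01],
];

// roll is a uniform number in [0, 1); passing it in keeps the function testable.
function rollRarity(roll: number): string {
  let cumulative = 0;
  for (const [tier, weight] of RARITY_WEIGHTS) {
    cumulative += weight;
    if (roll < cumulative) return tier;
  }
  return "common"; // floating-point guard
}
```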

Internal Model Codenames and Benchmarks

The source exposed internal codenames: Capybara maps to Claude 4.6, Fennec to Opus 4.6, and Numbat to an unreleased model. Internal benchmarks revealed that Capybara v8 has a 29-30% false-claims rate - a regression from 16.7% in v4. A bug-fix comment revealed 250,000 wasted API calls per day from autocompact failures. The codebase also included a frustration-detection regex matching swear words - a detail widely mocked as regex-based sentiment analysis from one of the world's most highly valued AI companies.

The Architecture Itself

Beyond the feature flags, the architecture earned genuine respect from the developer community. Claude Code uses a modular system prompt with cache-aware boundaries, approximately 40 tools in a plugin architecture, a 46,000-line query engine, and React + Ink terminal rendering using game-engine techniques. Multi-agent orchestration fits in a prompt rather than a framework, which one commenter noted makes LangChain and LangGraph look like solutions in search of a problem.
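
A plugin-style tool architecture of the kind described reduces, at its core, to a registry plus a dispatch lookup. A minimal sketch, where the interface shape is an assumption rather than Claude Code's actual one:

```typescript
// Minimal sketch of a plugin-style tool registry; interface is assumed.
interface Tool {
  name: string;
  description: string;
  run(input: Record<string, unknown>): Promise<string> | string;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    if (this.tools.has(tool.name)) throw new Error(`duplicate tool: ${tool.name}`);
    this.tools.set(tool.name, tool);
  }

  // The model selects a tool by name; dispatch stays a simple lookup.
  async dispatch(name: string, input: Record<string, unknown>): Promise<string> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return await tool.run(input);
  }

  list(): string[] {
    return [...this.tools.keys()];
  }
}
```

Keeping each tool behind one narrow interface is what lets a single prompt, rather than a framework, coordinate them.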

The Claw-Code Phenomenon

Korean developer Sigrid Jin - previously profiled by the Wall Street Journal for single-handedly consuming 25 billion Claude Code tokens in a year - woke at 4 AM to the news. Concerned about legal exposure from hosting proprietary code directly, Jin took a different approach: a clean-room Python rewrite using oh-my-codex (OmX), an AI workflow tool built on OpenAI's Codex. The resulting repository, instructkr/claw-code, captures architectural patterns without copying proprietary source.

The numbers are staggering. Claw-code hit 50,000 stars in approximately two hours after publication, reaching over 55,800 stars and 58,200 forks by April 1. The repository's own description calls it the fastest repo in history to surpass 50K stars. It is now being rewritten in Rust on a separate branch.

The clean-room approach created a novel legal puzzle. Gergely Orosz (The Pragmatic Engineer) observed that Anthropic faces a dilemma: a Python rewrite constitutes a new creative work, potentially outside DMCA reach. If Anthropic claims the AI-generated transformative rewrite infringes its copyright, it could undermine its own defense in training-data copyright cases, which rests on the same argument - that AI-generated outputs derived from copyrighted inputs constitute fair use.

Security Implications Beyond the Source

The security story extends well beyond intellectual property. The leak coincided with a completely separate supply-chain attack on the axios npm package - malicious versions containing a Remote Access Trojan were published between 00:21 and 03:29 UTC on March 31, creating a window where anyone installing Claude Code via npm could have been compromised by unrelated malware.

More broadly, readable source code collapses the cost of finding vulnerabilities in Claude Code's permission system, bash validation (2,500 lines of security checks), and four-stage context management pipeline. Multiple known CVEs were already documented before the leak, including code injection via untrusted directories (CVSS 8.7) and API key exfiltration from malicious repos (CVSS 5.3). With full source access, researchers and attackers alike can now audit these systems at a fundamentally different level.
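
To see why source access matters for the bash validation layer in particular, consider that even a toy command gate has sharp edges an attacker will probe. The following is an illustrative gate, orders of magnitude simpler than the 2,500-line validator described, with an invented allowlist and rules:

```typescript
// Illustrative bash command gate; allowlist and rules are invented and far
// simpler than the 2,500-line validator described in the article.
const SAFE_COMMANDS = new Set(["ls", "cat", "grep", "git", "npm"]);

function isCommandAllowed(command: string): boolean {
  // Reject shell metacharacters that could chain or substitute commands.
  if (/[;&|`$<>]/.test(command)) return false;
  const binary = command.trim().split(/\s+/)[0];
  return SAFE_COMMANDS.has(binary);
}
```

With the real validator now readable, researchers can enumerate exactly which constructions slip past each rule instead of fuzzing blind.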

Two Leaks in Five Days

The timing amplified the reputational damage. This was Anthropic's second major exposure in five days, following a CMS misconfiguration on March 26 that leaked draft blog posts about the unreleased Mythos model. Fortune called it Anthropic's "second major security breach." Axios noted the company marketing itself as the safety-first AI lab had experienced back-to-back operational security failures.

The irony was not lost on anyone: Anthropic built Undercover Mode specifically to prevent internal information from leaking into external contexts, then leaked everything through a .npmignore oversight. As one viral comment put it: nothing says "agentic future" like shipping the source by accident.

What This Means for the AI Coding Tool Ecosystem

The strategic damage likely exceeds the code damage. As one Hacker News commenter observed, the feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, model codenames - those are product strategy decisions competitors can now plan around. You can refactor code in a week. You cannot un-leak a roadmap.

For Anthropic specifically, the reputational damage compounds at a sensitive moment. The company reportedly generates $2.5 billion in annualized revenue, 80% of it from enterprise customers, and is preparing for an IPO. Enterprise customers partly pay for the belief that their vendor's technology is proprietary and protected. Two leaks in one week undermine the safety-first brand that is Anthropic's core differentiator.

For the broader ecosystem, the leak accelerates a shift already underway. When orchestration architecture is no longer secret, differentiation moves entirely to model capabilities and user experience. The exposed permission system, sandboxing approach, and multi-agent coordination patterns may become de facto standards - they are now the only fully documented production-grade implementation in the industry. Open-source projects like claw-code, Kuberwastaken's Rust port, and OpenCode can now build on battle-tested architectural patterns rather than guessing.

Multiple community voices argued the CLI should have been open source from the start. Google's Gemini CLI and OpenAI's Codex are already open. The models are the moat, not the shell around them.

The Takeaway

The Claude Code leak is unprecedented in scale for AI coding tools: 512,000 lines of production source from a tool generating billions in revenue, exposed by a missing line in a config file. The technical revelations are fascinating - autonomous dreaming agents, DRM-like client attestation, a multi-agent framework that fits in a prompt. But the lasting impact is strategic, not technical. Anthropic's product roadmap, internal benchmarks, anti-competitive countermeasures, and model codenames are now public knowledge. The clean-room rewrite strategy pioneered by claw-code creates a legal template that could reshape how code IP functions in the AI era. And the irony of a safety-focused lab leaking its own secrets - twice in one week, through the very toolchain it acquired - may prove harder to live down than any architectural exposure.

The code can be refactored. The trust deficit cannot.

Building AI agent workflows into your infrastructure management? The Layer5 community is an active group of platform engineers, open source contributors, and DevOps practitioners working at the intersection of AI and cloud-native infrastructure. Join us on Slack to discuss what the Claude Code leak means for your tooling choices, or follow the Layer5 blog for more engineering analysis.

Layer5, the cloud native management company

Layer5 is the steward of Meshery and creator of Kanvas, the collaborative canvas for cloud-native infrastructure. We bridge the gap between design and operation, allowing engineers to create, configure, and deploy orchestratable diagrams in real time. Whether managing Kubernetes or multi-cloud environments, Layer5 provides the tooling needed to oversee modern infrastructure with confidence.