Anthropic accidentally exposes half a million lines of Claude Code

Nearly 1,900 files temporarily accessible online, revealing key insights into the AI coding assistant.

In a startling turn for the AI community, Anthropic has accidentally exposed a large portion of the internal source code of Claude Code, its widely used AI-powered coding assistant, during a recent software release.

Nearly 1,900 files and 500,000 lines of code became temporarily accessible online, offering an unprecedented glimpse into the inner workings of the tool that has become a staple for developers worldwide.

The company confirmed the incident occurred due to human error in release packaging, not a cyberattack. A spokesperson emphasized that no sensitive customer data or credentials were exposed, underscoring that this was a release mishap rather than a security breach.

The leak surfaced when version 2.1.88 of Claude Code was briefly published with a large .map file, a source map of the kind JavaScript build tools emit for debugging, which contained embedded copies of the original source. Anyone who downloaded the package could use it to reconstruct significant parts of the codebase.
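Source maps can carry a `sourcesContent` array holding the original source files verbatim alongside the `sources` array of file names, which is how a published `.map` file can be turned back into readable code. A minimal sketch of that reconstruction in Python (the function name and paths are illustrative, not from Anthropic's release):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Write out any original source files embedded in a JS source map.

    A version-3 source map pairs each entry in `sources` (file names)
    with an entry in `sourcesContent` (the file's original text), so the
    original tree can be rebuilt without access to the repository.
    """
    source_map = json.loads(Path(map_path).read_text())
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    written = []
    for name, content in zip(sources, contents):
        if content is None:
            continue  # this entry's content was stripped from the map
        # Flatten scheme-style prefixes like "webpack://pkg/src/file.ts"
        safe = name.replace("://", "/").lstrip("/")
        target = Path(out_dir) / safe
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
        written.append(str(target))
    return written
```

Publishing a map without stripping `sourcesContent` is a common build-pipeline oversight; bundlers offer options to emit mappings only, without the embedded file bodies.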

The exposed files quickly gained attention across platforms like GitHub and X, where a post linking to the code amassed millions of views within hours. Developer Chaofan Shou was among the first to spot the leak, triggering a rapid chain of dissemination and analysis.

In response, Anthropic has begun issuing DMCA takedown notices to curb further distribution of the source files. While the leak does not include the core AI model architecture, it offers insight into Claude Code's features, system design, and unreleased functionality, including a "Proactive Mode" for continuous coding and a "Dream Mode" for background idea processing. Neither had been officially announced before the incident.

This mishap comes at a critical time for Anthropic, following its split from the Pentagon earlier this year, which had already put the startup in the spotlight.

Claude Code has surged in popularity since its release in May, helping developers build features, fix bugs, and automate coding tasks, with run-rate revenue reportedly exceeding $2.5 billion as of February.

The company also acknowledged a bug affecting usage limits in Claude Code, which remains under active investigation. Users reported reaching their quotas far sooner than expected, further adding to operational challenges.

Founded in 2021 by former OpenAI executives, Anthropic has quickly become a key player in the AI space. The accidental leak of Claude Code highlights the challenges of balancing rapid innovation with rigorous operational security, especially as rivals like OpenAI, Google, and xAI intensify competition in AI-assisted software development.

An Anthropic spokesperson said: “We are rolling out measures to prevent this from happening again. While no customer data was compromised, we take full responsibility for this release error and remain committed to the security and integrity of our products.”