Anthropic, the AI research company behind the Claude language model, has published a detailed security briefing describing what it calls “industrial-scale distillation attacks” targeting its flagship AI systems, raising concerns about intellectual property theft, national security, and the protection of frontier AI models.
According to Anthropic, three competing labs, identified in subsequent press reports as DeepSeek, Moonshot AI, and MiniMax, orchestrated large-scale campaigns to siphon capabilities out of Claude by repeatedly querying it through fraudulent accounts and proxy networks. The goal: extract high-value model knowledge through a practice called distillation, then use that extracted knowledge to train competing AI systems.
## What Anthropic Is Calling a “Distillation Attack”
In traditional machine learning practice, distillation refers to a method where a smaller “student” model is trained to mimic a larger “teacher” model — often to reduce computational cost while preserving performance.
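In that textbook setting, the student learns to match the teacher's full output distributions rather than just its final answers. As a rough, self-contained illustration of the standard technique (a minimal PyTorch sketch, not a description of any lab's actual pipeline; the hyperparameters and toy data below are placeholders):

```python
# Minimal sketch of classic knowledge distillation; names and
# hyperparameters are illustrative placeholders, not any lab's settings.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend teacher imitation (softened KL term) with ordinary cross-entropy."""
    # Temperature T softens both distributions so the student learns the
    # teacher's full output distribution, not just its top-1 answer.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # conventional temperature-squared scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a batch of 4 examples over a 10-class output space.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)  # would come from the frozen teacher
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```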
Anthropic argues, however, that when this same technique is performed illegitimately and at scale against a proprietary system, it becomes a form of capability extraction, potentially allowing rivals to shortcut years of research and billions of dollars in compute by ingesting Claude’s outputs en masse without permission.
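At the text level, no internal logits are needed at all: collected prompt and response pairs can serve directly as supervised fine-tuning targets, which is why black-box API access alone suffices. A schematic sketch of this output-level variant, with hypothetical names throughout:

```python
# Schematic of output-level distillation: a student is fine-tuned on
# (prompt, response) pairs sampled from a teacher's text outputs alone.
# All names and data here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    teacher_response: str  # text the black-box teacher returned

def to_sft_record(ex: Example) -> dict:
    # Standard supervised fine-tuning layout: the teacher's answer
    # becomes the student's training target for that prompt.
    return {"input": ex.prompt, "target": ex.teacher_response}

collected = [
    Example("Explain binary search.", "Binary search halves the range..."),
    Example("Write a sort in Python.", "def sort(xs): return sorted(xs)"),
]
sft_corpus = [to_sft_record(ex) for ex in collected]
# At extraction scale this corpus would hold millions of such pairs,
# fed to the student through an ordinary fine-tuning loop.
```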
The company says it identified a coordinated effort involving more than 24,000 fraudulent accounts and over 16 million interactions with Claude. These sessions allegedly targeted Claude’s differentiated reasoning, coding, and tool-use abilities, which took years of research and extensive safety work to build.
Anthropic has described these actions as a violation of its terms of service and of regional access controls (Claude is not commercially available in China), and says the activity was masked by proxy networks to evade straightforward detection.
## Broader Implications for AI Security and Competitiveness
Anthropic frames the incident as more than a simple commercial dispute. In a statement accompanying its blog post, the firm warned that illicitly distilled models lack safety guardrails and could be repurposed in dangerous ways if open-sourced — from bioweapon design to cyberattacks and mass surveillance technologies.
The company is urging coordinated responses from AI companies, cloud providers, and policymakers to build defensive systems that can detect and stop these campaigns before they scale further. Part of that strategy is sharing technical indicators of abuse and building behavioral fingerprinting systems that distinguish legitimate use from extraction attempts.
Industry observers say the dispute highlights a new front in the AI arms race: protecting intellectual property and safety-critical capabilities as models become commercial and strategic assets. Safeguarding these capabilities may require not just technical defenses, but also legal frameworks and export controls on advanced AI infrastructure — something Anthropic has explicitly called for in recent weeks.
## Accusations Against Chinese AI Labs
While Anthropic’s security blog does not name specific companies, multiple news outlets report that the firms behind the distillation campaigns are the Chinese AI labs DeepSeek, Moonshot AI, and MiniMax. These firms allegedly used proxy networks and tens of thousands of fake accounts to access Claude at scale, potentially to train or bootstrap their own models on Claude’s outputs.
The accusations have sparked reactions across the AI press, with Reuters, Business Standard, Business Insider, and other outlets situating the controversy against the backdrop of US-China technology competition and global AI governance debates.
## How Anthropic Is Responding
To counter what it describes as distillation abuse, Anthropic says it is implementing a suite of technical defenses — including detection systems, behavioral fingerprinting, and tightened verification for educational and research access — designed to make extraction attacks harder to launch and easier to detect.
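Anthropic has not published how its fingerprinting works. Purely as an illustration of the general idea, a toy heuristic might score accounts on volume, prompt diversity, capability coverage, and proxy rotation; every feature and threshold below is invented:

```python
# Toy behavioral-fingerprinting heuristic. Anthropic has not disclosed its
# actual detection methods; every feature and threshold here is invented.
from dataclasses import dataclass

@dataclass
class AccountStats:
    queries_per_day: float
    distinct_prompt_templates: int  # near-duplicate prompts collapsed together
    capability_coverage: float      # fraction of skill areas systematically probed
    source_networks: int            # proxy rotation appears as many origin networks

def extraction_risk(a: AccountStats) -> float:
    """Crude additive risk score; a real system would use learned models."""
    score = 0.0
    if a.queries_per_day > 5_000:
        score += 0.3  # industrial-scale volume
    if a.distinct_prompt_templates < 20:
        score += 0.2  # templated, sweep-style querying
    if a.capability_coverage > 0.8:
        score += 0.3  # systematic probing across capability areas
    if a.source_networks > 50:
        score += 0.2  # heavy proxy rotation
    return score

print(extraction_risk(AccountStats(20_000, 5, 0.9, 120)))  # -> 1.0
```

A production system would presumably combine far more signals with learned models, but the intuition is the same: extraction traffic looks statistically unlike ordinary use.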
The company is also sharing indicators with other AI labs and infrastructure providers to help the broader ecosystem improve its ability to spot and thwart similar campaigns.
Anthropic’s broader stance highlights a growing industry concern: as models become more powerful and commercially valuable, protecting them from misuse or theft — whether for competitive advantage or malicious activity — is becoming a strategic priority.
As the dust settles around this announcement, the AI industry is likely to see heightened scrutiny on how models are accessed, how training data is sourced, and how capabilities can be protected without stifling innovation.
Anthropic’s call for coordinated defense strategies, policy intervention, and shared detection tools may mark a turning point in how companies safeguard proprietary AI technology — and how global AI competition is regulated going forward.