TL;DR
- San Francisco-based Anthropic has leveled serious allegations against three Chinese artificial intelligence companies — DeepSeek, Moonshot, and MiniMax — claiming they orchestrated systematic “distillation attacks” targeting its Claude AI assistant.
- The companies allegedly established 24,000 fraudulent user accounts and initiated more than 16 million interactions with Claude to replicate its capabilities in their own systems.
- The companies reportedly used commercial proxy networks to circumvent Anthropic’s geographic restrictions, which block Claude usage within China.
- Among the accused, MiniMax reportedly generated the highest activity volume, responsible for approximately 13 million of the total 16 million exchanges.
- Anthropic is urging collaborative action from the artificial intelligence sector, infrastructure providers, and regulatory authorities to address this threat.
Anthropic has publicly identified three Chinese artificial intelligence enterprises as perpetrators of systematic efforts to replicate its Claude AI model’s intelligence through a practice known as “distillation.”
The companies under scrutiny are DeepSeek, Moonshot AI, and MiniMax. According to Anthropic’s findings, these operations spanned approximately 24,000 illegitimate accounts and resulted in more than 16 million individual interactions with Claude.
Distillation is a machine learning technique in which a smaller or less capable model learns by training on the outputs of a more capable one. Anthropic considers the methodology acceptable for internal development, but maintains it becomes problematic when rivals exploit it for competitive advantage.
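For readers unfamiliar with the mechanics, here is a minimal sketch of distillation in its classic form (an illustration of the general technique, not the pipelines of any company named in this article): a student model is trained to match a teacher model's softened output distribution.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's:
    the quantity a student model minimizes during classic distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Minimizing this loss over many teacher responses pushes the student toward the teacher's behavior; when the student's logits match the teacher's exactly, the loss is zero. This is why large volumes of captured outputs are valuable: each response is, in effect, a training label.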
“Distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost,” Anthropic wrote in its blog post on Sunday.
Anthropic’s user agreement explicitly prohibits commercial Claude deployment within China. To circumvent these limitations, the three organizations reportedly leveraged commercial proxy infrastructure to simultaneously operate thousands of Claude accounts across distributed networks.
Once the accounts were established, the companies sent large volumes of carefully designed prompts intended to elicit specific capabilities from Claude. The captured outputs were then used to train their own models or as datasets for reinforcement learning.
The extraction efforts specifically targeted Claude’s most sophisticated capabilities, including autonomous reasoning, tool use, software development, analytical processing, and visual interpretation.
MiniMax Led the Traffic
Among the three organizations, MiniMax demonstrated the most aggressive extraction activity. Anthropic reports that MiniMax independently generated over 13 million interactions from the 16 million total documented.
DeepSeek, Moonshot AI, and MiniMax all maintain headquarters in China and collectively represent billions in market valuation. None of the organizations provided responses to media inquiries seeking their perspective.
Anthropic stated it confirmed the companies’ involvement through IP address pattern analysis, request signature examination, infrastructure fingerprinting, and intelligence shared by industry collaborators who observed identical actors targeting alternative platforms.
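Anthropic has not published its detection methods, but one of the signals it names, request-signature analysis, can be illustrated with a toy heuristic (entirely hypothetical; the function, its inputs, and the threshold are assumptions for the sketch): if many distinct accounts submit requests sharing the same signature, such as a common prompt template or client fingerprint, that signature is worth flagging as possible coordination.

```python
from collections import defaultdict

def flag_coordinated_signatures(requests, min_accounts=3):
    """Toy request-signature heuristic. `requests` is a list of
    (account_id, signature) pairs, where a signature might be a hash of the
    prompt template plus a client fingerprint. Returns signatures shared by
    at least `min_accounts` distinct accounts."""
    accounts_by_sig = defaultdict(set)
    for account_id, signature in requests:
        accounts_by_sig[signature].add(account_id)
    return {sig: accounts for sig, accounts in accounts_by_sig.items()
            if len(accounts) >= min_accounts}
```

Real detection would combine several such signals (IP ranges, timing patterns, infrastructure fingerprints) rather than rely on any single one, since legitimate users can also share client software.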
Anthropic Is Not Alone
Anthropic represents the latest among American AI developers to highlight this concern. OpenAI submitted correspondence to United States legislators earlier this month, asserting it had detected patterns “indicative of ongoing attempts by DeepSeek to distill frontier models.”
OpenAI initially raised distillation warnings in early 2024, following DeepSeek’s inaugural model release, which observers noted bore striking similarities to ChatGPT.
Anthropic indicates it will counter these threats by enhancing detection capabilities, reinforcing access authentication protocols, and distributing threat intelligence across industry partners.
The company is also calling for a coordinated response from the wider AI industry, cloud providers, and policymakers. “No company can solve this alone,” Anthropic wrote.
Industry experts consulted by CNBC observed that distinguishing between acceptable and unacceptable distillation practices involves complexity, emphasizing the importance of careful evaluation when assessing such allegations.