Anthropic Detects Industrial-Scale Distillation Attacks on Claude
Action Required
Unauthorized extraction of Claude's capabilities by competing labs creates a significant security risk and undermines Anthropic's efforts to maintain a secure and reliable AI platform.
AI Impact Summary
Anthropic is addressing a significant security threat: industrial-scale distillation attacks by AI labs seeking to illicitly extract capabilities from Claude. Three labs (DeepSeek, Moonshot, and MiniMax) have been identified conducting these attacks, generating millions of exchanges with Claude to train their own less capable models. This poses a national security risk, as dangerous capabilities could proliferate without safeguards, particularly through open-sourced distilled models.
Affected Systems
- Date: 23 Feb 2026
- Change type: capability
- Severity: critical