Anthropic, the artificial intelligence company, announced on Monday that it had identified what it described as “industrial-scale campaigns” by three Chinese AI firms to improperly copy its Claude model. The company stated that DeepSeek, Moonshot AI, and MiniMax used roughly 24,000 fraudulent accounts to submit over 16 million queries to its large language model, violating its terms of service.
Details of the Alleged Model Distillation
According to Anthropic, the activity constituted a “distillation attack,” a technique where one model’s outputs are used extensively to train or improve another. The campaigns were reportedly designed to extract Claude’s capabilities to enhance the competing firms’ own AI models. The scale of the operation, involving millions of exchanges, suggests a coordinated effort to replicate advanced AI functionalities.
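To illustrate the technique described above: in distillation, a “teacher” model is queried at scale and its responses are saved as supervised training pairs for a smaller “student” model. The sketch below is purely hypothetical and uses a stand-in function in place of any real API; the function names and prompts are invented for illustration.

```python
# Hypothetical sketch of assembling a distillation dataset.
# A real campaign would replace teacher_model() with calls to a
# commercial model's API; here it is a local stand-in function.

def teacher_model(prompt: str) -> str:
    """Stand-in for a large 'teacher' model's API response."""
    return f"Answer to: {prompt}"

def collect_distillation_pairs(prompts):
    """Query the teacher and record (prompt, response) pairs,
    which would later serve as training examples for a 'student' model."""
    return [(p, teacher_model(p)) for p in prompts]

# Example prompts (invented for illustration)
prompts = ["Explain photosynthesis.", "Summarize the French Revolution."]
dataset = collect_distillation_pairs(prompts)
```

Repeated millions of times across many accounts, this loop yields a large corpus that captures the teacher model’s style and knowledge, which is why providers typically prohibit it in their terms of service.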
The company detected the activity through monitoring systems designed to identify misuse of its application programming interfaces (APIs). Anthropic’s terms of service explicitly prohibit using its services to develop competing models or for any form of model distillation without explicit permission.
Background on the Companies Involved
Anthropic is a prominent U.S.-based AI safety and research company known for developing the Claude family of AI assistants. The three named Chinese firms, DeepSeek, Moonshot AI, and MiniMax, are all significant players in China’s rapidly growing artificial intelligence sector. These companies have developed their own competing large language models and are engaged in a highly competitive global AI race.
Incidents involving alleged intellectual property appropriation in technology sectors, particularly between U.S. and Chinese firms, have been a recurring point of tension. The AI industry, where model training data and architecture are closely guarded assets, is especially sensitive to such allegations.
Potential Implications and Industry Context
This allegation highlights the intense competition and security concerns within the global artificial intelligence landscape. As AI models become increasingly valuable, protecting their underlying technology and training methodologies has become a top priority for developers. Model distillation, while a known technique in machine learning research, becomes contentious when conducted at scale without authorization.
The reported scale of 16 million queries represents a significant computational and data extraction effort. Such volume could potentially be used to create detailed training datasets aimed at mimicking Claude’s performance, reasoning style, and knowledge base.
Next Steps and Official Response
Anthropic has stated it terminated the fraudulent accounts involved in the alleged campaigns. The company is likely to review its API security and monitoring protocols to prevent similar incidents. Industry observers will be watching for any formal responses or statements from DeepSeek, Moonshot AI, and MiniMax regarding the specific allegations.
Legal or regulatory repercussions may depend on the specific jurisdictions involved and the details of the terms of service violations. The incident may also influence ongoing policy discussions about AI ethics, intellectual property protection for AI models, and international collaboration norms in artificial intelligence development.
Source: Various reports