
Anthropic Accuses Chinese AI Firms of Large-Scale Data Theft

The artificial-intelligence company Anthropic has publicly accused three Chinese AI firms of conducting industrial-scale campaigns to steal its proprietary technology. The alleged operation involved the creation of approximately 24,000 fraudulent accounts to mask what Anthropic describes as “distillation attacks” aimed at illicitly replicating its AI models.

Details of the Alleged Campaign

In a detailed blog post, Anthropic outlined the methods used in the alleged attacks. The company named DeepSeek, a prominent Chinese AI developer, among the firms involved. According to Anthropic, these entities systematically created tens of thousands of fake user accounts to gain access to its AI systems. The primary goal, as stated by Anthropic, was to perform model distillation, a technique where a smaller, less capable model is trained to mimic the outputs of a larger, more sophisticated one, effectively copying its functionality.

Anthropic’s security team detected patterns of activity consistent with automated attempts to query its AI models at high volume. This activity was designed to harvest enough input-output data to replicate the core capabilities of Anthropic’s systems. The use of a massive network of fraudulent accounts was intended to circumvent standard rate limits and detection mechanisms that protect against such data extraction.

What is a Distillation Attack?

A distillation attack, or model extraction attack, is a security threat specific to machine learning. In this scenario, an adversary repeatedly queries a publicly available AI model, typically through its API. By collecting a vast dataset of the model’s responses to various prompts, the attacker can then train their own, separate model to produce similar results. This process can potentially replicate valuable intellectual property without direct access to the underlying code or architecture.
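The mechanics can be illustrated with a toy sketch. Here the “proprietary model” is stood in for by a hidden linear function the attacker can only query, not inspect; the attacker harvests input-output pairs and fits a “student” model to them. This is purely illustrative and has no connection to Anthropic’s actual systems or the alleged attackers’ methods.

```python
import random

# Stand-in for a proprietary model: the attacker can only query it,
# not inspect its parameters (here, a hidden linear function).
def teacher(x):
    return 3.0 * x + 1.0  # internals unknown to the attacker

# Step 1: harvest input-output pairs by querying the black box at volume.
queries = [random.uniform(-10, 10) for _ in range(1000)]
dataset = [(x, teacher(x)) for x in queries]

# Step 2: fit a "student" to the harvested data (ordinary least squares).
n = len(dataset)
sx = sum(x for x, _ in dataset)
sy = sum(y for _, y in dataset)
sxx = sum(x * x for x, _ in dataset)
sxy = sum(x * y for x, y in dataset)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# The student now mimics the teacher without ever seeing its internals.
def student(x):
    return slope * x + intercept
```

Real-world extraction targets models with billions of parameters rather than two, but the principle is the same: enough query-response pairs let an attacker train a mimic without touching the original weights.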

For AI companies, their trained models represent a significant investment in computational resources, data, and research. Unauthorized extraction undermines their competitive advantage and business model. Anthropic’s allegations suggest these attacks were not isolated incidents but part of a coordinated, large-scale effort.

Industry Context and Reactions

The allegations emerge amid intense global competition in artificial intelligence development. Tensions over technology transfer and intellectual property protection have been a persistent issue between the United States and China in the tech sector. Anthropic’s public disclosure is a rare, detailed account of alleged corporate espionage in the AI domain.

As of now, the named Chinese AI firms, including DeepSeek, have not issued public statements in response to the specific allegations made by Anthropic. The broader industry has long been aware of the theoretical risk of model extraction, but Anthropic’s report provides a concrete case study of its execution on a massive scale.

Security researchers note that defending against such attacks is challenging. It requires distinguishing between legitimate high-volume usage by real customers and malicious, automated queries designed for data harvesting. Companies often rely on a combination of rate limiting, behavior analysis, and monitoring for suspicious patterns.
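One of the defenses mentioned above, per-account rate limiting over a sliding window, can be sketched in a few lines. The window size and threshold here are illustrative placeholders, not Anthropic’s actual values, and a production system would combine this with behavior analysis across accounts to catch attackers who rotate through many identities.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds only -- real services tune these per tier.
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100

class RateLimiter:
    """Sliding-window rate limiter keyed by account ID."""

    def __init__(self):
        # account_id -> timestamps of recent requests, oldest first
        self.history = defaultdict(deque)

    def allow(self, account_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[account_id]
        # Evict timestamps that have fallen out of the sliding window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_REQUESTS_PER_WINDOW:
            return False  # throttled
        q.append(now)
        return True
```

The weakness such schemes face, and the one Anthropic alleges was exploited, is that per-account limits say nothing about an attacker who spreads queries across tens of thousands of fraudulent accounts, which is why cross-account pattern detection matters.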

Potential Implications and Next Steps

Anthropic stated it has taken steps to terminate the fraudulent accounts and strengthen its defenses against similar future campaigns. The company indicated it is implementing more advanced detection systems to identify and block activity consistent with distillation attempts.

The public accusation may lead to increased scrutiny of data security practices across the AI industry. Other companies providing access to powerful models via APIs may reassess their own vulnerability to similar extraction attacks. This incident highlights a fundamental tension in the business of AI: balancing open access for developers and users with the need to protect core intellectual assets.

Formal legal or regulatory action may follow, though none has been announced. The case could influence ongoing policy discussions regarding international norms for AI development and the protection of digital intellectual property. Industry observers expect AI firms worldwide to bolster their technical and legal safeguards as model extraction moves from a theoretical threat to a demonstrated commercial risk.

Source: Anthropic