Anthropic Accuses Chinese AI Firms of "Industrial-Scale" Model Theft

Anthropic AI has publicly accused three Chinese AI companies—DeepSeek, Moonshot AI, and MiniMax—of conducting "industrial-scale distillation attacks" on its Claude models. The company claims the firms created tens of thousands of fraudulent accounts to generate millions of interactions, illicitly extracting Claude's capabilities to train their own systems. Anthropic warned that such actions could allow foreign labs to remove AI safeguards and funnel advanced capabilities into military or surveillance applications. The accused Chinese firms have not yet issued an official response to these serious allegations.

Key Points: Anthropic Claims Chinese AI Firms Stole Claude's Capabilities

  • 24,000 fraudulent accounts created
  • Over 16 million exchanges with Claude
  • Extracted capabilities to train rival models
  • Raises national security concerns over safeguards
  • No laws currently govern such activities

Anthropic AI alleges DeepSeek, Moonshot AI, and MiniMax ran "industrial-scale distillation attacks" on Claude, extracting its capabilities via fraudulent accounts.

Washington DC, February 24

Anthropic AI claims that it has identified "industrial-scale distillation attacks" on its own reasoning models by DeepSeek, Moonshot AI and MiniMax.

In a post on X, Anthropic shared, "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models."

"Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems," the company added.
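Anthropic's post does not describe the attackers' techniques. As general background only, distillation typically trains a smaller "student" model to mimic a larger "teacher" by matching the teacher's softened output probabilities. The pure-Python sketch below illustrates the standard distillation loss; all names are illustrative and do not reflect any lab's actual code.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this over many prompts trains the student to reproduce
    the teacher's behavior -- the essence of distillation.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# The closer the student's outputs track the teacher's, the smaller the loss.
teacher = [3.0, 1.0, 0.2]
close_student = [2.9, 1.1, 0.3]
distant_student = [0.2, 1.0, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, distant_student)
```

In practice, labs run this kind of objective over large batches of teacher responses; Anthropic's allegation is that the 16 million Claude exchanges served as exactly such training data.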

"These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community," the statement concluded.

Anthropic is an artificial intelligence research company founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. The company focuses on building reliable, interpretable, and steerable AI systems with a strong emphasis on safety. Anthropic is best known for developing the Claude family of large language models, designed to be helpful, honest, and harmless. Its research centers on "constitutional AI," a method that guides models using explicit principles rather than relying solely on human feedback. Backed by major investors such as Google and Amazon, Anthropic aims to advance AI technology responsibly while minimizing societal and ethical risks.

DeepSeek is a Chinese artificial intelligence company, founded in 2023, known for developing powerful open-weight large language models (LLMs) and the DeepSeek chatbot. Its models, including DeepSeek-R1 and DeepSeek-V3, use efficient architectures such as mixture-of-experts to deliver high performance at much lower computational cost than many Western AI systems. DeepSeek's chatbot became one of the most downloaded AI apps in early 2025 and is praised for strong reasoning, long-context understanding, and multilingual capabilities. However, it has also attracted regulatory scrutiny and national security concerns in several countries over its data privacy and training practices.

DeepSeek, Moonshot AI, and MiniMax have yet to issue official statements countering Anthropic's claims.

Anthropic's allegations have nonetheless placed the three Chinese AI firms under intense public scrutiny.

Currently, no laws in either the United States or China specifically govern such activities.

- ANI

Reader Comments

Priya S
The scale is shocking - 24,000 fake accounts and 16 million exchanges! 😳 It feels like industrial espionage in the digital age. While competition drives innovation, this crosses a line. Hope this leads to proper international regulations soon.
Rohit P
Interesting timing for this claim. The US and China are in a full-blown tech cold war, and AI is the main battlefield. We in India need to be very careful about which technologies we adopt and from where. Data sovereignty is non-negotiable.
Sarah B
As someone working in tech, the technical feat here is immense, but the ethics are deeply troubling. "Removing safeguards" to feed models into military systems is a terrifying prospect. The global AI community needs to come together on this, fast.
Vikram M
Let's wait for DeepSeek's response. Right now it's one side of the story. The article says there are no laws governing this. In that legal vacuum, companies will push boundaries. India should take a lead in proposing global AI governance rules at forums like the G20.
Karthik V
Respectfully, Anthropic's statement feels a bit alarmist and plays into the "foreign threat" narrative. Distillation is a common technique. The mention of "military and surveillance" feels like fear-mongering without presenting concrete evidence. A more balanced report would be better.
