Anthropic Blocks Chinese Communist Party Firms from AI Access, Forfeits Revenue

Anthropic has terminated access to its Claude AI model for entities connected to the Chinese Communist Party, including some designated as Chinese Military Companies. The company stated this decision meant forgoing several hundred million dollars in revenue to protect America's technological advantage. Anthropic is also resisting pressure from the Department of War to remove specific AI safeguards, such as those against mass domestic surveillance. The firm continues to support US national security applications while maintaining its self-imposed ethical guardrails.

Key Points: Anthropic Cuts Off CCP-Linked Firms from Claude AI

  • Blocked CCP-linked firms from Claude AI
  • Forfeited hundreds of millions in revenue
  • Shut down CCP-sponsored cyberattacks
  • Advocated for strong chip export controls

Anthropic cuts off Claude access to China's Communist Party-linked firms

US AI firm Anthropic blocks CCP-linked entities from using Claude, forgoing hundreds of millions in revenue to safeguard US tech lead and national security.

"We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party" - Dario Amodei

Washington DC, February 27

US-based artificial intelligence company Anthropic said it had cut off access to its AI model, Claude, for firms linked to the Chinese Communist Party, forgoing several hundred million dollars in revenue as part of efforts to safeguard America's technological lead.

In a statement issued on Thursday, Anthropic CEO Dario Amodei said the company chose to block the use of Claude by entities connected to the CCP, including some designated by the US Department of War as Chinese Military Companies, and that the company had shut down CCP-sponsored cyberattacks that attempted to misuse the AI system.

Amodei further stated that the AI firm has advocated for strong export controls on advanced chips to help maintain a democratic advantage in artificial intelligence.

"Anthropic has also acted to defend America's lead in AI, even when it is against the company's short-term interest. We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party (some of whom have been designated by the Department of War as Chinese Military Companies), shut down CCP-sponsored cyberattacks that attempted to abuse Claude, and have advocated for strong export controls on chips to ensure a democratic advantage," the statement from Anthropic read.

The company said it has worked proactively with the US Department of War and the intelligence community, deploying its models within classified government networks and at National Laboratories.

According to the statement, Claude is being used across national security agencies for applications such as intelligence analysis, modelling and simulation, operational planning, and cyber operations.

However, Anthropic reiterated that it would not support certain uses of AI, including mass domestic surveillance and fully autonomous weapons, citing concerns about democratic values and the current reliability of frontier AI systems.

Amodei further stated that despite pressure from the Department of War to agree to "any lawful use" of its technology and remove specific safeguards, the company would not change its position.

"The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk"--a label reserved for US adversaries, never before applied to an American company--and to invoke the Defense Production Act to force the safeguards' removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," the statement read.

"Regardless, these threats do not change our position: we cannot in good conscience accede to their request," it added.

Anthropic further noted that it remains ready to continue supporting US national security efforts while maintaining what it described as necessary guardrails on the deployment of its AI systems.

- ANI


Reader Comments

Priya S
Interesting to see a private company taking such a strong ethical stand, even against its own government's pressure. The part about refusing to support mass surveillance and autonomous weapons is commendable. Hope Indian tech firms are watching and learning about responsible AI development.
Rohit P
The tech cold war is heating up. While I understand the security concerns, completely cutting off access might just push China to double down on its own AI, making them completely self-reliant in the long run. A more nuanced approach might have been better.
Sarah B
From an Indian perspective, this is a crucial lesson. We cannot afford to be dependent on foreign AI for critical national security functions. The work with DRDO and our own labs on AI for defence must be accelerated and properly funded. Jai Hind!
Vikram M
Several hundred million dollars is a huge amount to walk away from. Shows they are serious. But the internal tussle with the US Department of War is worrying. Should a company's ethics be overruled by the state? A complex debate.
Karthik V
Respectfully, while the intent to protect tech is clear, calling it a "democratic advantage" feels a bit like propaganda. Many countries, including India, need to navigate relationships with both the US and China. This binary "us vs them" framing isn't helpful for the Global South. We need technology partnerships, not new digital iron curtains.

