Anthropic AI Defies US War Dept Over AI Ethics, Calls Supply-Chain Risk Label "Legally Unsound"

Anthropic AI has publicly condemned the U.S. Department of War's decision to designate it a supply-chain risk, calling the move legally unsound and dangerous. The conflict centers on Anthropic's refusal to allow its Claude AI model to be used for mass domestic surveillance and fully autonomous weapons systems. Secretary of War Pete Hegseth accused the company of arrogance and betrayal, ordering a transition away from its technology. Anthropic maintains its ethical stance is non-negotiable, despite government pressure and the new restrictions.

Key Points: Anthropic AI Clashes with US Over AI Use for Surveillance, Weapons

  • AI firm defies government over ethical use
  • Labeled a national security supply-chain risk
  • Refuses AI for mass surveillance & autonomous weapons
  • Calls government action legally unprecedented
  • Accuses Secretary of overstepping authority

Dario Amodei-led Anthropic AI calls US Dept of War's supply-chain risk designation "legally unsound"

Anthropic AI challenges the US Department of War's supply-chain risk designation, refusing to allow its AI for mass domestic surveillance or autonomous weapons.

"No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons." - Anthropic AI

Washington DC, February 28

The face-off between Dario Amodei-led Anthropic AI and the US administration intensified on Friday. First, Secretary of War Pete Hegseth launched into a diatribe against Anthropic, accusing the company of duplicity; then it was Anthropic's turn. In its statement, Amodei's company called the US administration's decision to label it a supply-chain risk legally unsound.

"Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk. This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons. We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above. To the best of our knowledge, these exceptions have not affected a single government mission to date," Anthropic said in its statement.

"We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights. Designating Anthropic as a supply chain risk would be an unprecedented action--one historically reserved for US adversaries, never before publicly applied to an American company...We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government. No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons."

The company further alleged that Hegseth does not have the authority to designate Anthropic a supply-chain risk.

"Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts--it cannot affect how contractors use Claude to serve other customers," it said.

Earlier, after President Trump ordered all federal agencies to stop using Anthropic AI, Hegseth directed the Department of War to designate Anthropic a supply-chain risk.

"This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every Lawful purpose in defense of the Republic. Instead, Anthropic AI and its CEO Dario Amodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of 'effective altruism,' they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives," he posted on X.

"In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," he added.

Earlier, Amodei, in a statement on Thursday, said that the firm would not support certain uses of AI, including mass domestic surveillance and fully autonomous weapons, citing concerns about democratic values and the current reliability of frontier AI systems.

Amodei stated that despite pressure from the Department of War to agree to "any lawful use" of its technology and remove specific safeguards, the company would not change its position.

- ANI


Reader Comments

Priya S
Wow, calling the US "Department of War" is quite a throwback! 😅 On a serious note, the core issue is critical. Fully autonomous weapons are a terrifying prospect. As a tech professional in Bengaluru, I appreciate a company drawing ethical lines. National security is important, but not at the cost of creating Skynet.
Rohit P
While I respect Anthropic's stance, calling the government's move "legally unsound" in a public spat might backfire. In a country like India, we've seen how public fights with the establishment can go. Negotiate privately, build consensus. This public drama helps no one and puts their entire business at risk. A bit naive, I feel.
Sarah B
The language from Secretary Hegseth is incredibly aggressive – "arrogance, betrayal, duplicity." It reads more like a personal feud than policy. This kind of polarization, where you're either 100% with the government or an enemy, is dangerous for any democracy, be it the US or India. Chilling precedent.
Vikram M
From an Indian strategic perspective, this is a case study. The US is weaponizing its economic power against its own companies. Imagine if our government did this to TCS or Infosys for not complying on a contentious issue. It shows the immense pressure tech firms face globally, regardless of where they are headquartered.
Nisha Z
Good for Anthropic! Standing against mass surveillance is the right thing to do. We in India should support such ethical positions in tech. After all, if a powerful company in America can be bullied into creating surveillance tools, what's stopping those
