Pentagon Threatens to Cut Ties with Anthropic Over AI Military Use Limits

The Pentagon is reportedly considering ending its relationship with AI company Anthropic due to the firm's insistence on maintaining limitations for military use of its AI models. Anthropic's Claude AI was used in a U.S. military operation targeting former Venezuelan President Nicolas Maduro through a partnership with Palantir. The military is pushing Anthropic and other AI companies, including OpenAI, Google, and xAI, to allow their tools to be used for "all lawful purposes" without restrictions. Meanwhile, fears of AI automation replacing software business functions have triggered a significant sell-off in tech stocks, dubbed the "SaaSpocalypse."

Key Points

  • Pentagon may end Anthropic AI partnership
  • Claude used in operation against Nicolas Maduro
  • Contract valued up to $200 million
  • Military pushes for "all lawful purposes" use
  • AI automation fears trigger "SaaSpocalypse"

Anthropic in eye of storm as Pentagon threatens to stop using its AI models: Report

The Pentagon considers severing ties with Anthropic over restrictions on military AI use, impacting Claude's role in operations and national security.


New Delhi, Feb 16

US-based AI company Anthropic is at the centre of a deepening controversy, as the Pentagon is reportedly considering severing ties with the Dario Amodei-run firm over its insistence on "maintaining some limitations" on how the US military uses its AI models.

Anthropic's AI model Claude was reportedly used in the US military's operation to capture former Venezuelan President Nicolas Maduro (through its partnership with AI software firm Palantir), as per a Wall Street Journal report.

Now, the Pentagon is reportedly "considering ending its relationship with artificial intelligence company Anthropic", reports Axios, citing an administration official.

"Everything's on the table, including dialling back the partnership with Anthropic or severing it entirely," the official was quoted as saying.

An Anthropic spokesperson said it remains "committed to using frontier AI in support of U.S. national security."

However, it would be difficult for the US military to immediately replace Claude, because "the other model companies are just behind", the report noted.

Anthropic signed a contract valued at up to $200 million with the Pentagon last year.

According to reports, the Pentagon is pushing four AI companies to let the military use their tools for "all lawful purposes". The other companies are OpenAI, Google and Elon Musk's xAI.

Claude is a next-generation AI assistant built by Anthropic and trained to be safe, accurate and secure.

Meanwhile, global concerns over software stocks spilled over into Indian IT stocks after Anthropic expanded its enterprise AI assistant with a new automation layer designed to handle complete business workflows. Investor caution about artificial intelligence replacing significant portions of the software business triggered a massive sell-off now known as the "SaaSpocalypse."

The new AI assistant could automate legal document reviews, compliance checks, sales planning, marketing campaign analysis, financial reconciliation, data visualisation, SQL-based reporting and enterprise-wide document search, the reports said.

- IANS


Reader Comments

Rohit P
The part about Indian IT stocks getting hit is worrying. Our tech sector is already facing headwinds. If this "SaaSpocalypse" talk spreads, it could impact jobs here. Hope our companies are preparing.
David E
$200 million contract on the line! The Pentagon's demand for "all lawful purposes" is vague. What's lawful for them might not be ethical. Good on Anthropic for pushing back, even if it costs them.
Aditya G
The automation capabilities listed at the end are insane. Legal docs, compliance, financial work... This is the real story for India. We need to skill up fast, or a lot of BPO and IT services work could vanish. 😟
Sarah B
While I respect the ethical stance, the spokesperson's statement feels contradictory. "Committed to using AI for national security" but putting limits? They need clearer, more transparent principles.
Karthik V
Used to capture Maduro? That's a direct military action. No wonder they want limits. This tech is becoming a strategic asset. India should be developing its own sovereign AI capabilities for defence, not just relying on foreign tech.
