Pentagon Picks OpenAI Over Anthropic for Classified AI Deployment

The U.S. Department of Defense has finalized an agreement to deploy OpenAI's artificial intelligence models on its classified network. OpenAI CEO Sam Altman confirmed the deal, emphasizing the Pentagon's respect for safety and shared goals. The arrangement includes technical safeguards and follows OpenAI's core principles prohibiting domestic mass surveillance and ensuring human responsibility for the use of force. This decision comes as the Pentagon distances itself from Anthropic due to disagreements over limits on autonomous weapons and surveillance.

Key Points: Pentagon Deploys OpenAI Models on Classified Network

  • Pentagon chooses OpenAI for classified network
  • Disagreement with Anthropic on military AI use
  • OpenAI stresses safety, bans mass surveillance
  • Humans must remain responsible for use of force

New Delhi, Feb 28

The United States Department of Defense has decided to deploy OpenAI's artificial intelligence models on its classified network, even as it distances itself from Anthropic over disagreements on AI safety and military use, OpenAI chief Sam Altman said on Saturday.

Altman confirmed the development, saying the company has reached an agreement with the Pentagon to move forward with the deployment.

In a post on X, Altman said OpenAI's discussions with the Department of Defense showed "deep respect for safety" and a shared goal of achieving the best possible outcome.

Referring to the department as the "Department of War" (DoW), he added that OpenAI remains committed to serving humanity, while acknowledging that the world is "complicated, messy, and sometimes dangerous."

"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome," Altman stated.

Altman said OpenAI continues to prioritise AI safety and the wide distribution of benefits. He stressed that two of the company's core safety principles are a ban on domestic mass surveillance and ensuring that humans remain responsible for the use of force, including in autonomous weapon systems.

"AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," he added.

According to him, these principles have not been compromised in the deal with the Pentagon. He said the Department of Defense agrees with these principles and reflects them in its laws and policies, and that they are included in the final agreement.

"The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman mentioned.

"We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place," he stated.

As part of the arrangement, OpenAI will build technical safeguards to ensure its models behave as intended.

The company will also station deployment engineers in the field to support the models and ensure their safe use. Altman added that the models will be deployed only on secure cloud networks.

The Pentagon's decision comes amid a public clash with Anthropic, the maker of the Claude AI model.

According to reports, the Defense Department had pushed for full military use of AI tools for all lawful purposes, including in sensitive areas such as weapons development, intelligence gathering and battlefield operations.

Anthropic had reportedly insisted on limits, particularly around fully autonomous weapons and mass surveillance of Americans.

- IANS

Reader Comments

Priya S
The "human responsibility for use of force" principle is crucial. But once you give a powerful tool to the military, can you really control how it's used? OpenAI's assurances sound good, but the track record of such partnerships is not always clean.
Rohit P
Interesting that Anthropic stood its ground on ethical limits. Respect for that. Meanwhile, OpenAI calling it the "Department of War" is quite telling, no? The branding is "Defense" but the intent is clear. This is a wake-up call for our own DRDO and defense tech.
Sarah B
As someone working in tech, the technical safeguards and field deployment engineers part is key. It's not just about selling the model. But the real test will be during an actual crisis. Will the AI's behavior be predictable then?
Vikram M
The world is indeed complicated and messy. While we debate ethics, our neighbors are not sleeping. India must have a clear, pragmatic AI-for-defense policy. Self-reliance in this field is non-negotiable for a nation of our size and challenges.
Nisha Z
A bit concerning. The article says Pentagon wanted use in "weapons development" and "battlefield operations". OpenAI says no autonomous weapons, but where is the line? If a model helps design a more lethal weapon, is that okay? The principles seem open to interpretation.
