Anthropic Refuses Pentagon Order to Strip Claude AI Safety Features

TLDR

  • Pentagon pressures Anthropic to strip Claude AI of safety limitations for unrestricted defense use, including autonomous weapons systems and mass surveillance.
  • Dario Amodei, Anthropic’s CEO, refuses the demand, warning of threats to democratic values.
  • Defense officials set a Friday 5 p.m. deadline for compliance or removal from military contracts.
  • Pentagon threatens Defense Production Act invocation and supply chain security designation.
  • Contract modifications submitted Wednesday evening rejected by Anthropic as insufficient.

Anthropic’s CEO Dario Amodei continues to resist Pentagon demands to remove safety protocols from the Claude artificial intelligence system, even at the risk of losing a lucrative government contract. Defense officials have set a Friday deadline, demanding the company agree to allow “any lawful use” of its technology.

The dispute centers on two specific use cases: utilizing Claude for mass domestic surveillance programs and powering fully autonomous weapons systems. Anthropic maintains that these applications were never part of their original Pentagon agreements and represent an unacceptable expansion of scope.

Amodei met with Defense Secretary Pete Hegseth earlier this week. The discussions failed to produce an agreement, leading the Pentagon to deliver revised contract language Wednesday evening.

Anthropic rejected the modifications. A company spokesperson said they showed “virtually no progress” and included language allowing safety restrictions to “be disregarded at will.”

Defense officials have issued stark warnings. They’ve threatened to exclude Anthropic from all defense contracts and label the company a “supply chain risk” — a designation typically reserved for entities from hostile nations.

A senior Pentagon official also told Reuters that Secretary Hegseth is considering invoking the Defense Production Act. This law allows the government to force private companies to support national security projects, with or without their consent. Constitutional experts have questioned whether such use of the law would be legally valid.

What Anthropic Says About AI Weapons and Surveillance

In a public statement, Amodei contended that today’s AI systems are “simply not reliable enough to power fully autonomous weapons.” He stressed that deploying them without human oversight puts both soldiers and civilians at risk.

On surveillance, he warned that AI can “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.”

Anthropic said it supports AI use for legitimate foreign intelligence gathering, but draws the line at domestic surveillance operations.

Pentagon officials pushed back on these objections, with Undersecretary Emil Michael stating that the uses concerning Anthropic are already banned under current laws and military policies. Michael confronted Amodei on X, accusing him of wanting “nothing more than to try to personally control the US Military.”

The Business Risk for Anthropic

The financial stakes are considerable. During the past year, the Pentagon has signed $200 million in framework contracts with major AI firms including Anthropic, OpenAI, and Google.

If Anthropic receives a supply chain risk designation, defense contractors like Lockheed Martin would be barred from using Anthropic’s products on Department of Defense projects. The defense contractor network includes roughly 60,000 businesses.

Amodei said Anthropic offered to work with defense officials on research initiatives to improve AI reliability for military purposes, but the offer was rejected.

As of Thursday night, neither side had budged, with the 5:01 p.m. Friday deadline still in place.