Major Developments
- The Pentagon ordered an immediate shutdown of Anthropic’s AI systems throughout federal agencies, designating the company a supply-chain security risk.
- OpenAI swiftly secured a military contract to deploy its AI technology in classified Pentagon systems, hours after Anthropic's contract was abruptly terminated.
- A potential $200 million defense contract with Anthropic collapsed after the company refused to authorize its AI for autonomous weapons and mass surveillance applications.
- OpenAI asserts its contract includes the same restrictions Anthropic sought, but questions remain about enforcement and compliance.
- Anthropic is preparing litigation challenging the supply-chain designation, calling it legally baseless.
The U.S. government terminated its collaboration with Anthropic on Friday, designating the artificial intelligence company as a supply-chain security risk. Hours later, rival firm OpenAI announced a new deal to deploy its AI systems within Pentagon classified networks.
President Donald Trump issued an executive directive requiring all federal agencies to immediately discontinue use of Anthropic’s products. Agencies currently running the company’s Claude AI platform received a six-month deadline to transition to approved alternatives.
Defense Secretary Pete Hegseth announced on X that Anthropic poses a “Supply-Chain Risk to National Security.” Such designations are generally reserved for companies tied to adversarial nations like China.
The ramifications extend well beyond direct government partnerships. Defense contractors and Pentagon partners may need to verify complete removal of Claude from their technology stacks. Tech giants including Nvidia, Amazon, and Google maintain significant investment stakes and partnerships with Anthropic.
Anthropic previously broke ground as the first AI company to deploy its models on Pentagon secure computing infrastructure. That July partnership was valued at up to $200 million.
The relationship deteriorated when Anthropic refused to guarantee its AI would be available for all legally authorized military uses. The firm drew hard lines against deploying its technology for autonomous weapons systems and mass domestic surveillance operations.
Military officials argued Anthropic should trust the Pentagon to comply with existing legal constraints. CEO Dario Amodei declared Thursday his company “cannot in good conscience” agree to such terms.
OpenAI Steps Into Pentagon Role
OpenAI CEO Sam Altman announced the defense partnership Friday evening on X. He claimed the agreement contains the exact same prohibitions on mass surveillance and autonomous weapons that Anthropic had demanded.
Altman added that OpenAI has urged the administration to apply identical contractual terms across all AI vendors. Elon Musk’s xAI had already obtained clearance for deployment in classified military environments.
OpenAI President Greg Brockman and his wife donated $25 million to a pro-Trump super PAC last year. They remain financial backers of Trump’s artificial intelligence agenda heading into future elections.
Anthropic Threatens Lawsuit
Anthropic said it was “deeply saddened” by the designation and plans to challenge it in court. The company called the decision “legally unsound” and warned it sets a dangerous precedent for U.S. tech firms negotiating with the government.
The General Services Administration removed Anthropic from its approved vendor list for government procurement.
Some critics directed anger at OpenAI’s involvement. Democratic activist Christopher Hale announced on X he was canceling his ChatGPT subscription and switching to Claude Pro Max.
Anthropic was founded in 2021 by former OpenAI researchers concerned about declining safety commitments. Both companies have raised tens of billions in funding recently and are considering initial public offerings.
The dispute also involved a specific incident. After Claude was used in a Venezuela operation this January, an Anthropic employee reached out to a Palantir contact asking about the deployment. Pentagon officials viewed this inquiry as improper meddling.
Anthropic says the exchange was routine technical coordination between partner companies.