Key Highlights
- Federal authorities designated Anthropic a national security supply-chain threat, mandating all agencies cease using its AI systems immediately.
- Hours later, OpenAI secured a Pentagon agreement to integrate its artificial intelligence models into classified defense networks.
- A $200 million military contract with Anthropic fell apart when the company declined to permit AI deployment for autonomous weaponry or widespread domestic surveillance.
- OpenAI claims its Pentagon arrangement incorporates identical limitations that Anthropic sought, though skeptics doubt the firm’s resolve.
- Anthropic plans legal action against its supply-chain risk classification, describing the decision as lacking legal foundation.
On Friday, the federal government terminated its partnership with AI developer Anthropic, classifying the firm as a supply-chain risk to national security. Hours later, rival OpenAI announced a new agreement to deploy its AI models across the Pentagon's classified networks.
President Donald Trump issued an executive order requiring federal departments to stop adopting Anthropic technology immediately; agencies already running the company's Claude models were given six months to complete the transition to alternative platforms.
Defense Secretary Pete Hegseth declared via X that Anthropic represents a “Supply-Chain Risk to National Security.” This classification typically applies to entities from hostile foreign nations such as China.
The decision threatens Anthropic’s commercial operations beyond government contracts. Defense contractors may now need to demonstrate they’ve eliminated Claude from their systems entirely. The company counts Nvidia, Amazon, and Google among its major investors and strategic partners.
Anthropic had achieved a milestone as the first AI laboratory authorized to operate models within the Pentagon’s secure classified systems. The July agreement held potential value reaching $200 million.
Negotiations collapsed when Anthropic declined to provide assurances that its AI would remain accessible for any legitimate military application. The company established firm boundaries against autonomous weapon systems and large-scale domestic surveillance operations.
Pentagon officials argued that Anthropic should trust the military to comply with existing law rather than impose usage restrictions of its own. CEO Dario Amodei said Thursday that the company "cannot in good conscience" accept such terms.
OpenAI Captures Pentagon Contract
OpenAI CEO Sam Altman announced the new Pentagon partnership Friday evening on X. He said the deal includes the same restrictions on mass surveillance and autonomous weapons that Anthropic had demanded.
Altman added that OpenAI asked the government to extend the same contract terms to competing AI companies. Elon Musk's xAI had previously received authorization to deploy on classified military systems.
OpenAI President Greg Brockman and his spouse donated $25 million to a Trump-aligned political action committee last year, and they continue to fund efforts supporting Trump's AI policy agenda in upcoming elections.
Anthropic Prepares Legal Challenge
Anthropic said it was "deeply saddened" by the classification and intends to challenge it in court. The company called the determination "legally unsound" and warned that it sets a troubling precedent for American technology firms negotiating with the government.
The General Services Administration announced it will eliminate Anthropic from its authorized product catalogs for federal agencies.
Several commentators criticized the timing of OpenAI's announcement. Democratic politician Christopher Hale said on X that he had canceled his ChatGPT subscription and switched to Claude Pro Max.
Anthropic emerged in 2021 when researchers departed OpenAI citing worries about inadequate safety prioritization. Both organizations have recently secured tens of billions in funding and are separately evaluating potential initial public offerings.
The dispute also traces to a specific incident. After Claude was used in a January operation in Venezuela, an Anthropic employee asked a Palantir contact how the technology had been applied. Pentagon leadership viewed the inquiry as inappropriate; Anthropic described it as routine technical coordination between partners.