Key Points
- The Trump administration designated Anthropic as a “supply chain risk,” prompting the AI company to file a federal lawsuit
- This marks the first instance of an American corporation receiving this classification, typically applied to foreign threats
- The conflict originated when Defense Secretary Pete Hegseth ordered Anthropic to eliminate military usage limitations on its AI systems
- The company maintains its stance against deploying technology for autonomous lethal warfare or widespread surveillance operations
- According to Anthropic, the blacklist threatens to eliminate several billion dollars from its projected 2026 earnings
The artificial intelligence firm Anthropic, creator of the Claude AI assistant, has filed suit against the Trump administration in federal court in California. The legal action follows Monday’s formal classification of Anthropic as a “supply chain risk” by the Department of Defense.
This classification requires all Pentagon contractors to verify they are not utilizing Anthropic’s products or services. The designation represents an unprecedented move against a domestic technology company.
The dispute traces back to demands from Defense Secretary Pete Hegseth that Anthropic eliminate all operational constraints on its artificial intelligence platforms. The company rejected this request, maintaining that Claude would not be deployed for autonomous lethal systems or large-scale domestic monitoring programs.
These protective measures were explicitly included in Anthropic’s initial federal agreements. Company representatives emphasized that Claude has “never been tested” for military applications and expressed uncertainty about the system’s reliability in such contexts.
Anthropic secured a $200 million federal contract with the Defense Department in July 2024. The company also became the first AI lab to deploy its technology within the Pentagon’s secure classified infrastructure.
President Trump issued a social media statement ordering all federal departments to “immediately cease” utilizing Anthropic’s products. His statement characterized Anthropic as a “Radical Left AI company.”
Financial Impact
Krishna Rao, Anthropic’s Chief Financial Officer, disclosed in legal documents that government actions could diminish the company’s 2026 revenue “by multiple billions of dollars.” The filing indicates that current federal agreements are already facing termination.
Commercial sector partnerships face similar jeopardy. Company officials stated the circumstances threaten “hundreds of millions of dollars” in immediate-term income.
Anthropic has asked the court to overturn the supply chain risk classification and to impose a temporary stay while the litigation proceeds. The company also filed additional paperwork with the U.S. Court of Appeals in Washington, D.C.
More than a dozen federal agencies are named in the lawsuit, including the Treasury Department, the State Department, and the General Services Administration.
Industry Support
More than 30 artificial intelligence researchers and engineers from OpenAI and Google submitted an amicus brief backing Anthropic on Monday. Google’s chief scientist Jeff Dean participated as a signatory.
The coalition cautioned that penalizing a prominent American AI enterprise could undermine the nation’s technological leadership position.
Despite the blacklist status, Anthropic’s technology allegedly continued supporting U.S. military activities in Iran, based on earlier CNBC coverage.
Amazon verified that Anthropic’s Claude continues to be accessible for AWS clients in non-defense applications.
Anthropic indicated it remains committed to seeking a negotiated resolution with government officials while simultaneously pursuing legal remedies.