Key Points
- Federal Judge Rita Lin granted a preliminary injunction blocking the Pentagon’s ban on Anthropic’s Claude AI system
- The court determined the ban constituted “classic illegal First Amendment retaliation”
- The conflict arose when Anthropic declined to permit Claude’s use in lethal autonomous weapons systems or mass surveillance operations
- By 2025, Anthropic commanded 32% of the enterprise AI sector, surpassing OpenAI’s 25% market share
- A seven-day stay of the injunction gives the federal government time to file an appeal
The Trump administration’s prohibition on federal agencies using Anthropic’s artificial intelligence technology has been temporarily halted by a US federal judge, blocking actions that threatened to cost the company billions in potential contracts.
BREAKING: Anthropic has been GRANTED a preliminary injunction re: Pentagon ‘supply chain risk’ designation by Judge Rita Lin in California but is allowing a stay for one week https://t.co/1xk41AB5zQ
— Hadas Gold (@Hadas_Gold) March 26, 2026
On Thursday, Judge Rita Lin from the US District Court for the Northern District of California granted the preliminary injunction. The order includes a seven-day stay to provide the administration an opportunity to pursue an appeal.
At the heart of the dispute is a July 2025 agreement between Anthropic and the Defense Department that would have made Claude the first frontier AI system authorized for deployment on classified government networks.
By February 2026, the partnership had collapsed. Military officials sought to restructure the arrangement, insisting that Anthropic permit Claude’s deployment “for all lawful purposes” without any limitations.
Anthropic declined the terms. The company maintained that its AI system should be prohibited from use in lethal autonomous weaponry or widespread domestic surveillance targeting American citizens.
President Trump issued an executive directive on February 27 instructing all federal departments to cease using Anthropic’s products. In a Truth Social post, he stated the company had committed a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
Subsequently, the Defense Department classified Anthropic as a national security supply chain threat. Anthropic responded by filing a lawsuit in Washington, DC federal court on March 9, contending that Defense Secretary Pete Hegseth had exceeded his legal authority.
Court Scrutinizes Government’s Justification
Judge Lin presided over a 90-minute hearing in San Francisco on March 24, pressing government attorneys about whether the company faced punishment for openly challenging Pentagon policies.
In her written decision, Lin concluded the prohibition seemed disconnected from genuine national security considerations. “If operational chain of command integrity is the issue, the Department of War could simply discontinue Claude usage,” she stated.
The judge further characterized the actions as “designed to punish Anthropic” and labeled them “arbitrary, capricious, and an abuse of discretion.”
Throughout the proceedings, an Anthropic representative emphasized that the Pentagon retains the authority to evaluate any AI system prior to deployment. The representative also said Anthropic lacks the technical capability to remotely disable a model, alter its functionality, or monitor military applications.
Arguments From Each Party
A federal attorney contended that Anthropic undermined the partnership by attempting to influence Pentagon operational policies during negotiations. The government expressed concerns regarding potential “future sabotage” from the AI company.
Judge Lin dismissed this argument, stating the Justice Department lacked a “legitimate basis” for concluding that Anthropic’s ethical guidelines could transform it into a security threat.
According to Menlo Ventures data, Anthropic controlled 32% of the enterprise AI marketplace in 2025, outpacing OpenAI’s 25% share. A comprehensive federal prohibition threatened to severely undermine that market leadership.
Anthropic expressed gratitude “to the court for moving swiftly.” The company has also filed a separate challenge over federal procurement regulations in a Washington, DC appellate court.
The legal case is identified as Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California.


