TLDR
- Pentagon’s Chief Technology Officer claims Claude AI contains inherent policy biases that may undermine military operations.
- The Defense Department marked Anthropic as a supply chain threat—an unprecedented action against a domestic tech firm.
- All Pentagon contractors must now verify they aren’t incorporating Claude into defense-related projects.
- On Monday, Anthropic filed litigation against the Trump administration, describing the action as “unprecedented and unlawful” with hundreds of millions at stake.
- Palantir’s CEO revealed his firm continues deploying Claude for U.S. military purposes despite the prohibition.
Earlier this month, the Pentagon placed Anthropic on its supply chain risk list, marking the first time an American technology company has received such a designation. The classification has traditionally been reserved for foreign threats.
During a Thursday appearance on CNBC’s “Squawk Box,” Defense Department Chief Technology Officer Emil Michael provided rationale for the controversial decision. According to Michael, Claude’s foundational “constitution”—the framework Anthropic employs to guide the AI’s responses—embeds policy viewpoints that could influence the system’s performance in defense applications.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said.
In January 2026, Anthropic released the latest iteration of Claude’s constitutional document. The organization describes this framework as having a “crucial role” in model development and states it “directly shapes Claude’s behavior.”
Under this supply chain classification, defense industry partners and suppliers must formally attest that Claude is absent from any Pentagon-commissioned work.
Michael said the decision was “not meant to be punitive.” He also noted that the U.S. government accounts for only a “tiny fraction” of Anthropic’s overall revenue.
Former OpenAI researchers established Anthropic in 2021. The company has since developed significant enterprise partnerships, including initial agreements with the Defense Department.
Anthropic mounted a vigorous response to the Pentagon’s action. The company initiated legal proceedings Monday against the Trump administration, characterizing the supply chain label as “unprecedented and unlawful.”
According to court documents, Anthropic asserts it faces “irreparable” damage with contract values reaching hundreds of millions of dollars now uncertain.
Pentagon Denies Claims of Company Outreach
Michael rejected Anthropic’s assertions that government officials were proactively contacting businesses to discourage Claude usage. He characterized such allegations as “rumors.”
“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” Michael said.
He also acknowledged that phasing out Claude will require significant time. The DOD has established a transition strategy, he explained, noting that extracting deeply embedded AI systems differs substantially from uninstalling simple software.
Claude Remains Active in Defense Operations
Notwithstanding the designation, Claude continues to function in certain military capacities. CNBC reported earlier that the AI platform supported U.S. military activities in Iran.
On Thursday, Palantir CEO Alex Karp acknowledged that his company, among America’s premier defense contractors, maintains its use of Claude.
Michael said the department cannot "just rip out" Anthropic's technology overnight and confirmed a transition plan is underway.