Key Takeaways
- Sam Altman, OpenAI’s CEO, conceded the Defense Department partnership was announced prematurely and appeared “opportunistic and sloppy”
- Contract modifications now explicitly prohibit using OpenAI’s technology for domestic surveillance of American citizens
- Defense officials confirmed intelligence agencies including the NSA cannot access OpenAI’s systems under current terms
- The partnership announcement followed Trump’s executive action blocking federal use of Anthropic’s AI platforms
- Altman publicly advocated for extending identical contract provisions to Anthropic
OpenAI Overhauls Defense Department Partnership Following Public Criticism
Sam Altman, chief executive of OpenAI, has publicly acknowledged significant flaws in how his company rolled out its partnership with the Department of Defense. In a post shared on X that he characterized as an internal communication, Altman expressed regret over the accelerated timeline.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” Altman stated.
The partnership was unveiled on Friday, only hours after President Donald Trump signed a directive prohibiting federal agencies from using Anthropic’s artificial intelligence platforms, and only hours before American military operations targeting Iran began.
The timing drew substantial criticism on social media, where numerous users reportedly deleted ChatGPT from their devices and switched to Anthropic’s Claude app in protest.
OpenAI has entered discussions with Defense Department officials to revise the agreement’s language. The revisions are intended to write the company’s ethical guidelines explicitly into the binding contract.
A critical provision now specifies that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” Pentagon representatives have verified that OpenAI’s platforms will remain off-limits to intelligence organizations including the NSA.
Providing services to such agencies would necessitate separate contractual amendments, Altman clarified.
Background: The Anthropic Negotiations Breakdown
These developments emerged from failed negotiations between Anthropic and military officials. Anthropic had requested explicit assurances that its technology would not be used for domestic surveillance or for autonomous weapons development without human control.
Defense Secretary Pete Hegseth announced Friday that Anthropic would receive a supply-chain threat classification after talks deteriorated. Government representatives had reportedly spent months criticizing Anthropic’s emphasis on AI safety protocols.
Public awareness grew after reports revealed Anthropic’s Claude AI had supported U.S. military operations during a January mission targeting Venezuelan president Nicolás Maduro. Anthropic issued no public statement opposing that deployment.
Anthropic had actually pioneered AI model deployment on the Defense Department’s secure classified infrastructure through an agreement established last year.
Altman Advocates for Competitor’s Fair Treatment
Altman used his statement to address Anthropic’s predicament, disclosing weekend conversations with government officials in which he challenged the supply-chain designation.
“I reiterated that Anthropic should not be designated as a supply chain risk, and that we hope the Department of Defense offers them the same terms we’ve agreed to,” he explained.
Anthropic was founded in 2021 by OpenAI researchers who departed after internal disagreements over corporate strategy.
The company has built its identity around prioritizing AI safety. Pentagon officials have not publicly responded to Altman’s proposal to extend equivalent contract terms to Anthropic.