Altman Tries to De-escalate AI Dispute Between Anthropic and DoD

By Libby Miles
February 28, 2026

OpenAI CEO Sam Altman told employees that he wants the company to “try to help de-escalate things” between rival AI firm Anthropic and the U.S. Department of Defense, according to a memo viewed by CNBC and first reported by The Wall Street Journal.

The memo comes as Anthropic faces mounting pressure from the Pentagon over how its artificial intelligence models can be used in defense settings. Altman made clear that OpenAI maintains similar limits on certain military applications of AI.

“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions,” Altman wrote. “These are our main red lines.”

Anthropic Faces Deadline Over AI Use

Anthropic has been in tense negotiations with the Defense Department over whether it will allow its AI models to be used in all lawful use cases without limitation. The startup has sought assurances that its technology would not be deployed for fully autonomous weapons or domestic mass surveillance of Americans. The Defense Department has not agreed to those restrictions, according to CNBC.

Anthropic was the first major AI lab to integrate its models into mission workflows on classified networks. The company’s position in the talks has drawn significant attention across the AI sector, particularly among employees concerned about safety and civil liberties.

OpenAI Employees Voice Support

Before Altman sent his memo, some OpenAI employees publicly voiced support for Anthropic. About 70 current staffers signed an open letter titled “We Will Not Be Divided,” which calls for “a shared understanding and solidarity in the face of this pressure” from the department, according to the letter’s website, as described by CNBC.

Altman addressed the situation in an interview with CNBC on Friday.

“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters,” Altman said. “I’m not sure where this is going to go.”

His comments reflect the unusual dynamic of one AI company publicly expressing trust in a competitor while both navigate complex defense relationships.

OpenAI’s Pentagon Contract and Next Steps

OpenAI was awarded a $200 million Defense Department contract last year that allows the agency to use its models in nonclassified use cases. Altman said OpenAI will explore whether it can reach an agreement to deploy its models in classified environments in a way that “fits with our principles.”

He wrote that the company would implement technical safeguards and deploy personnel to “ensure things are working correctly.” He also outlined what OpenAI would seek in any expanded agreement.

“We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons,” Altman wrote.

Altman said OpenAI has held meetings on the issue in recent days and that additional discussions with the company’s safety teams were planned. He emphasized that no final decision has been made.

“This is a case where it’s important to me that we do the right thing, not the easy thing that looks strong but is disingenuous,” Altman wrote. “But I realize it may not ‘look good’ for us in the short term, and that there is a lot of nuance and context.”

A Broader Debate Over Military AI

The dispute highlights a growing debate in the artificial intelligence industry over the role of advanced AI models in national security. While the Pentagon has increasingly turned to private AI labs for advanced tools, companies and their employees continue to push for limits on applications involving autonomous weapons and mass surveillance.

How Anthropic resolves its negotiations could influence how other AI firms structure their defense contracts. For OpenAI, the challenge is balancing government partnerships with publicly stated safety principles.

The outcome may help define how the next generation of AI systems is governed in military and classified environments.

