Anthropic CEO Dario Amodei is sharply criticizing rival artificial intelligence company OpenAI over its recent agreement with the U.S. Department of War, accusing the company of misleading the public about the terms of the deal.
In an internal memo to employees, Amodei condemned OpenAI’s public statements regarding the contract, reportedly describing them as “straight up lies.” The memo, first reported by The Information, reflects growing tensions between major AI companies competing for government partnerships.
The dispute centers on OpenAI’s recent agreement to provide artificial intelligence tools to the Department of War after negotiations between Anthropic and the Pentagon broke down.
According to Amodei, Anthropic refused to move forward with expanded cooperation with the department unless the government agreed to strict limits on how the company’s AI technology could be used.
Specifically, Anthropic wanted guarantees that its systems would not be used for domestic mass surveillance or autonomous weapons programs. When those demands were not accepted, negotiations ended.
Following that breakdown, the Department of War reached a deal with OpenAI instead.
OpenAI CEO Sam Altman later announced the agreement, saying it included safeguards addressing the same concerns Anthropic had raised and protections designed to prevent misuse of the company’s AI systems.
Amodei disputed that claim in his memo, accusing OpenAI of misrepresenting the nature of the agreement and suggesting the company prioritized internal employee relations over meaningful safeguards.
“The main reason they accepted the deal and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote.
A key disagreement between the companies appears to turn on the contract’s wording.
Anthropic objected to language that would allow the government to use its technology for “any lawful use.” OpenAI’s announcement of the agreement stated that its systems could be used for “all lawful purposes.”
OpenAI has argued that domestic mass surveillance would not fall under that definition because such activity is currently illegal. The company said its contract makes that limitation clear.
However, critics of the arrangement say that legal definitions can change over time, potentially weakening those protections if laws governing surveillance or military AI evolve in the future.
The controversy has sparked a broader public debate about the role of artificial intelligence in military operations and government programs.
Following OpenAI’s announcement, some users criticized the company online and called for boycotts of its products. Reports indicated that uninstallations of ChatGPT increased significantly after news of the agreement spread.
Amodei referenced the backlash in his memo, suggesting that the public response vindicated Anthropic’s decision not to expand its partnership with the government.
The clash highlights the increasingly competitive landscape among AI companies as they pursue major government contracts while facing pressure from employees, regulators, and the public over how their technology may be used.

