OpenAI Pentagon Deal Amended After Altman Says It Looked ‘Sloppy’

OpenAI is revising its newly announced agreement to supply artificial intelligence technology to the U.S. Department of War after CEO Sam Altman acknowledged the deal appeared “opportunistic and sloppy.”

The contract, announced shortly after the Pentagon dropped its prior AI contractor, Anthropic, sparked backlash among users and employees who feared the company’s technology could be used for domestic mass surveillance or other controversial military purposes.

In a message to employees later reposted publicly, Altman conceded that the agreement had been rushed.

“We shouldn’t have rushed to get this out,” Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Altman said OpenAI would explicitly bar its models from being used for domestic mass surveillance or by defense intelligence agencies such as the National Security Agency. The company had already stated that one of its red lines was prohibiting the use of its technology to direct autonomous weapons systems without human oversight.

Despite those assurances, critics raised concerns about parallels to past surveillance controversies, including the 2013 disclosures that revealed broad National Security Agency data collection programs.

The deal triggered an online backlash, with some users on social media calling for a “delete ChatGPT” campaign. Competing chatbot Claude, developed by Anthropic, reportedly surged in app store rankings following the controversy.

Resistance also surfaced inside the tech industry itself. Nearly 900 employees across OpenAI and Google signed an open letter urging their companies to refuse government demands to use AI models for domestic mass surveillance or autonomous lethal systems. The letter warned that the government was attempting to pressure individual companies by playing them against one another.

The controversy intensified after former OpenAI policy research head Miles Brundage publicly questioned how OpenAI secured a deal that addressed ethical concerns Anthropic had previously described as unacceptable.

Meanwhile, additional federal agencies reportedly moved to phase out Anthropic’s AI products after the Department of War labeled the company a supply chain risk.

The episode underscores mounting tensions between AI developers, government defense agencies, and employees concerned about how rapidly advancing technology is deployed in national security contexts.