Earlier this week, the Pentagon told Anthropic that it would cancel the company’s $200 million contract unless Anthropic agreed to give the government broad access to its AI system, Claude. Ahead of Friday’s deadline to accept the terms, CEO Dario Amodei rejected the ultimatum, saying “we cannot in good conscience accede to their request.”
In a statement released on Thursday, Amodei said the Pentagon’s latest offer to revise the contract
does not satisfy the company’s concerns that its AI could be used for mass surveillance of US citizens or in fully autonomous weapons. Amodei said the Department of Defense has “threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’ —a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal.” The executive pointed out: “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Sean Parnell, a Pentagon spokesman, said in a social media post that the Department of Defense has “no interest” in using AI for the two purposes Anthropic outlined, but he reiterated the demand that the government be given access to the AI model “for all lawful purposes.”
Anthropic said in another statement Thursday that it was willing to continue negotiations but that the Pentagon’s new contract language “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”
If Anthropic does not agree to the military’s demands by 5:01 p.m. ET on Friday, it remains unclear how the government could both designate the company a supply chain risk and invoke the Defense Production Act to compel its cooperation with the Pentagon.