Updated February 27, 2026 at 1:27 PM AKST
In a post on Truth Social, President Trump ordered the U.S. government to stop using the artificial intelligence company Anthropic's products.
The decision followed a dispute between Anthropic and the Pentagon over whether the company could prohibit its tools from being used for mass surveillance of American citizens or to build autonomous weapons systems as part of a military contract worth up to $200 million.
"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote in his post. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!"
He said there would be a six-month phaseout of Anthropic's products.
The announcement came about an hour before a deadline set by the Pentagon, which had called on Anthropic to back down.
And it happened as at least one other AI firm said it had similar concerns about the military uses of AI. Earlier in the day, OpenAI CEO Sam Altman said he shares Anthropic's "red lines" restricting military use of AI.
The Pentagon said it had nothing to add to the president's announcement. Anthropic didn't immediately respond to a request for comment.
Ban comes as Anthropic plans an IPO
Defense Department officials had given Anthropic a deadline of 5:01 p.m. ET on Friday to drop restrictions that bar its AI model, Claude, from being used for domestic mass surveillance or in fully autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but it requires AI companies to allow their models to be used "for all lawful purposes."
The government had also threatened to invoke the Korean War-era Defense Production Act to compel Anthropic to allow use of its tools and, at the same time, warned it would label Anthropic a "supply chain risk," potentially blacklisting it from lucrative government contracts. President Trump made no mention of either threat in his Truth Social post.
The government ban comes at a time when Anthropic is under heightened scrutiny, since the company, which is valued at $380 billion, is planning to go public this year. While the Pentagon contract worth as much as $200 million is a relatively small portion of Anthropic's $14 billion in revenue, it's unclear how the friction with the administration will sit with investors or affect other deals the company has to license its AI model. Anthropic CEO Dario Amodei has pointed out that the company's valuation and revenue have only grown since it took a stand against Trump officials over how AI can be deployed on the battlefield.
Whether AI companies can set restrictions on how the government uses their technology has emerged as a major sticking point in recent months between Anthropic and the Trump administration.
On Thursday, Amodei said the company would not budge in the face of the Pentagon's threats. "We cannot in good conscience accede to their request," he wrote in a lengthy statement.
"Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said, using the Pentagon's "Department of War" rebranding. But, he added, domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do."
Emil Michael, the Pentagon's undersecretary for research and engineering, shot back in a post on X, accusing Amodei of lying and having a "God-complex."
"He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company."
In an interview with CBS News, Michael said federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons.
"At some level, you have to trust your military to do the right thing," he said.
OpenAI expressed similar concerns
Any effort by the Pentagon to replace Anthropic could be complicated by Altman's support for the safety guardrails Anthropic set. OpenAI, Google, and xAI also have Defense Department contracts and have agreed to allow their AI tools to be used in any "lawful" scenarios. This week, xAI became the second company after Anthropic to be approved for use in classified settings.
Altman told CNBC on Friday morning that it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with."
In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff.
The Defense Department didn't respond to a request for comment on Altman's statements.
Independent experts say the standoff is highly unusual in the world of Pentagon contracting.
"This is different for sure," said Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, a Washington, D.C., think tank. Pentagon contractors don't usually get to tell the Defense Department how their products and services can be used, he noted, "because otherwise you'd be negotiating use cases for every contract, and that's not reasonable to expect."
At the same time, McGinn noted, artificial intelligence is a new and largely untested technology. "This is a very unusual, very public fight," he said. "I think it's reflective of the nature of AI."
NPR's Bobby Allyn contributed to this report.
Copyright 2026 NPR