Experts raise questions and concerns about Pentagon’s threat to blacklist Anthropic amid AI spat

Some warned that such a move would mark an extreme response from the DOD that could have a chilling effect on the broader frontier AI industry.
Anthropic CEO Dario Amodei looks on during a meeting with France's President Emmanuel Macron on the sidelines of the AI Impact Summit in New Delhi on February 19, 2026. (Photo by Ludovic MARIN / AFP via Getty Images)

Anthropic is holding its ground on specific ethical restrictions for classified military use of its Claude AI products, as the Pentagon’s deadline to resolve their heated dispute looms.

After its CEO announced on Thursday that the company “cannot in good conscience” accede to certain demands that would permit the military to deploy its models for “all lawful use cases” without limitation, it was unclear on Friday how the Pentagon would uphold its leaders’ threats to blacklist Anthropic as a “supply chain risk.”

Multiple sources told DefenseScoop this week that such a move would mark an extreme response from the DOD that could have a chilling effect on the broader frontier AI industry.

“It’s beyond punitive. It’s bullying. The idea of designating one of the great American tech companies to be a supply chain risk is so far beyond the pale that it’s hard to fathom it’s even being considered. One can disagree with their ‘responsible AI’ mantra, though I happen to agree with it, but to go down this path is a level of escalation I never could have imagined,” said a former senior defense official who requested anonymity to speak freely on the matter.

‘A vicious spiral’

Frontier AI refers to the most advanced, large-scale, and capable foundational models that are rapidly pushing the boundaries of machine intelligence. 

The emerging and disruptive field of generative AI, which falls in this realm, applies models to create convincing but not always correct software code, text, images and other human-like outputs based on patterns in training data and user prompts.

By generating content rather than merely analyzing it, genAI capabilities hold massive potential to transform many industries, including defense. But the still-emerging models also pose serious risks to humanity, ranging from ethical dilemmas to existential threats.

Last summer, Pentagon leaders unveiled individual contracts with OpenAI, Anthropic, Google and xAI — each worth up to $200 million — for “frontier AI projects.”

Of those four companies, Anthropic is currently the only one whose models are integrated into DOD’s classified workflows, via a partnership with Palantir. The rest work primarily on unclassified systems.

Tensions between Anthropic and the Pentagon heightened in recent months, reportedly stemming from disagreements over whether and how the military was applying Claude models in certain operations. The situation reportedly boiled over on Tuesday in a heated in-person meeting between Anthropic executives and Defense Secretary Pete Hegseth’s team.

DOD then sent Anthropic “its last and final offer Wednesday night: allow the Pentagon access to Anthropic’s Claude for all lawful purposes,” a senior Pentagon official told DefenseScoop Thursday. DOD requested a response by Friday at 5:01 p.m. Eastern. 

Anthropic’s CEO Dario Amodei released a statement Thursday evening responding to the government’s ultimatum.

Amodei wrote that the company understands that DOD — not private businesses — makes military decisions, and has never raised objections to particular operations or attempted to limit the use of its tech in “an ad hoc manner.”

“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said.

After expanding on those two cases, which involve the mass surveillance of Americans, and separately the application of autonomous weapon systems that can target without human intervention, Amodei said Anthropic would work to ensure a smooth transition if the department opts to offboard its technology.

In a fiery social media post responding to the company’s full statement, Pentagon Chief Technology Officer Emil Michael said “Anthropic is lying” and that DOD wants to be able to “use AI without having to call [Amodei] for permission to shoot down enemy drone swarms that would kill Americans.”

A former senior defense official told DefenseScoop Friday that Amodei’s note appeared measured, reading as “a bit of a peace offering.”

“Apparently [Michael] did not see it the same way, which means [Hegseth] will undoubtedly have the same reaction,” they said. “It’s a vicious spiral that is only descending at this point.”

Notably, Amodei acknowledged in his statement that DOD threatened to remove Anthropic from its systems if the company maintains certain safeguards. The CEO said the department also threatened to invoke the Defense Production Act to force the guardrails’ removal, and to name Anthropic as a “supply chain risk” — a designation that he said is “reserved for U.S. adversaries, never before applied to an American company.”

“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” Amodei wrote.

The former senior defense official echoed his sentiment.

“I find the threat to designate Anthropic as a supply chain risk to be absurd. On one hand, the secretary wants to claim that Anthropic’s Claude is so critically important to national security that he demands unfettered use of it across the military — while on the other hand, he’s threatening to designate it as a supply chain risk if he doesn’t get what he wants,” they said. “Those two things are in direct tension with each other.” 

Although losing its unfulfilled $200 million contract with DOD would not totally hamstring the AI company, the supply chain risk designation could severely damage its enterprise business.

“The consequences of doing this would be far-reaching, with a near 100% certainty of unexpected — and bad — consequences for the military. For one, Palantir would have to pull [Anthropic-supplied elements] from the Maven Smart System,” the former senior defense official said. 

It would also force every other DOD-related vendor to verify they’re not using Claude, which is said to be used by the majority of government coders.

“That exercise would not end well. [And] all while China sits in the corner and laughs,” the former senior defense official said. “I’m all for giving the best possible tech to the military and intel community, but I’m sympathetic to Anthropic’s position here.” 

In response to questions from DefenseScoop on Friday, Amos Toh, a senior counsel and manager in the Brennan Center’s Liberty and National Security Program, noted that Anthropic’s usage restrictions on Claude do not simply reflect the company’s ethical red lines. 

Those limitations are also the bare minimum of what is needed, he said, to ensure DOD’s compliance with its obligations under the Constitution, when it comes to using AI to facilitate large-scale collection and analysis of the personal and sensitive data of Americans — and international law, when it comes to automating lethal targeting without sufficient human oversight.

“DOD’s exclusion of Anthropic from defense procurement under 10 U.S. Code § 3252 and the Federal Acquisition Supply Chain Security Act could foreclose the company from working with the department in any capacity, and require other defense contractors to unwind Claude from their systems in order to continue their defense work,” Toh said. “But the defense secretary can only exercise this authority if the designation is required to reduce the risk that adversaries may sabotage or otherwise subvert a national security system (the definition of ‘supply chain risk’ under § 3252(d)(4)). It is not at all clear how adversaries could exploit Anthropic’s usage restrictions on Claude to sabotage military systems — in fact, these restrictions could reduce the likelihood of misidentification of targets and accidental misfirings, improving the safety and reliability of these systems.”

Additionally, that designation is permitted only if “less intrusive measures are not reasonably available” to mitigate this risk. Toh questioned whether the Pentagon “has made a good faith attempt to pursue ‘less intrusive measures,’” such as by continuing negotiations with Anthropic on its usage restrictions, or canceling the specific contract at issue.

“It is unclear, for example, why Claude’s usage restrictions would prevent the Pentagon from using the model safely in its administrative functions,” Toh said. 

‘A chilling signal’

The Civic AI Security Program is a non-profit organization that uses live software demonstrations to inform the public about AI’s capabilities and dangers.

“From Anthropic’s perspective, there is a lot more at stake here than the DOD thinks,” CivAI Co-Founder Lucas Hansen said.

“Anthropic trains its models to have a certain personality that they believe will be the safest to deploy. This is Anthropic’s emphasis from the start of training, and the model is created with this personality in mind. Guardrails aren’t just tacked on at the last minute of training. The training focuses on these guardrails from the very beginning,” he explained. “Removing guardrails is not as simple as removing training wheels from a bike or turning off a switch. This would be a deep, fundamental change to the model.”

Consumer versions of Claude could also be impacted, Hansen pointed out. To remove the guardrails, Anthropic would need an entirely different training process for the model. 

It would likely be immensely expensive for Anthropic to run two separate training processes each time they want to release a new model, so it’s probable the company would have to change something fundamental about all versions of Claude, he noted.

“Claude’s constitution, which dictates Claude’s behavior, emphasizes that there are certain actions Anthropic won’t ask Claude to do. Anthropic has put in quite a bit of effort into keeping these promises to Claude,” Hansen told DefenseScoop. “Asking Claude to do what the DOD is requesting would amount to breaking those promises, which could significantly affect the behavior of all versions of Claude going forward.”

Spokespersons from Anthropic and DOD did not provide updates or comments about the matter in response to questions from DefenseScoop on Friday.

Tapping into momentum around this topic and at the request of the Information Technology and Innovation Foundation, Morning Consult conducted a nationally representative survey of 1,976 U.S. adults to gauge attitudes around the use of AI in military actions, whether technology companies have a responsibility to set limits on their products, and how Americans view mass surveillance in the modern era. 

According to the results of that study, 50% of participants view penalizing Anthropic as government overreach that sets a dangerous precedent — while 35% say it is necessary for national security, and the rest weren’t aware of the ongoing dispute.

“The broad takeaway from the survey is that Americans support both a strong national defense and meaningful guardrails on AI,” ITIF’s Vice President Daniel Castro said. “There is bipartisan agreement that technology companies have a responsibility to set limits on how their products are used, along with strong support for keeping humans in control of lethal decisions and restricting domestic surveillance. The public isn’t rejecting military AI — it’s signaling that democratic constraints still matter.”

Castro emphasized that the Pentagon is on firm ground in deciding which contractors meet its operational needs. If a company’s product comes with restrictions that DOD considers incompatible with its mission, it can cancel any contract and turn to another supplier. 

“That’s standard procurement practice,” he said. “Where things become more complicated is if enforcement tools, such as a ‘supply chain risk’ designation, are perceived as punishment for a company’s refusal to modify its internal guardrails.”

If extraordinary authorities are used to pressure firms to abandon their risk controls, “that could send a chilling signal across the broader tech ecosystem,” Castro said. 

“Companies may conclude that working with the federal government requires surrendering independent safeguards or absorbing significant reputational risk,” he told DefenseScoop. “Over time, that could discourage leading technology firms from working with the U.S. military — an outcome that would weaken both defense innovation and America’s broader technological competitiveness.”

More broadly, this episode underscores what Castro refers to as “the trust challenges” surrounding emerging AI in defense supply chains. 

“The Pentagon increasingly relies on commercially developed, dual-use AI systems. That makes clarity essential,” Castro said. “Companies need predictable rules, the military needs operational certainty, and the public needs confidence that oversight mechanisms are durable and transparent.”

Late Friday afternoon, President Donald Trump made additional threats against Anthropic.

The company has “made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump posted on social media, using a secondary name that he authorized to refer to the Department of Defense.

“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” he added.

After Friday’s deadline passed, Hegseth posted on social media that Anthropic’s relationship with the U.S. military and the federal government has been “permanently altered.”

“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service,” he wrote.

“This decision is final,” the Pentagon chief added.

Updated on Feb. 27, 2026, at 5:55 PM: This story has been updated to include comments posted by Defense Secretary Pete Hegseth on social media Friday evening about designating Anthropic a supply-chain risk to national security.
