
Pentagon CTO urges Anthropic to ‘cross the Rubicon’ on military AI use cases amid ethics dispute

His comments come as the Pentagon is locked in a high-stakes dispute with Anthropic about the U.S. military’s use of the startup’s Claude AI model in real-world operations. 
In this photo illustration, the Claude AI website is seen on a laptop on February 16, 2026, in New York City. According to reports from the Wall Street Journal, the Defense Department used Anthropic's Claude AI, via its Palantir contract, to help with the attack on Venezuela and the capture of former President Nicolás Maduro. (Photo illustration by Michael M. Santiago/Getty Images)

The Pentagon will adhere to existing laws and regulations associated with surveillance, security and democratic processes as it fast-tracks the military’s frontier AI adoption, but it won’t permit companies supplying the technology to determine its rules for operation, Undersecretary of Defense for Research and Engineering Emil Michael told DefenseScoop.

His comments come as the Defense Department is locked in a high-stakes dispute with Anthropic about the U.S. military’s use of the startup’s Claude AI model in real-world operations. 

“We want guardrails. We need the guardrails tuned for military applications. You can’t have an AI company sell AI to the Department of War and [then] don’t let it do Department of War things, because we’re in the business of defending the country and defending our troops,” Michael said. “I think if someone wants to make money from the government, from the U.S. Department of War, those guardrails ought to be tuned for our use cases — so long as they’re lawful.”

(Officially changing the department’s name requires an act of Congress, but President Donald Trump last year signed an executive order rebranding DOD as the Department of War.) 


During a meeting with a small group of reporters on the sidelines of the annual Microelectronics Commons summit Thursday, Michael provided updates on the department’s GenAI.mil rollout and pushed for the ethics-related rift between the Pentagon and Anthropic to be resolved.

“I believe and hope that they will ‘cross the Rubicon’ and say, ‘This is common sense. The military has certain use cases. There are laws and regulations that govern how those use cases can be done. We’re willing to comply with them,’” he said.

‘So far, so good’

Broadly, frontier AI encompasses the most advanced, large-scale and capable foundation models that are rapidly pushing the boundaries of machine intelligence. 

Within that realm is a disruptive technology field — generative AI — that applies models to create convincing but not always correct software code, text, images and other human-like outputs based on patterns in training data and user prompts.


Commercial genAI products were met with near-instant hype when they were first unveiled to the general public in late 2022, and global usage has exploded in the years since then. But experts have also increasingly warned about severe known and unknown threats that the still-emerging tech poses to humanity, privacy, public trust and democracy. 

In an on-stage discussion at the Microelectronics Commons meet-up Thursday, Michael said he divides the department’s AI use cases across three primary echelons. 

“One is corporate enterprise use cases, which any big organization would do for efficiency to enable humans to do more with less time. The second is intelligence. How do we take all the intelligence we have, all the data that we have, and we have a lot of it, and make sense of it to utilize it for something useful? Take a human analyst that can review 1,000 satellite images and let them review 1 million and do anomaly detection,” he told the audience. “And finally, warfighting. How do we do modeling [and] simulation? How do we kind of understand what our operational needs are and gaps? How do we use material science, physics, aerodynamics, to create new systems in ways that we haven’t been able to do before, at the speed that we could do it with AI now?”

Michael, a former Uber executive who also serves as the DOD’s chief technology officer, said he’s ultimately trying to “diffuse the use of this technology throughout the department — which is obviously dependent on a robust supply chain of microelectronics — because we know it’s going to change how we do things for the better.” 

Despite the technology’s potential, the U.S. military services’ policies and paths to using genAI have largely been fragmented, and stunted by data ownership concerns, acquisition issues and other bureaucratic challenges. But last summer, Pentagon leaders announced individual contracts with OpenAI, Anthropic, Google and xAI — each worth up to $200 million — for “frontier AI projects.” 


At the time, those deals were billed as enabling the majority of personnel to access some of the most sophisticated commercial genAI options developed by the four companies, including large language models, agentic AI workflows, cloud-based infrastructure and more, with fewer restrictions than before.

Building on that, the DOD in December released GenAI.mil to more than 3 million military members, civilian employees and contractors. Google Cloud’s Gemini for Government products were the first to go live, and offerings from xAI, OpenAI and Anthropic were initially expected to follow soon after. 

“It’s just beginning — we’ve got about two months under our belt,” Michael told DefenseScoop Thursday. “We’ve got over 1.2 million unique users out of 3 million, which is extraordinary in terms of the thirst people have had in the department for these new capabilities.”

Officials are using genAI for research, document creation, building spreadsheets and writing job descriptions. 

“It’s a very basic level so far in the short amount of time, but I’m optimistic that those use cases will create more efficiency,” Michael said.


He added that “tens of thousands” of DOD users have registered for online training on Google Gemini to date.

“People sign up and try to understand what the capabilities are. We have policies just like we do for any computer usage about what people can and can’t do on their computer. And we always encourage people, if they’re doing something, to [fact] check what AI has written — just as you do in your consumer life,” the CTO noted. “So I think, you know, so far, so good.”

Questions and concerns

This month, the DOD announced that OpenAI’s ChatGPT will be introduced into GenAI.mil in a near-term but undisclosed timeframe. Pentagon leaders also previously suggested that xAI’s Grok large language models would be released in “early 2026.” 

However, the integration of Anthropic’s Claude offerings into the department’s new platform has hit a seemingly intensifying hold-up, connected to a disagreement between the two sides over the ethical boundaries of the models’ use in certain military operations.


Spokespersons for the California-based company did not respond to DefenseScoop’s request for comment Thursday.

But recent reports suggest that Anthropic is resisting pressure from DOD to loosen protections that prevent the military from deploying Claude for operations related to mass surveillance of Americans or the development of weapons that can fire autonomously, without human intervention.

The Pentagon, in contrast, wants free rein to use any genAI products it purchases for what it deems to be “lawful purposes,” without any vendor-imposed restrictions.

“We have a robust set of laws about surveillance in this country that are being run through the democratic process. Congress writes bills, the president signs them, agencies write regulations, and people comply, and we’ve always complied. The Department of War agrees to comply with all those laws that run through the process. What we’re not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed. That is not democratic,” Michael told DefenseScoop. “That is giving any one company control over what new policies are, and that’s for the president, that’s for Congress, and that’s for the agencies to determine how to implement those rules. So, that’s sort of a red herring, in my view.”

There’s also “lots of regulations that have been promulgated for years” in DOD that govern autonomous weapons deployments, he noted.


“If there’s a drone swarm coming out of a military base, what are your options to take it down? If the human reaction time is not fast enough — when the hypersonic missile launches from China, you have 90 seconds to take it down — how are you going to? What is the methodology for doing that in the safest way possible?” Michael said. “Again, you can’t have any one private company dictating those terms. You’ve got to have the laws, the regulations and sort of the mindset of protecting Americans and troops first and foremost — and that’s what our position is and will be.”

Still, the CTO expressed hope that this dispute would mark the point where the company and DOD “cross the Rubicon” and make a decisive, irreversible choice about model usage for military purposes moving forward.

“I want them to cross the Rubicon too,” Emelia Probasco, a senior fellow at Georgetown’s Center for Security and Emerging Technology (CSET), told DefenseScoop. “I want one of the world’s best, most advanced companies in generative AI on the team.”

She pointed to “plenty of reporting” about how Anthropic’s Claude is currently the only frontier large language model authorized for use on DOD’s classified networks. The startup’s AI is also reportedly deployed for DOD through a partnership with Palantir Technologies, which runs the military’s premier Maven Smart System.

The rift between Anthropic and the Pentagon soured further this week after the company inquired about applications of its technology by U.S. forces in the recent capture of former Venezuelan President Nicolás Maduro. In the aftermath, some experts have raised questions about the legality of that raid.


“People are seeming upset that Anthropic may or may not have asked a question about how their technology was being used. I don’t see why we’re getting upset about that. If they came in and said ‘This is ours, and you have to play by our rules,’ sure, that would make sense [to cause issues] in any contractual relationship. But instead, I think they’re asking — I hope they’re asking — a different question, which is, ‘Hey, this technology is really new, and it is complicated. It succeeds in incredible ways, and it fails in unpredictable ways. What exactly are you trying to do? And can we have a conversation about how to help you do that safely?’” Probasco said. “I think a lot of [these concerns] would disappear if they would just keep talking. But it feels like it’s become a show, as opposed to a contract negotiation.”

Reports also surfaced this week that Defense Secretary Pete Hegseth is considering severing DOD’s work with Anthropic and placing the company on a supply chain risk list that is usually reserved for foreign adversaries. 

Such a move could radically disrupt Anthropic’s entire business.

“The secretary has said the relationship is under review — so, it’s under review. We want all of our American champion AI companies to succeed. I want Anthropic, xAI, OpenAI, and Google to succeed,” Michael told reporters. “And we want to take advantage of all the capabilities that they tell us are going to be world-changing — and I believe will be world-changing — and I think doing that on behalf of the government and Department of War is just not only patriotic, it’s good business.”

Probasco considers Hegseth’s threats to designate Anthropic as a supply chain risk to be counterproductive, particularly in the context of China’s aggressive AI development pursuits.


“If there is a real security concern, we should all know about it. But there’s nothing in the way that this is presented that says to me that it is a real security concern. Rather, there seems to be a tussle over control and power,” she told DefenseScoop. “Good news is both sides are powerful, and I’m sure they can find a way to work [together]. Ultimately, the person I worry about is the operators who are being asked to do incredibly dangerous, incredibly complex operations in a world that is adopting AI. We need to figure this out for them.”
