
A policy gap is threatening the Pentagon’s AI innovation pipeline

The stakes are especially high because the future of U.S. military capability will depend heavily on technologies developed outside the traditional defense industry.
An aerial view of the Pentagon, Washington, D.C., May 15, 2023. (DoD photo by U.S. Air Force Staff Sgt. John Wright)

Morgan Plummer is Vice President of Policy at Americans for Responsible Innovation. He previously served as a Professor of Practice at the U.S. Air Force Academy and as Managing Director of the Department of Defense’s National Security Innovation Network.

The Trump administration’s recent decision to designate Anthropic a “supply-chain risk” marked an extraordinary escalation in the relationship between Washington and the tech sector. If this precedent holds, it could send a powerful warning to innovators: cooperate with the U.S. government on its terms, or risk being treated as a national security threat.

The episode is troubling on its own, but it also exposes a deeper problem. The United States is operating in a policy vacuum when it comes to governing how artificial intelligence can be used in military systems.

The current framework offers little clarity. Rather than statutory guardrails set by Congress, U.S. policy relies largely on general guidance from the Pentagon calling for “appropriate levels of human judgment” in military systems. That language may sound reassuring, but it leaves critical questions unanswered.


This ambiguity, combined with the fact that traditional government contracts are not designed to resolve disputes over foundational rules, forces government agencies and technology companies to interpret those rules for themselves.

Artificial intelligence, advanced software, and autonomous systems are increasingly built not by traditional defense contractors but by startups, research labs, and technology firms whose primary markets are commercial. Many of these companies have little history of working with the defense establishment. They are accustomed to setting terms of service for how their products are used and imposing restrictions when necessary.

In other sectors of dual-use technology, such boundaries are routine. Software companies establish licensing terms. Cloud providers define acceptable uses. Export controls govern where sensitive technologies can be deployed.

But when AI systems developed in the commercial sector are adapted for national security purposes, particularly for military applications involving lethal force, and there are no clear rules to govern their use, boundaries are left to be negotiated in real time.


That murkiness is precisely what set the stage for the current standoff.

The Pentagon’s designation, however, made the situation worse.

Authorities designed to address supply chain risks exist to protect American infrastructure and defense systems from infiltration by foreign adversaries. They are meant to address threats posed by hostile governments embedding vulnerabilities in telecommunications networks, hardware, or software.

Using those authorities against an American company over a dispute about how its technology may be used stretches them far beyond their original purpose.

If that precedent becomes the norm, technology firms will take notice. Companies will reasonably conclude that refusing to provide unrestricted access to their systems could trigger punitive government action.


For many firms, especially younger technology companies with global customer bases and reputations to protect, the rational response will be simple: avoid working with the government altogether.

That would be a strategic mistake for the United States.

Over the past decade, both Silicon Valley and the Pentagon have invested heavily in bridging the cultural and ideological divide that once kept them apart. Initiatives launched under former Defense Secretary Ash Carter, including the creation of the Defense Innovation Unit, were designed specifically to bring startups and technology firms into closer partnership with the national security community. That effort took years of trust-building. Supply chain risk designations, public attacks, and lawsuits like the ones now unfolding risk undoing that progress, potentially setting the relationship back by decades at a moment when the United States can least afford it.

These deliberate efforts reflected the broader reality that America’s technological edge has long depended on a dynamic relationship between the commercial technology sector and the national security community. From semiconductors to software to satellite systems, many of the innovations that underpin U.S. military strength originated in civilian markets before finding defense applications.

Artificial intelligence is following the same trajectory, as Trump's own Department of War acknowledged earlier this year, which is precisely why the confrontation now unfolding should concern policymakers far beyond this single case.


This is not an isolated dispute; it is the direct result of a policy vacuum that must be addressed with urgency.

Congress should address this lack of guardrails by directing the Department of War to establish clear rules of the road. Federal acquisition regulations can require that, to contract with the department, AI labs meet baseline safety and governance standards and document intended use cases before operational integration.

This is a logical starting point to preempt future public clashes about the use and misuse of AI systems by the military. Clearer rules would give companies confidence that collaborating with the national security community will not expose them to unpredictable legal or political consequences. And they would ensure that the development of military AI systems proceeds in a way that reflects democratic oversight and accountability.

As lawmakers work on crafting the National Defense Authorization Act, Congress has the opportunity, the authority, and the responsibility to establish these guidelines.

If lawmakers fail to act, disputes like the one now unfolding will only become more common. The result will not simply be policy confusion, but a widening rift between Washington and the innovators whose technologies will define the future battlefield. Clear rules will not slow American innovation — they are the only way to ensure it continues to serve the nation’s security.
