While the Department of Defense lags in embracing artificial intelligence, the United States is also moving too slowly in establishing international guidelines for the ethical military use of the technology, according to a House Armed Services Committee member who intends to prioritize these issues during the next Congress.
During a Hudson Institute event on Tuesday, Rep. Seth Moulton, D-Mass., touted the advantages that AI can bring to the U.S. military.
“Every new weapons system should have sort of an AI component or capability or the ability to integrate that, because the reality is that this is the future of warfare. And again, I think we’re starting to find ourselves behind,” said Moulton, a retired Marine. Lawmakers are “trying to look for ways to just force DOD to do this, because they’re clearly not” following this path at the Pentagon, he said.
Simulations, such as the Defense Advanced Research Projects Agency’s AlphaDogfight Trials, have demonstrated that AI-enabled systems can defeat human pilots, he noted.
“The reality is that robots fly planes better in most cases. That’s why every time you go on a commercial flight, I don’t know — 90% of the flight is flown by a computer, not by the pilot,” Moulton said. “We literally live with that in the commercial sector every single day, and yet there’s this huge reluctance to adopt it in the military.”
The Pentagon is pursuing robotic wingmen and other uncrewed systems that are to be equipped with AI algorithms. However, a variety of crewed platforms are also in the works.
“If we were really … looking at the future, we’d be developing a lot more pilotless aircraft than aircraft that have to keep a pilot alive just to do their job,” Moulton said. “No matter how you look at it, the AI wins. And we’ve got to come to terms with that and recognize that as a military.”
However, as artificial intelligence advances and the technology proliferates, the United States also needs to lead an international effort to constrain its use, he suggested.
“I want to make one point that I think is especially crucial here, which is that there’s this whole other dimension to the AI discussion, which revolves around the ethics of AI, not just … do pilots trust that AI can fly an airplane. The ethics of AI, of course, govern how it’s used. And this is a place where we’re very, very far behind,” he said.
In the wake of World War II, there were major international initiatives to limit the use and proliferation of nuclear weapons, he noted. Now, in Moulton’s view, there’s a similar imperative to establish norms and treaties to govern how militaries wield artificial intelligence.
AI-enabled weapons may ultimately become even more dangerous than atomic weapons, he warned.
“They’re certainly headed in that trajectory. If we don’t have any constraints around their use, if we do not lead this effort, if we are not the ones who fundamentally set the ground rules agreed to by international treaty for the use and development of AI-enabled weapons and we leave that to adversaries like China — then we’re going to lose because China doesn’t care about civilian casualties. They don’t care as much as we do about collateral damage. And so, therefore, if it’s a free for all, we’re gonna have more restraint on our weapon systems, more restraints than they will on theirs, and theirs will therefore probably be more lethal,” he said.
About two years ago, the Pentagon crafted a list of “AI Ethical Principles” that it’s supposed to follow as it pursues what the department calls “responsible AI.”
Earlier this year, Diane Staheli was tapped to lead the Responsible AI Division of the new Chief Digital and Artificial Intelligence Office (CDAO), where she will help steer the Defense Department’s development and application of policies, practices, standards and metrics for buying and building AI that is trustworthy and accountable.
In June, Deputy Secretary of Defense Kathleen Hicks signed the Pentagon’s Responsible Artificial Intelligence Strategy and Implementation Pathway.
The DOD has also been engaging with international allies and other organizations as they work to implement their own AI ethics policies. However, some officials and other observers are skeptical that authoritarian adversaries like China and Russia will abide by the same types of rules or restraints as the U.S. and its democratic allies.
But Moulton is banging the drum about the need to create a broader framework at the international level.
“I think it’s a national security imperative, not just a human imperative or an ethical imperative, that we have a major effort to establish the rules around the use and development of AI. And we’re really doing next to nothing in that department. So this is a big effort of mine. This is something that I am pursuing,” he said at the Hudson Institute event.
Crafting these rules shouldn’t just be left to government officials, he suggested.
“I had a long conversation with an AI expert in the private sector about how we can involve the private commercial sector in the development of this ethical framework. And I’m going to be pursuing that amongst some of my other priorities over the next Congress,” Moulton said.