U.S. Defense and State Department officials aim to meet by mid-2024 with delegates from at least 50 other nations to discuss the nascent framework they have recently signed onto, which pledges endorsers to "responsibly" develop and deploy artificial intelligence and autonomous military technologies, according to a top Pentagon policymaker.
In November, State spotlighted the Political Declaration on Responsible Military Use of AI and Autonomy, originally produced in early 2023, and confirmed at the time that more than 40 countries had formally endorsed it.
“We’re up to 51 now, including the United States, and we’re proud of the fact that it’s not just the usual suspects,” Michael Horowitz, deputy assistant secretary of defense for force development and emerging capabilities, said on Tuesday during a webcast hosted by the Center for Strategic and International Studies.
“We’re actually working toward a potential plenary session in the first half of 2024 with those states that have endorsed the political declaration — and we hope that even more will come on board before that happens, and will come on board afterwards,” he added.
While State has not shared the full declaration publicly, a summary released last year notes that it sets “voluntary guidelines describing best practices for use of AI in a military context” and is designed to “put in place measures to increase transparency, communication, and reduce risks of inadvertent conflict and escalation.”
Previously, the department had confirmed that “endorsing states will meet in the first quarter of 2024 to begin this next phase” of implementing responsible practices associated with the declaration.
Spokespersons from the Pentagon and State Department did not answer DefenseScoop’s questions on Tuesday regarding what this plenary session will involve or why its timeline has apparently been extended.
During the CSIS event, Horowitz also acknowledged that China and Russia are not among the nations participating in the multinational agreement at this point. He did note, however, that the pact made between President Biden and Chinese President Xi Jinping in November to restore some military-to-military communications included a forthcoming meeting “to have conversations about AI safety and general AI capabilities.”
“We think that, again, dialogue between the U.S. and the [People’s Republic of China] is helpful. And wherever the substance of those conversations leads, or focuses, we think that’ll be a good thing,” Horowitz said.
Some concepts underpinning DOD Directive 3000.09 governing “Autonomy in Weapon Systems” — which last year was updated under Horowitz’s leadership, for the first time since 2012 — are essentially reflected in U.S. foreign policy via the new multinational declaration.
A major shift in that directive is that it now explicitly lays out a new senior-level review process for evaluating contemporary applications for military AI ahead of their use.
“When you have multiple undersecretaries and the vice chair [of the Joint Chiefs of Staff] who have to both approve an autonomous weapon system prior to development and approve it prior to fielding — that is really like deep, internal DOD politics — that’s a huge bureaucratic lift, frankly, to do something like that. But it reflects how seriously we take our responsibility when it comes to ensuring that any autonomous weapon systems that are fielded, that we can be confident that they’re safe,” Horowitz said.
He again declined to comment on whether specific weapon systems have been, are, or will be subject to the freshly updated 3000.09 review process.