The U.S. State Department unveiled a new declaration Thursday regarding artificial intelligence and autonomous weapons, in hopes that other nations will sign on. One provision calls for keeping the technology out of any role in nuclear weapons employment.
The “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy,” released by the department’s Bureau of Arms Control, Verification and Compliance, notes that an increasing number of countries are developing military AI capabilities and could use the tech to enable autonomous systems.
“A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents,” the document states, calling on governments to “take appropriate measures to ensure the responsible development, deployment, and use of their military AI capabilities, including those enabling autonomous systems.”
The term artificial intelligence, according to the drafters of the declaration, refers to “the ability of machines to perform tasks that would otherwise require human intelligence — for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action — whether digitally or as the smart software behind autonomous physical systems.”
In this context, they define autonomy as “a system operating without further human intervention after activation.”
Although not explicitly stated, the declaration implies that giving AI a significant role in strategic weapon systems would be too risky.
Among the list of about a dozen “best practices” promoted in the document is that governments “should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”
The declaration comes on the heels of the Pentagon’s recent update of DOD Directive 3000.09, “Autonomy in Weapon Systems,” which was signed by Deputy Defense Secretary Kathleen Hicks and went into effect last month.
Unlike the State Department declaration, the Pentagon directive did not explicitly mention nuclear weapons.
Concerns about the potential consequences of handing nuclear command-and-control capabilities over to machines aren’t new. That was a central theme of Stanley Kubrick’s 1964 film “Dr. Strangelove,” in which a Soviet “Doomsday device” automatically unleashes Armageddon in response to a crisis scenario.
However, as artificial intelligence technology improves and defense officials around the world move to incorporate it into their forces for a wide range of potential mission sets, issues surrounding the tech have become more prominent.
In a report released just last week, the Arms Control Association warned about the perils of increased automation of battlefield decision-making, including as it relates to command, control and communications (C3).
“Many of these technologies are still in their infancy and prone to often unanticipated malfunctions,” author Michael Klare wrote.
“Given these risks, Chinese, Russian, and U.S. policymakers should be leery of accelerating the automation of their C3 systems. Ideally, government officials and technical experts of the three countries should meet … to consider limitations on the use of any automated decision-making devices with ties to nuclear command systems,” he said.
The Pentagon, which a few years ago rolled out a set of AI ethical principles, has not been pushing to give autonomy to nuclear weapon systems, although it sees a major role for the cutting-edge tech in its conventional forces.
AI and autonomy expert Paul Scharre, vice president and director of studies at the Center for a New American Security think tank, has told this reporter that giving autonomy to nuclear command-and-control systems would be “crazy.”
Lauren Kahn, a research fellow at the Council on Foreign Relations who focuses on emerging tech, praised the release of the declaration in a series of tweets Thursday.
“While non-binding, it will serve as a building block to ensure that there is international coordination on starting to lay down the rules of the road when it comes to AI on the battlefield,” she wrote. “Overall, the statement promotes several important principles that build off DoD AI/autonomy policy, such as ensuring … that there will ALWAYS be positive human control over any nuclear weapons.”
The State Department is hoping to get other countries on the same page when it comes to guiding principles surrounding military applications of AI and autonomy, as it seeks their endorsement of the newly released declaration.
After the document was released, Undersecretary of State for Arms Control and International Security Bonnie Jenkins tweeted that the Biden administration is “committed to promoting rules of the road that enhance transparency, predictability, & stability in military use of #AI and we welcome engagement with any country that is interested in joining us in building an international consensus.”
While many nations are pursuing AI that could have military applications, there are currently nine nations that possess or are presumed to possess nuclear weapons: the United States, Russia, China, the United Kingdom, France, India, Pakistan, North Korea and Israel.
Kahn tweeted that it “will be interesting to track which states will also voluntarily opt-in and co-sign the statement in the coming months. Will be important for it to not just be US+NATO+Five Eyes but a broader coalition of states, especially those working on the cutting edge of AI development.”