CDAO shapes new tools to inform Pentagon’s autonomous weapon reviews
The Chief Digital and Artificial Intelligence Office team behind the Pentagon’s nascent Responsible AI Toolkit is producing new companion materials to help defense officials determine whether capabilities adhere to certain mandates in the latest version of the 3000.09 policy directive, which governs the military’s development and adoption of lethal autonomous weapons.
“Obviously, the 3000.09 process is not optional. But in terms of how you demonstrate that you are meeting those requirements — we wanted to provide a resource [to help],” Matthew Johnson, the CDAO’s acting Responsible AI chief, told DefenseScoop in a recent interview.
The overarching toolkit Johnson and his colleagues have developed — and will continue to expand — marks a major deliverable of the Defense Department’s RAI Strategy and Implementation Pathway, which Deputy Secretary Kathleen Hicks signed in June 2022. That framework was conceptualized to help defense personnel confront known and unknown risks posed by still-emerging AI technologies, without completely stifling innovation.
Ultimately, the RAI Toolkit is designed to offer a centralized process for tracking and aligning projects to the DOD’s AI Ethical Principles and other guidance on related best practices.
Building on early success and widespread use of that original RAI toolkit, Johnson and his team are now generating what he told DefenseScoop are “different versions of the toolkit for different parties, or personas, or use cases” — such as one explicitly for defense acquisition professionals.
“It’s not to say that these different versions that kind of come out of the foundational one are all going to be publicly released,” Johnson said. “There will be versions that have to live at higher classification levels.”
One of those in-the-works versions that will likely be classified once completed, he confirmed, will pertain to DOD Directive 3000.09.
In January 2023, the first-ever update to the department’s long-standing official policy for “Autonomy in Weapon Systems” went into effect. Broadly, the directive assigns senior defense officials specific responsibilities to oversee and review the development, acquisition, testing and fielding of autonomous weapon platforms built to engage military targets without human intervention.
“So, that came out as the official policy. This isn’t, like, the official toolkit that operationalizes it. This is a kind of voluntary, optional resource that my team [is moving to offer],” Johnson said.
The directive’s sixth requirement mandates that, for weapons systems incorporating AI capabilities, staff have plans in place to ensure consistency with the DOD AI Ethical Principles and the Responsible AI Strategy and Implementation Pathway, and that those plans be incorporated into pre-development and pre-fielding reviews.
“We’re just providing a kind of resource or toolkit that enables you to demonstrate how you have met that requirement for either of those two reviews,” Johnson said.
“Basically what we’re developing is something very similar to what you see in the public version of the toolkit — where, basically, you have assessments and checklists and those route you to certain tools to engage with, and then those can be basically pulled forward and rolled up into a package that can either show how you’re meeting requirement 6, or actually how you’re meeting all of the requirements,” he explained.
Recognizing that “there’s certainly some overlap that can happen between the requirements,” Johnson said his team also wants “to provide basically an optional resource you can use to either show how you’re meeting requirement 6, or how you’re meeting all the requirements — through a process that basically eliminates, as much as possible, some of those redundancies in your answers.”
These assets are envisioned particularly to support officials assembling 3000.09-aligned pre-development and pre-fielding review packages.
“This is the first kind of policy that has a review process, that has a requirement to be able to demonstrate alignment or consistency with the DOD AI ethical principles — and so, what we’re really interested in here is kind of collecting lessons learned [about] what having a requirement like this does for overall mission success and what using the toolkit to meet a requirement like this does for mission success. And we’re hoping to basically acquire some really good data for this that will help us refine the toolkit and help us understand basically, like, is this a good requirement for future policies and what future policies should have a requirement like this?” Johnson told DefenseScoop.