
DARPA transitions new technology to shield military AI systems from trickery

The capabilities were born out of the agency's Guaranteeing AI Robustness Against Deception program.
A U.S. Marine holds a Defense Advanced Research Projects Agency Information Innovation Office challenge coin after being presented the coin on Camp Hansen, Okinawa, Japan, Dec. 1, 2022. (U.S. Marine Corps photo by Lance Cpl. Christopher R. Lape)

The Defense Advanced Research Projects Agency has transitioned newly developed defensive capabilities to the Chief Digital and Artificial Intelligence Office, according to a senior official.

The tech was born out of DARPA’s Guaranteeing AI Robustness Against Deception (GARD) program, which kicked off a few years ago.

“That is a program that’s focused on building defenses against adversarial attacks on AI systems,” Matt Turek, deputy director for DARPA’s Information Innovation Office, said Wednesday during a virtual event hosted by the Center for Strategic and International Studies.

Officials recognize that U.S. military computer vision and machine learning technologies could be tricked.


“AI systems are made out of software, obviously, right, so they inherit all the cyber vulnerabilities — and those are an important class of vulnerabilities — but [that’s] not what I’m talking about here. There are sort of unique classes of vulnerabilities for AI or autonomous systems, where you can do things like insert noise patterns into sensor data that might cause an AI system to misclassify. So you can essentially by adding noise to an image or a sensor, perhaps break a downstream machine learning algorithm. You can also with knowledge of that algorithm sometimes create physically realizable attacks. So you can generate very purposefully a particular sticker that you could put on a physical object that when the data is collected, when that object shows up in an image, that that particular … adversarial patch makes it so that the machine learning algorithm might not recognize that object exists or might misclassify that tank as a school bus,” Turek explained.
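The attack Turek describes matches what researchers commonly call an adversarial example: a perturbation small enough to read as noise to a human but crafted to push a classifier across a decision boundary. The sketch below is a rough illustration only, not GARD code, using a fast gradient sign method (FGSM)-style step in PyTorch; the pretrained model and the input tensor are stand-ins for whatever classifier and sensor data an attacker might target.

```python
# Illustrative sketch only (not GARD code): an FGSM-style perturbation, the kind
# of "noise pattern" Turek describes. The model, image, and label are placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Return an image with small crafted noise that can flip a classifier's output."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# x: a (1, 3, 224, 224) tensor in [0, 1]; y: a length-1 tensor with the true class index.
# adversarial = fgsm_perturb(x, y)  # often misclassified despite looking unchanged to a person
```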

Through GARD, the agency has been working with industry partners to develop algorithms and other capabilities to thwart that type of trickery.

“Deception attacks can enable adversaries to take control of autonomous systems, alter conclusions of ML-based decision support applications, and compromise tools and systems that rely on ML and AI technologies. Current techniques for defending ML and AI have proven brittle due to a focus on individual attack methods and weak methods for testing and evaluation. The GARD program is developing techniques that address the current limitations of defenses and produce ML and AI systems suitable for use in adversarial environments,” budget justification documents state.
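The budget language does not spell out the specific defenses, but one widely studied countermeasure in the open literature is adversarial training, in which perturbed examples are folded back into the training loop so the model learns to tolerate them. The sketch below is a generic illustration of that idea, not the GARD program's actual tooling; the model, optimizer, and data are placeholders.

```python
# Illustrative sketch only (not the GARD defenses): adversarial training, one
# widely studied way to harden a classifier against small-noise attacks.
# The model, optimizer, images, and labels are placeholders.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One update on a mix of clean and adversarially perturbed inputs."""
    # Craft FGSM-style perturbations against the current model.
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    perturbed = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Train on both clean and perturbed batches so small noise is less disruptive.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(images.detach()), labels) \
         + 0.5 * F.cross_entropy(model(perturbed), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```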

Plans called for transitioning GARD-related capabilities to other Defense Department components in fiscal 2024, when the project is slated to wind down, according to justification books.

“Whether that is physically realizable attacks or noise patterns that are added to AI systems, the GARD program has built state-of-the-art defenses against those. Some of those tools and capabilities have been provided to CDAO,” Turek said, referring to the Chief Digital and AI Office.


The CDAO was formed in 2022 to serve as a hub for accelerating the adoption of artificial intelligence and related tech across the Defense Department. That office is a logical transition partner for DARPA, Turek said.

“DARPA’s core mission [is to] prevent and create strategic surprise. So the implication is that we’re looking over the horizon for transformative capabilities. So in some sense, we are very early in the research pipeline, typically. Products that come out of those research programs could go a couple places … Transitioning them to CDAO, for instance, might enable broad transition across the entirety of the DOD,” Turek said. “I think having an organization that can provide some shared resources and capabilities across the department [and] can be a resource or place people can go look for help or tools or capabilities — I think that’s really useful. And from a DARPA perspective gives us a natural transition partner.”

However, the agency has a “multi-pronged” transition strategy that is also intended to aid other organizations outside the Defense Department, he noted.

“We have created new algorithms — some of those actually in partnership both with the research teams that we’re funding but with researchers at Google — and then created open-source tools that we can provide back to the broader community so that we can really raise defenses broadly in AI and machine learning. But those tools [are] also provided to CDAO and then they can be customized for DOD use cases and needs,” Turek said.

The Pentagon requested $10 million in research, development, test and evaluation funding for the program in fiscal 2024. The budget proposal for fiscal 2025 includes no additional money for the program because the effort is wrapping up, per the justification books.


The GARD initiative is just one example of DARPA’s pursuit of new artificial intelligence-related technologies. The agency has been working on AI for decades, going back to its early days after the organization was created in 1958, but that area of focus has ramped up in recent years. Today, about 70% of its programs have AI, machine learning or autonomy associated with them, according to officials.

Through its campaign known as AI Next, DARPA has invested more than $2 billion since 2018 to advance AI for national security purposes, according to the agency.

The president’s fiscal 2025 budget request includes an additional $310 million for DARPA’s AI Forward initiative “to research and develop trustworthy technology that operates reliably, interacts appropriately with people, and meets the most pressing national security and societal needs in an ethical manner,” according to a White House fact sheet.

“There is really broad penetration across the agency. So it’s really difficult to sum up, you know, what the agency as a whole is up to, but from an [information innovation office] perspective, we’re really looking to try and advance … how do we get to a highly trustworthy AI that we can bet our lives on and [ensure] that not be a foolish thing to do,” Turek said.


Written by Jon Harper

Jon Harper is Managing Editor of DefenseScoop, the Scoop News Group’s online publication focused on the Pentagon and its pursuit of new capabilities. He leads an award-winning team of journalists in providing breaking news and in-depth analysis on military technology and the ways in which it is shaping how the Defense Department operates and modernizes. You can also follow him on X (the social media platform formerly known as Twitter) @Jon_Harper_
