
Army looking at the possibility of ‘AI BOMs’

Similar to SBOMs, the Army is looking at potentially adopting an AI bill of materials.

PHILADELPHIA, Pa. — The Army is exploring the possibility of asking commercial companies to open up the hood of their artificial intelligence algorithms, as a means of better understanding what’s in them and reducing risk and cyber vulnerabilities.

The notion is referred to as an artificial intelligence bill of materials, or AI BOM. It stems from software BOMs, or SBOMs: lists of the components in a piece of software that essentially provide the supply chain “nutritional value” of code, so those adopting it into their systems know every piece it’s made of and can better understand potential risks or threat vectors to their networks.
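There is no settled schema yet for what an AI BOM would contain. As a rough illustration only, the Python sketch below shows the kind of manifest such a program might request: model identity, training datasets with content hashes, and the hyperparameters used. All field names here are hypothetical, not drawn from any Army or industry standard.

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical AI BOM records; field names are illustrative, not a published standard.
@dataclass
class DatasetRecord:
    name: str
    version: str
    sha256: str      # content hash so a consumer can verify the data hasn't changed
    provenance: str  # where the data came from

@dataclass
class AIBOM:
    model_name: str
    model_version: str
    framework: str                                            # e.g., training library and version
    training_datasets: list = field(default_factory=list)     # DatasetRecord entries
    hyperparameters: dict = field(default_factory=dict)
    evaluation_reports: list = field(default_factory=list)

bom = AIBOM(
    model_name="example-classifier",
    model_version="1.0.0",
    framework="pytorch==2.3",
    training_datasets=[DatasetRecord(
        name="example-imagery",
        version="2024-01",
        sha256="0" * 64,  # placeholder hash
        provenance="vendor-curated, license on file",
    )],
    hyperparameters={"epochs": 20, "learning_rate": 3e-4},
    evaluation_reports=["robustness-suite-v2"],
)

# Serialize the manifest the way an SBOM would accompany delivered software.
print(json.dumps(asdict(bom), indent=2))
```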

“We’re toying with the notion of an AI BOM [program]. That’s because really, we’re looking at things from a risk perspective. Just like we’re securing our supply chain, semiconductors, components, subcomponents, we’re also thinking about that from a digital perspective,” Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology, told reporters at the Army’s Technical Exchange Meeting X on Thursday.

Bang was sure to clarify that if the Army goes down this path, it’s not seeking to gain access to a vendor’s intellectual property.


“I just want to make sure we’re explicit about this: It’s not to get a vendor’s IP, it’s really about how do we manage the cyber risk and the vulnerabilities,” he said.

AI BOMs pose a slightly more challenging issue than SBOMs, he said, because opening the hood could allow someone, not necessarily the Army, to gain access to intellectual property and potentially reverse engineer it.

“Arguably, depending on what you request out of that BOM, you have the ingredients to potentially backwards engineer their algorithm. We don’t want to impinge on their IP. We’re not going to try to reverse engineer and do it ourselves,” Bang said. “The intent is to say, ‘Can we look at the observability, the traceability of how you actually develop the algorithms, what are the features and the parameters you tested, what are the datasets that you use to ensure we have more trusted, for lack of better words, outcomes.’ And that there’s no risk like Trojan triggers, poisoned datasets, or prompting of unintentional outcomes over the algorithm. We really need to think about that. But again, while it’s pretty clear on the software and the data side, it’s not about IP.”
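One way a BOM entry could address the poisoned-dataset risk Bang describes, without exposing a vendor’s IP, is content hashing: if the BOM declares a hash for each training dataset, the Army could verify that the data a model shipped with matches what the vendor declared. A minimal sketch, assuming the hypothetical DatasetRecord fields above:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a dataset file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: Path, declared_sha256: str) -> bool:
    """Return True if the file on disk matches the hash declared in the AI BOM."""
    return sha256_of(path) == declared_sha256

# Hypothetical usage: the declared hash would come from the vendor's BOM entry.
# ok = verify_dataset(Path("example-imagery.tar"), bom.training_datasets[0].sha256)
```

A check like this only confirms the declared inputs are intact; catching subtler tampering, such as Trojan triggers embedded during training, would require the behavioral testing and traceability Bang alludes to.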

The Army is still thinking through how to work with industry in this area, Bang said, noting he held a roundtable with AI companies earlier in the day to hear their perspectives on AI BOMs.

A potential secondary goal for AI BOMs is to help users better understand an algorithm’s training data so they can explain the results it arrives at.


“Initially, it’s about reducing the risk, but it will help us with the trust or explainability down the road,” Bang said. “We want to think about the outcomes of that behavior a little bit differently, and understanding how they did something might help us get better insights into the outcome so we can actually get to that as well.”
