Army rethinks its approach to AI-enabled risks via Project Linchpin
Through Project Linchpin, its first program of record for scaling artificial intelligence into weapons and other systems, the Army is moving quickly to stand up an operational pipeline and an overarching infrastructure for trusted environments where in-house and third-party algorithms can be developed and validated responsibly and securely.
Three senior defense officials provided the latest update on that nascent effort to a small group of reporters during a media roundtable at the Pentagon on Monday.
Details they shared suggest that, through Project Linchpin, the Army is evolving its approach to the known and unknown dangers of deploying AI. In parallel, officials are also producing a new “AI risk reduction framework” to inform all future pursuits.
“[This is] in line with a lot of the work that the White House has pushed out with AI, and that the DOD has pushed out about responsible AI with Task Force Lima and all those types of things. We’re definitely embedded with all those things, but we’re also looking at what are the second- and third-order impacts of things that we’re going to have to address earlier, from an obstacle standpoint,” Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology, explained.
First conceptualized in 2022, Linchpin is ultimately aimed at generating a safe mechanism to continuously integrate government- and industry-made AI and machine learning capabilities into Army programs.
“Think of Project Linchpin as our path to delivering trusted AI,” Bharat Patel, product lead for Project Linchpin at the Army’s program executive office for intelligence, electronic warfare and sensors, told reporters.
“If I can leave you with something right up front — it’s literally all the boring parts of AI. It’s your infrastructure, it’s your standards, it’s your governance, it’s your process. All of those areas are things that we’re taking on, because that’s how you can tap into the AI ecosystem and that’s how you deliver capabilities at scale,” Patel said.
The Army’s Tactical Intelligence Targeting Access Node (TITAN) program, which encompasses its next-generation ground system to capture and disseminate sensor data for sensor-to-shooter kill chains, marks the first program that officials seek to enable with algorithms affiliated with Project Linchpin.
“I would say we’re, right now, just collecting AI use cases. TITAN is expected to support a certain theater. We’re working with that theater and that program to determine everything — kind of the left and right limits, and how would we deploy — all that is happening now. But if you think about it, for classic computer vision problems, each theater is different. You can’t think a model for [European Command] is going to work out of the box for [Indo-Pacific Command]. The trees are different, the biosphere is different, all that is different. That’s why it’s super important to get after the use case and where that [area of responsibility] is specifically at. So, we are looking at that very closely because we want to make sure we tailor the model to support the customer,” Patel told DefenseScoop during the roundtable.
Bang, Patel and their team have been conducting what they called “a ton of market research” as part of standing up this new program. Since November 2022, they’ve released four requests for information on Project Linchpin, collected “well over” 500 data points, and met individually with more than 250 companies.
Momentum on those fronts is expected to keep building in the near term, possibly aided by budget increases, according to Matt Willis, the director of Army prize competitions and the small business innovation research (SBIR) program.
“In [fiscal 2025], in the next year or so, we’re predicting a significant investment in our SBIR program towards AI in particular — again, strategically aligned with Project Linchpin, [that’s] potentially up to or more than $150 million. So, that’s about 40% of the program, and this really demonstrates our commitment to innovation, to AI and how small businesses across the country can certainly contribute to the Army,” he said.
At the roundtable, the officials also repeatedly emphasized their intent to confront the ethical and security risks associated with AI and machine learning as Project Linchpin continues to mature.
To that end, Army officials are crafting that “AI risk reduction framework,” which Bang noted will be designed to address Army-specific “obstacles” that accompany deploying the emerging technology.
“It’s really a way to identify the risks and mitigate some of those risks — to include data poisoning, injections, and adversarial text attacks. Now, specifically, are you asking ‘Have there been those types of things that we found in Linchpin?’ There are those types of things that we know are out there in the environment or the enterprise. And so whether it’s commercial or on the DOD side, we know they’re out there. So we’re actually trying to mitigate some of those,” Bang told DefenseScoop.
“It’s really a framework to look at what are the cyber risks and vulnerabilities associated with third-party algorithms, and how do we work with industry to categorize that to look at tools and processes to reduce the risks, so then now we can adopt that faster?” he added.
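The Army has not published what that categorization will look like. Purely as an illustration, a minimal sketch in Python, assuming hypothetical field names and using only the risk categories Bang named, might track identified risks and mitigations for a third-party algorithm like this:

```python
# Hypothetical illustration only; the Army has not released the framework's structure.
# Risk categories below are the ones named in Bang's remarks; all field and class
# names are invented for this sketch.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class RiskCategory(Enum):
    DATA_POISONING = "data poisoning"
    INJECTION = "injection"
    ADVERSARIAL_TEXT = "adversarial text attack"


@dataclass
class ThirdPartyAlgorithmAssessment:
    """Minimal record such a framework might keep for one vendor-supplied model."""
    model_name: str
    vendor: str
    identified_risks: List[RiskCategory] = field(default_factory=list)
    mitigations: Dict[RiskCategory, str] = field(default_factory=dict)

    def unmitigated(self) -> List[RiskCategory]:
        # Risks that have been identified but have no recorded mitigation yet.
        return [r for r in self.identified_risks if r not in self.mitigations]


# Example with placeholder values: one risk still lacks a mitigation.
assessment = ThirdPartyAlgorithmAssessment(
    model_name="example-model",
    vendor="Example Corp",
    identified_risks=[RiskCategory.DATA_POISONING, RiskCategory.INJECTION],
    mitigations={RiskCategory.DATA_POISONING: "training-data provenance review"},
)
print(assessment.unmitigated())  # prints the one risk without a mitigation: INJECTION
```

The value of keeping a record like this, per the framework’s stated goal, would be making unmitigated risks visible so third-party algorithms can be adopted faster once those risks are addressed.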
His team has also been hosting a number of engagements with industry partners to chart a path forward on a potential requirement for companies to provide AI bills of materials, or AI BOMs.
Such resources are envisioned to help the government better understand the potential risks or threat vectors that these capabilities could introduce to its networks.
“We’re, again, conducting more sessions with industry. We understand their perspective. And it’s not to reverse engineer any [intellectual property], it’s really for us to get a better handle on the security risk associated with the algorithms. But we do understand industry’s feedback, so we are working really more on an AI summary card. You can think about it more like a baseball card. It’s got certain stats about the algorithm, intended usage and those types of things. So it’s not as detailed or necessarily threatening to industry about IP,” Bang said.
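Bang did not describe the summary card’s format. As a rough sketch only, following his “baseball card” analogy of high-level stats and intended usage, and with entirely hypothetical fields and placeholder values, such a card might look something like this in Python:

```python
# Hypothetical sketch of an "AI summary card"; the Army has not released a format.
# Every field here is an assumption loosely modeled on the "baseball card" analogy:
# high-level stats about the algorithm and its intended usage, not vendor IP.
from dataclasses import dataclass, asdict
import json


@dataclass
class AISummaryCard:
    model_name: str
    vendor: str
    task: str                    # e.g., the computer-vision task the model performs
    intended_use: str            # e.g., the theater or mission context it supports
    training_data_summary: str   # a description only, not the data itself
    evaluation_metrics: dict     # vendor-reported stats about model performance
    known_limitations: list


card = AISummaryCard(
    model_name="example-detector-v1",   # placeholder values for illustration
    vendor="Example Corp",
    task="object detection in overhead imagery",
    intended_use="computer-vision use cases of the kind TITAN is expected to support",
    training_data_summary="commercial satellite imagery with vendor-curated labels",
    evaluation_metrics={"mAP@0.5": 0.71},
    known_limitations=["not validated outside the training region's terrain"],
)

# Serialize the card so it can be shared and reviewed alongside the algorithm.
print(json.dumps(asdict(card), indent=2))
```

Unlike a full AI BOM, a card at this level of abstraction describes the model in summary terms rather than enumerating its components, which is why officials expect it to be less threatening to industry concerns about intellectual property.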