Global security nonprofit MITRE is steering two new artificial intelligence-focused “exploratory teams” for NATO’s Information Systems and Technology Panel, an official leading that work told DefenseScoop in an exclusive interview this week.
Dr. Christina Liaghati and her colleagues kicked off those efforts at an event in Paris in March after more than 10 nations and NATO bodies backed their technical activity proposals and concepts.
One exploratory team is centered on accelerating AI adoption and the other — which Liaghati is helping spearhead — focuses on the evolving threat and vulnerability landscape underpinning public and private AI-enabled systems. NATO intends to apply both teams’ findings to inform its 2023 biannual AI strategy update.
“They’re very closely related, because whenever you’re trying to help organizations adopt AI, you want to help them do it in a secure and [assured] way,” she told DefenseScoop.
With bachelor’s and master’s degrees in physics and a doctorate in engineering, Liaghati joined MITRE about seven years ago. Now, as the AI strategy execution and operations manager for MITRE’s AI and Autonomy Innovation Center, she’s generating actionable tools, capabilities, data and frameworks to inform researchers, businesses and the public on existing and potential risks associated with AI applications.
“I think most of the community didn’t really recognize the vulnerabilities that came with deploying these AI-enabled systems until, honestly, the most recent few years here,” she said.
Experts “draw a lot of parallels to the cybersecurity world 20 or 30 years ago” regarding “things that we could have and should have been doing better,” Liaghati explained.
“We’re in this proactive window with AI right now where we can do that, and it’s worth putting the time and energy into it now to be able to make the whole situation better as these AI systems are getting deployed across so many different consequential environments. It’s not just movie recommenders and Snapchat filters anymore. It’s really consequential systems, and especially as we bring them into more government contexts, it’s something that we need to be very deliberate about making sure that we’re doing in a safe and assured way,” she said.
While the public launch of increasingly “intelligent” generative AI products like Google’s Bard and OpenAI’s ChatGPT has in recent months demonstrated for people across generational groups “the reality of how [the emerging technology] could touch their lives,” Liaghati noted that the field itself “hasn’t actually made that many leaps and bounds.”
“It’s not like AI was just born six months ago, right? Like, this is not the case for most of the community. [But] I think — in the last two years in particular — we’ve seen a lot of other applications of computer vision or reinforcement learning or traditional machine learning systems be applied in ways that companies are starting to see the power of it. And it started to immediately raise those questions of security and assurance in parallel — like ‘zero trust’ was the buzzword of the year last year for the cyber community,” she said.
About three years ago, in collaboration with Microsoft and about a dozen other companies, she and the MITRE team began building out an adversarial machine learning threat matrix. The tool evolved over time into MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), which mirrors the approach of MITRE ATT&CK, the organization’s widely used knowledge base for aggregating and sharing adversary tactics and techniques.
During the interview, Liaghati spotlighted a specific real-world attack through the ATLAS lens to demonstrate how the tool can be used “to show a string of specific actions that a hacker or an adversary might take to take advantage of how AI is being used in our system-of-systems context.”
The Shanghai tax authority in China was swindled out of $77 million over a two-and-a-half-year period, she noted.
“This is just two individuals, not even nation-state level actors here, taking advantage of the vulnerability of how they were using facial recognition in their tax system. So, you can kind of see these ATLAS tactics and techniques,” she explained.
Today, the ATLAS community involves more than 80 companies and organizations, and Liaghati’s NATO exploratory team is being built out around it.
“What we’re doing is actually tailoring a few NATO use cases based on the way that NATO is using AI to show them concrete examples of the kinds of attacks that we’ve seen in the real world on different industry or government systems,” she explained.
Through the exploratory team, experts are bringing together perspectives from the governments, militaries and companies connected to NATO’s member nations to inform their work. Officials from across the world meet often on Zoom, and in person at least twice a year. The ultimate hope, according to Liaghati, is for the effort to mature into a NATO research task group, which she described as “kind of a three-year much more deliberate and initiated” entity that exploratory teams can evolve into.
“It’s a pretty broad scope and broad collaboration in this first year, because we wanted to get as many folks involved as wanted to be involved. And then we’re narrowing down our list of priorities for what we really want to do as a community next year,” she said.
One initial priority that has surfaced is a need for better threat intelligence-sharing mechanisms across the alliance and between major corporations.
“We’ve even been told by executives at Microsoft and Google, like, ‘I can’t call my friends at Google and say, hey, guys, we just [red-teamed] our system and found this crazy vulnerability that probably impacts you too, and it underpins a lot of important things in the user base that we both touch.’ So we’ve been working on coming up with a way — and this is actually for both the NATO community and the ATLAS community that we’ve been kind of building this out around over the year — to share protected threat intelligence,” Liaghati said.