Army’s ISR Task Force looking to apply AI to intel data sets

Service officials said they will apply ethical principles when using AI with intelligence and targeting data sets.
U.S. Army Soldiers, assigned to the 6th Squadron, 8th Cavalry Regiment, and the Artificial Intelligence Integration Center, conduct drone test flights and software troubleshooting during Allied Spirit 24 at the Hohenfels Training Area, Joint Multinational Readiness Center, Germany, March 6, 2024. (U.S. Army photo by Micah Wilson)

The top priority, or close to it, for the Army's intelligence task force is figuring out how to apply AI to the vast amounts of data collected from a variety of platforms.

“We are drowning in data. I see data as both a challenge but also an opportunity. That’s essentially what AI is for our Army. It’s an opportunity. AI presents opportunities for progress more than any other technology we have seen in the last few decades,” Lt. Gen. Anthony Hale, deputy chief of staff, G2, said during a presentation at the annual AUSA conference Wednesday.

To quantify it, the world will have somewhere around 180 zettabytes (a zettabyte is 10^21 bytes, or a billion terabytes) of data by the end of 2025, Andrew Evans, the director of the Army's Intelligence, Surveillance and Reconnaissance Task Force, said in an interview at the conference.
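
For scale, a back-of-the-envelope conversion of that projection into more familiar units (the 180-zettabyte figure is Evans's; the unit arithmetic is standard):

```python
# Back-of-the-envelope scale check for the ~180 ZB projection cited above.
ZETTABYTE = 10 ** 21                       # bytes
TERABYTE = 10 ** 12                        # bytes

total_bytes = 180 * ZETTABYTE
print(f"{total_bytes / TERABYTE:.2e} TB")  # 1.80e+11 -> 180 billion terabytes
```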

The task force is a temporary entity, initially set up to act as a cross-cutting enabler, that will sunset into a permanent directorate inside the G2 once the newly established all-domain sensing cross-functional team reaches full operational capability. It is taking Army senior leadership's direction not to simply throw people at that problem, but to use AI to unburden analysts.

“One of our key missions in the intel enterprise in terms of transformation, is figuring out how to leverage artificial intelligence to attack that data in the right ways, impactful ways and ethical ways, things that you have to consider when you do the data piece,” Evans said. “AI is going to be a big focus for us moving forward. We could put a million humans against that and the data will always grow at an astronomical rate, beyond what the humans can do.”

There is a sense of urgency behind this effort as well. According to Hale, there could be a “fight tonight” in three of the military’s six geographic combatant commands: Indo-Pacific Command, Central Command and European Command.

“This is the driving pace for transformation in our Army. It’s the urgency that you hear the chief of staff in the Army talk about every day,” he said. “We must learn to leverage AI to organize the world’s information, reduce manpower requirements, make it useful, and position our people for speed and accuracy and delivering information to the commander for decision dominance.”

One area the Army is trying to improve is its processing, exploitation and dissemination, or PED, process.

The service wants to reimagine PED, moving away from the counterinsurgency-era model, in which assets could loiter over a target for days, develop patterns of life and pass data over a permissive network, toward new concepts like multi-intelligence data fusion and analysis.

“When we talk about multi-INT data fusion and analysis, we talk about how do we apply realistic AI-enabled technologies or machine learning models against real operational requirements today,” Col. Brandon VanOrden, chief of intelligence operations in the office of the deputy chief of staff, G2, said at the conference. “We need help from this group to do things like vertically layer those ML models and then horizontally integrate them across platforms and programs to apply, again, against real, finite resources and prioritized operational requirements in our assigned theaters.”
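
To make the layering idea concrete, here is a minimal, hypothetical sketch of what “vertically layering” models and “horizontally integrating” them across feeds could look like. Every model, sensor name and threshold below is a stand-in assumption, not anything the Army has described:

```python
from dataclasses import dataclass

# Hypothetical sketch only: "vertical" layering chains models in sequence,
# "horizontal" integration fuses outputs from multiple sensor feeds.
# Every model here is a stub, not a real Army system.

@dataclass
class Detection:
    sensor: str        # which feed produced the hit (e.g., "EO", "SAR")
    label: str         # coarse class from the first-stage detector
    confidence: float

def detect(feed_name: str, raw_frames: list) -> list:
    """First (vertical) layer: a cheap detector flags candidate objects."""
    # Stub: a real system would run an object-detection model per frame.
    return [Detection(sensor=feed_name, label="vehicle", confidence=0.72)
            for _ in raw_frames]

def classify(det: Detection) -> Detection:
    """Second (vertical) layer: a finer-grained classifier refines the label."""
    # Stub: a real system would crop the detection and run a second model.
    if det.confidence > 0.7:
        det.label = "armored_vehicle"
    return det

def fuse(per_sensor: dict) -> list:
    """Horizontal integration: keep detections corroborated by another feed."""
    corroborated = []
    for sensor, dets in per_sensor.items():
        others = [s for s in per_sensor if s != sensor]
        for det in dets:
            if any(per_sensor[o] for o in others):  # naive corroboration test
                corroborated.append(det)
    return corroborated

# Usage: two hypothetical feeds, each pushed through the layered pipeline.
feeds = {"EO": [b"frame1"], "SAR": [b"frame2"]}
layered = {name: [classify(d) for d in detect(name, frames)]
           for name, frames in feeds.items()}
print(fuse(layered))
```

The vertical chain trades a cheap first pass for a costlier second look; the horizontal step keeps only detections corroborated across feeds.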

VanOrden also explained that the Army wants a “conversation” with data. This means having the ability to ask informed and operational questions rather than looking at data for data’s sake.

In a hypothetical example, he envisions an analyst asking a data set where a unit is, what it’s doing and where it might be going next. The data should also be able to tell the analyst what happens if conditions change and how that might affect the probability of what that unit will do.
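
As a rough illustration of what that “conversation” could look like as an analyst-facing interface, the sketch below answers the where-is and where-next questions against a toy track store. The data model, field names and the constant-velocity projection are all hypothetical assumptions:

```python
import math
from datetime import datetime

# Hypothetical "conversation with data" sketch. The track store, field
# names and the crude projection model are all illustrative assumptions.
TRACKS = [
    {"unit": "unit-A", "position": (48.71, 11.89), "activity": "road march",
     "speed_kmh": 25.0, "seen": datetime(2024, 10, 16, 6, 0)},
]

def where_is(unit: str):
    """Where is the unit and what is it doing? Returns the freshest track."""
    hits = [t for t in TRACKS if t["unit"] == unit]
    return max(hits, key=lambda t: t["seen"]) if hits else None

def project(unit: str, hours: float):
    """Where might it be next, under a naive constant-velocity eastward model?"""
    track = where_is(unit)
    if track is None:
        return None
    lat, lon = track["position"]
    km = track["speed_kmh"] * hours
    # Degrees of longitude shrink with latitude; a real system would reason
    # over road networks, terrain and doctrine instead of straight lines.
    return (lat, lon + km / (111.0 * math.cos(math.radians(lat))))

print(where_is("unit-A"))           # position and activity now
print(project("unit-A", hours=2))   # a crude "where next" answer
```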

Ultimately, it comes down to providing better context for that human analyst.

One of the major hurdles for the military in applying AI for intelligence purposes is being able to train the algorithms against the data that resides in highly classified environments.

“One of the biggest challenges is that you have to train your algorithms against data of military grade. If you’re talking about highly classified data, the algorithms, in most cases, some of the stuff that you would see around AUSA today is trained against commercially available data,” Evans said. “That does not always represent military data. Your algorithm may be trained differently than how it will be employed, if you think about that. One of the things we have to do as intel professionals [is] help bridge that gap. How do we ensure that an algorithm that we may be interested in integrating has been trained against data of military value? We’re trying to work that out right now.”
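
What Evans describes is the classic domain-shift problem. One hedged sketch of how a shop might surface it, assuming it holds a small labeled sample representative of operational data: score the commercially trained model on both its original validation set and the representative sample, then flag a large accuracy drop. The model, data and the 10-point tolerance are illustrative assumptions:

```python
# Hypothetical domain-shift check: compare a model's accuracy on the
# commercial-style data it was trained against vs. a small labeled sample
# representative of operational data. `predict` is a stand-in interface.

def accuracy(model, samples) -> float:
    correct = sum(1 for x, y in samples if model.predict(x) == y)
    return correct / len(samples)

def shift_report(model, commercial_val, military_val, max_drop=0.10) -> str:
    a_com = accuracy(model, commercial_val)
    a_mil = accuracy(model, military_val)
    drop = a_com - a_mil
    verdict = "OK to field" if drop <= max_drop else "retrain or fine-tune first"
    return (f"commercial acc={a_com:.2f}, representative acc={a_mil:.2f}, "
            f"drop={drop:.2f} -> {verdict}")

class ToyModel:
    """Stand-in model that always answers 'ship' (illustrative only)."""
    def predict(self, x):
        return "ship"

commercial = [("img1", "ship"), ("img2", "ship"),
              ("img3", "ship"), ("img4", "open_water")]
representative = [("img5", "ship"), ("img6", "open_water"),
                  ("img7", "open_water"), ("img8", "open_water")]
print(shift_report(ToyModel(), commercial, representative))
```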

Hale noted that the Army needs industry's help with data analytics, security, generative AI and large language models to address the service's challenges.

While there are plenty of AI providers, Evans said, the question becomes, who are the trusted ones that have trained against the right types of data sets, and then when that algorithm is employed, will it learn? Algorithms can’t be a one-and-done thing, he added, noting they have to learn as they go.

According to Evans, one of the first efforts aimed at providing avenues for industry is a governance process that doesn’t stifle innovation or rapid employment.

“How do we create governance that’s rapid and responsive, so that when we do integrate AI, it’s being done ethically, but it’s still keeping pace with technology?” he said. “Then, how do we give vendors a space where they can come, bring their algorithms and models, test it and validate it against military-grade data, and then deploy it and allow users to download it and use it on their data set?”

The program executive office for intelligence, electronic warfare and sensors is building an AI and machine learning ecosystem: a trusted, safe environment where other program managers and Army elements can bring their models to run against curated and trusted data. Models will be trained and verified there, with the right security wrappers in place to catch any drift or out-of-tolerance behavior, Brig. Gen. Ed Barker, the program executive officer, said.
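
A common, generic way to watch for the “drift” and “out of tolerance” conditions Barker mentions is to compare the distribution of a model's recent confidence scores against a baseline window, for example with the Population Stability Index. The sketch below illustrates that general technique, not the PEO's actual implementation; the 0.1/0.25 cutoffs are the conventional rules of thumb:

```python
import math

# Generic drift check via the Population Stability Index (PSI): compare the
# distribution of recent model confidence scores against a baseline window.
# The 0.1 / 0.25 cutoffs are conventional rules of thumb, not Army policy.

def psi(baseline, recent, bins=10) -> float:
    edges = [i / bins for i in range(bins + 1)]
    def frac(scores, lo, hi):
        n = sum(1 for s in scores if lo <= s < hi) or 1  # crude zero-smoothing
        return n / len(scores)
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        b, r = frac(baseline, lo, hi), frac(recent, lo, hi)
        total += (r - b) * math.log(r / b)
    return total

baseline = [0.1, 0.2, 0.2, 0.3, 0.7, 0.8, 0.8, 0.9]   # scores at fielding time
recent   = [0.4, 0.5, 0.5, 0.5, 0.6, 0.6, 0.6, 0.6]   # scores have bunched up
value = psi(baseline, recent)
status = "stable" if value < 0.1 else "watch" if value < 0.25 else "out of tolerance"
print(f"PSI={value:.2f} -> {status}")
```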

“As we build that out and establish that trusted environment, we're looking at ways to really get at what we've talked about a little bit here with regard to PED,” he said. “The goal is really to do that hard work through AI and ML, and not place that burden on our analyst, [to] really allow that analyst to get at the high-level analysis that we want them to, to provide that context that really only the human can do.”

Barker noted there are use cases officials are working on with U.S. Army Pacific, in partnership with the National Reconnaissance Office, the National Geospatial-Intelligence Agency and the Chief Digital and AI Office, to help solve the PED problems in the Pacific region.

Ethical principles

Officials noted that AI won't totally replace military service members, especially when it comes to targeting. They stressed that the Army will apply ethical principles to these algorithms and ensure the technology never pulls the trigger itself, but rather aids humans' decision-making processes.

“We must also execute these efforts responsibly. Like [Defense] Secretary [Lloyd] Austin says, responsible AI is the place where cutting-edge technology meets our timeless values,” Hale said.

Evans noted that machines do computation very well while humans apply discretion and values-based assessments.

Guarding against AI hallucination and building trust in those algorithms will be a big challenge and focus for the Army.

“Hallucination, it is a thing right now. We are probably technologically capable … of doing a lot of things fast, but I know from the way the U.S. wants to fight by laws of armed conflict, we are quite a distance right now from trusting AI and the ability to automate our targeting in that respect,” Brig. Gen. Rory Crooks, director of the long-range precision fires cross-functional team, said during the conference. “We must build trust.”

Offering hypothetical examples and potential use cases for how AI can aid analysts, Evans said the machines can alert humans to look at a particular problem, to which the soldier can then apply discretion.

“Is this a problem? And if it is, how do I want to respond to this problem?” he said. “What we need humans to do is look at what the machine has nominated as a potential hotspot, problem, target, threat, name your condition, and then to apply a value to that and say, ‘Yep, that is and yes I’m going to take action.’”

One example could be in the vast expanses of the Pacific Ocean, where an analyst could be tasked to track hostile or aggressive ships across a 700,000-square-mile set of satellite imagery. Rather than task humans with staring at blue squares for endless hours, the algorithm can identify almost instantaneously where ships are.

With certain rules set up for it, the AI could determine, based on the data set, the most important areas for human interrogation. If humans aren't satisfied with that or want more data, they can ask for more nominations.
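
As a hedged sketch of that workflow, the snippet below scores imagery tiles with a stand-in detector and nominates only high-confidence hits; lowering the threshold is the “ask for more nominations” knob. Tile names, scores and thresholds are all illustrative assumptions:

```python
import random

# Illustrative nomination workflow over a tiled maritime imagery set.
# The random scores stand in for a real ship-detection model's confidences.
random.seed(7)

TILES = [f"tile_{i:04d}" for i in range(1000)]    # a large ocean area of interest
SCORES = {t: random.random() for t in TILES}      # stand-in model confidences

def nominate(scores: dict, threshold: float) -> list:
    """Surface only tiles above the confidence threshold for human review."""
    hits = [(t, s) for t, s in scores.items() if s >= threshold]
    return sorted(hits, key=lambda ts: -ts[1])

first_pass = nominate(SCORES, threshold=0.95)     # strict rules first
print(f"{len(first_pass)} tiles nominated for human interrogation")

# "Ask for more nominations": relax the rule and re-rank the same scores.
second_pass = nominate(SCORES, threshold=0.85)
print(f"{len(second_pass)} tiles surfaced at the relaxed threshold")
```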

A tricky issue going forward, however, is what happens if the AI doesn't make a nomination.

“[My] personal view of this is where we’re going to find the deepest learning on AI is not … when it provides a nomination, it’s what happens if it didn’t. How do you know if it didn’t nominate? Is there still a threat out there that you need to go look at?” Evans said. “That’s where we’re going to have to just learn as we go on this … Trust is going to be about repeatability and then verifying that the information provided was complete. No one really likes to talk about that, but a complete assessment from a machine is a vital part of building trust. If it’s incomplete, you can take action, you can do all the right ethical things, but you’re still actioning on an incomplete data set. We have to make sure that it’s looking at the totality [of] what you need it to look at.”
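
Evans's completeness point can be made mechanical: alongside its nominations, the machine should report what it actually examined, so that an empty nomination list is distinguishable from an unscanned area. A minimal, hypothetical coverage audit along those lines:

```python
# Hypothetical completeness audit: verify the model scanned every tile it
# was tasked with, so "no nominations" can be trusted as "nothing found"
# rather than "didn't look". All names and structures are illustrative.

def coverage_audit(tasked: set, processed: set) -> dict:
    missed = tasked - processed
    return {
        "tasked": len(tasked),
        "processed": len(tasked & processed),
        "missed": sorted(missed),
        "complete": not missed,
    }

tasked = {f"tile_{i:04d}" for i in range(10)}
processed = tasked - {"tile_0003"}            # one tile silently skipped
report = coverage_audit(tasked, processed)
print(report)                                 # flags tile_0003 as unexamined
```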
