Diving headfirst into the emerging, still-uncertain opportunities of generative artificial intelligence, experts at Marine Corps University (MCU) are experimenting with large language models and associated capabilities that they say could transform military wargaming.
In interviews on the sidelines of this week’s Modern Day Marine convention, Kevin Williamson and Joel Corrente — two wargaming subject matter experts for MCU — briefed DefenseScoop on efforts and plans to incorporate generative AI to support and innovate existing simulations.
“I think this conversation really gets complicated because people always want to jump to the coolest thing possible — instead of the first step,” Williamson said.
Generative AI products like OpenAI’s ChatGPT and Google’s Bard, built on large language models that can produce audio, software code, images, text, video and other media when humans prompt them, have exploded in popularity over the past six months and continue to grow more “intelligent” and sophisticated as they are trained.
At Marine Corps University in Quantico, Virginia, “we’re taking things like large language models, like ChatGPT right, and incorporating them into the simulations,” Corrente explained. In his view, there are “multiple avenues there” for applications — and in particular right now “providing better analytics and analyzation of what’s happening [in simulations and wargames], as well as providing a lower level of entry for the average person to use the software.”
Students at the university are Marine officers continuing their professional military education. Educators use a variety of software and systems for simulations and wargames.
“These are different complex high-level simulations,” Corrente explained, pointing to a massive interactive digital wargaming and tabletop installation at Modern Day Marine.
One application MCU uses is known as Command: Professional Edition — an all-domain simulation software that can be trained and customized to provide students with a clearer picture of how real-world missions or exercises might unfold. The app was demonstrated for DefenseScoop at the convention.
Largely, that program is used to teach joint warfighting to majors as they move up into staff-level billets. Therefore, Williamson noted, “it’s less about playing the game for them and more about getting familiar with making the plans” that could enable future missions.
“That’s really where the education happens, right? Those students begin to see the ‘unknown unknowns,’ and they begin to become the ‘knowns’ as you hit those friction points and you realize the things you maybe didn’t think about in the early part of the planning process. So, a lot of what we do utilizing these different tools is for the planning process — how do we make America’s war planners better at planning war,” Corrente said.
“And for me, in my experience in professional military education — and I think this is true in any education — the more immersion you can provide, the better that lesson sticks inside your head,” he added.
Notably, while it does use data from real U.S. military assets and is continuously updated by developers, the Command app is “not actually like a machine learning software, it’s just a regular software,” Williamson told DefenseScoop.
“The easiest way to think about it is when you see an airframe inside of Command or ship inside of Command, it’s not just a single entity, it’s made up of different components. The sensors, the munitions, everything separate from the airframe, they’re just kind of like added on. So when our pro clients know what the actual classified specs are of these platforms, they can just go in and change the radar cross section, or whatever they know to be true,” he said.
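The component model Williamson describes can be pictured in code. The sketch below is purely illustrative, assuming a minimal object layout in which an airframe carries separate, swappable sensor and munition components with editable parameters; the class names, fields and values are invented placeholders, not Command’s actual data model or any real platform’s specs.

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:
    name: str
    range_nm: float  # detection range in nautical miles (placeholder value)

@dataclass
class Platform:
    """An airframe plus attached components, per the description above."""
    airframe: str
    radar_cross_section_m2: float
    sensors: list = field(default_factory=list)
    munitions: list = field(default_factory=list)

# Build a generic fighter, then attach a sensor component separately.
jet = Platform("Generic Fighter", radar_cross_section_m2=5.0)
jet.sensors.append(Sensor("AESA radar", range_nm=120.0))

# A user with better data can simply override one component value,
# leaving the rest of the entity untouched.
jet.radar_cross_section_m2 = 3.2
```

The point of the structure is the one Williamson makes: because the entity is composed of parts rather than modeled as a single blob, a licensed user can change one known value without rebuilding the whole platform.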
Looking to the future, though, Williamson and others at the university are already puzzling out how Command and similar tools in use could be elevated via advanced machine learning and generative AI.
“The lead dev is exploring OpenAI right now — and other language models — for user experience within Command because, right now, Command’s pretty complicated to learn. It’s not like you can just pick it up and play it,” Williamson said.
“The idea is, at some point in the future — and this is like years down the road, if it’s even viable — we want to create basically an interface socket between the AI and the software so that a major or captain can just write what they want with the platforms they have, and it should be able to do the inputs for them,” he explained.
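One plausible shape for the “interface socket” Williamson describes is having the model emit structured data rather than free text, with a thin validation layer between the model and the simulation. The sketch below is a speculative illustration of that pattern only: the field names, the canned reply string, and the `parse_order` helper are all invented for this example, not part of Command or any announced system.

```python
import json

# Fields a structured "order" must contain before it may touch the
# simulation. Entirely illustrative.
REQUIRED_FIELDS = {"platform", "count", "task", "target"}

def parse_order(llm_output: str) -> dict:
    """Validate the model's structured reply before converting it to inputs."""
    order = json.loads(llm_output)
    missing = REQUIRED_FIELDS - order.keys()
    if missing:
        raise ValueError(f"model reply missing fields: {sorted(missing)}")
    return order

# A hardcoded string standing in for a real model call ("send four
# fighters to strike the named target"). In a real system this would
# come back from the language model, not be typed by hand.
reply = '{"platform": "F/A-18C", "count": 4, "task": "strike", "target": "SAM site"}'
order = parse_order(reply)
print(order["task"], order["count"], order["platform"])
```

Gating the model’s output through a validator like this is one way a captain could “just write what they want” while the software still receives only well-formed inputs it knows how to execute.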
Separately, Williamson is also working on a passion project of his own involving generative AI.
“I’m doing a side research project on a prototype that I can’t name because it’s not announced yet, but it should be coming out soon. It’s an LLM that’s being designed specifically for AI generative wargaming and Defense Department applications,” he said.
The system is being built to run on secured networks and to point to hyperlinks, sources and data that explain the responses it supplies to humans.
“So, you can cross-reference it. But what it does that I’m trying to research is, I can give it a target, I can give it a longitude/latitude, I can tell it six of these airframes, I don’t even have to tell it what it’s loaded out with — I just give it a target location, say where my aircraft are located at and just ask it to build a strike package, and it’ll actually build a strike package in air tasking order form by timeline,” Williamson said.
Broadly, strike packages plan out all the military assets needed to hit a target during a mission.
The LLM research Williamson is working on would also provide a capability to tell humans “the critical parts of the strike package to make it work,” he explained.
“So if you’ve got Growlers that are jamming air defenses, it’ll highlight that Growlers are an important part of the strike package,” Williamson said.
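The highlighting Williamson describes can be sketched as a simple rule over a timeline of sorties: a role filled by only one aircraft type is a single point of failure, so it gets flagged as critical. Everything in this example is an invented stand-in, assuming a toy air-tasking-order-style structure; real strike packages and the unreleased prototype’s logic are far more involved.

```python
from dataclasses import dataclass

@dataclass
class Sortie:
    platform: str
    role: str             # e.g. "strike", "escort", "jamming"
    time_on_station: str  # H-hour offset; "H-15" = 15 minutes before strike

def critical_sorties(package: list) -> list:
    """Flag sorties whose role is filled by only one platform type."""
    platforms_by_role = {}
    for s in package:
        platforms_by_role.setdefault(s.role, set()).add(s.platform)
    return [s for s in package if len(platforms_by_role[s.role]) == 1]

package = [
    Sortie("EA-18G Growler", "jamming", "H-15"),
    Sortie("F/A-18C", "strike", "H-0"),
    Sortie("F-35B", "strike", "H-0"),
]
for s in critical_sorties(package):
    print(s.platform, "is a critical enabler:", s.role)
```

Here only the Growler gets flagged: two platform types share the strike role, but the jamming role rests on one airframe, mirroring the example Williamson gives.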
He recently had a pool of experts submit “save files for a very basic scenario.” Now, he said, he’s “about to start having the AI write its own strike packages in the same scenario, and compare the efficiency between the users and the language model.”
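A head-to-head comparison like the one described could be scored in any number of ways; the snippet below shows one deliberately simple, hypothetical measure (targets hit per aircraft committed) averaged across runs. The metric and the numbers are assumptions made up for illustration, not the study’s actual methodology or results.

```python
from statistics import mean

def efficiency(targets_hit: int, aircraft_used: int) -> float:
    """Toy efficiency score: targets destroyed per aircraft committed."""
    return targets_hit / aircraft_used

# Invented example runs for the same scenario, human-authored vs. LLM-authored.
human_runs = [efficiency(3, 6), efficiency(2, 6)]
llm_runs = [efficiency(3, 5), efficiency(2, 7)]

print(f"human avg: {mean(human_runs):.2f}, LLM avg: {mean(llm_runs):.2f}")
```

Whatever metric is chosen, the key design point is the one in the quote: both the experts and the model plan against the same scenario, so their packages can be scored on identical terms.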
Part of what inspired this research was the wide variety of officers, and approaches, that Williamson has seen trained at MCU.
“For our last exercise, we had an infantry officer who was in charge of the air component. He has no idea about how to conduct an air campaign, he has no idea about aircraft. But if he’s able to use something like this — tell it what he has, tell it what he wants it to do, and it can give him at least a suggestion for him to follow — it’s like a training aid. I kind of see it as like a new generation Blackberry, almost. When it came out, everybody’s got their personal assistant,” he said.
In the interviews with DefenseScoop, Williamson and Corrente also emphasized that they are experimenting responsibly and thinking seriously about the possible risks that come with exploring generative AI.
“My personal opinion is that AI is good — but it should always augment [human efforts], not replace. And I think that there’s a lot of confusion in that space because it seems like people talk in absolutes. It’s either Skynet [like in the Terminator film franchise], or it’s not working at all. There’s a lot of gray area in between where this might not work for a large organization, but for training and education if it’s a good use case, and I can prove that it works, then there’s no reason not to try it,” Williamson told DefenseScoop.
Updated on June 29, 2023, at 11:45 AM: This story has been updated to clarify that the name of the app that MCU is using is called “Command: Professional Edition.”