An 82nd Airborne unit built its own AI tools as the Army pushes a ‘safe playground’ for soldiers to experiment with the tech
Since October, an intelligence battalion in the 82nd Airborne Division has been using home-grown AI tools that unit leaders say reduce staff work and make it easier to support the “All American” requirement of being able to deploy anywhere across the globe within 18 hours.
Officials told DefenseScoop in a recent interview that they’ve used the capabilities — created in Maven Smart System — to aggregate personnel, equipment and property management information across the unit, reducing “the amount of monotonous staff work” for soldiers, according to Maj. Brandon Smith, executive officer for the 319th Intelligence and Electronic Warfare Battalion at Fort Bragg, North Carolina.
Staff can produce a “live snapshot” of various metrics across the unit, Smith said, which helps commanders make faster decisions and gives them a better understanding of how deployable the unit is, including for one of the 82nd’s most important functions — the immediate response force, or IRF.
“There has to be a better way to do this,” said Warrant Officer Charles Davis, the unit’s chief maintenance officer, who built the tools for several data analysis functions. “I discovered Maven, saw the potential in there and no one told me I couldn’t, so I started making tools to see how we could solve some of these needs within the battalion.”
Officials noted these tools have begun to trickle up to the brigade level.
The intel battalion’s use of AI capabilities is an example the Army is trying to emulate across the force. While systems like Maven, which incorporate large language models and machine learning, predate the second Trump administration, they have buoyed what Defense Secretary Pete Hegseth has called a “new era” of “mass AI adoption across” the military in the name of efficiency.
AI academics, tech experts and even some troops have raised concerns about the Pentagon’s aggressive approach to integrating the emerging technology so quickly into an institution as sprawling and powerful as the American military. Many of those concerns involve data security, guardrails and AI platforms’ propensity to “hallucinate” information.
The Defense Department and Anthropic, the company behind Claude, a commercial model used by the military, are currently waging a public feud over the model’s safeguards. Following the release of GenAI.mil, some military officials and troops lamented the department’s lack of training for a technology shaping up to be the most consequential of the 21st century. Data, the fuel for AI models, presents another risk: should security protocols around the new tech fail, something like a unit snapshot could become additional fodder for adversaries.
Achieving a balance between soldier-led experimentation and AI safety is part of the “daily grind” for the Army’s chief information officer, Leonel Garciga, and his staff, he told DefenseScoop in a recent interview.
“You got to build a safe playground that maybe has some rubber mulch, maybe doesn’t have some sharp edges, but it’s still kind of a jungle gym that people can kind of go wild on,” he said. “Definitely need a fence around it and some cameras, because we don’t want any nefarious actors coming in and messing with us.”
“But I’m a big believer in — as close as possible — bring[ing] commercial things in in a safe way, and allow folks to run,” he added.
Garciga said that over the last two years, the Army has been moving away from building “bespoke” systems and instead tapping into the commercial industry for available platforms in an effort to “democratize” AI across the force.
The Army has been using several systems for defensive cyber operations, healthcare and — like the intel battalion — readiness management, for example. While some of these systems have been around for years, two years ago they may have been used only at the corps level and above, Garciga said.
In that time, the Army has fed more data into systems such as Maven and Vantage, a Palantir-based data analytics platform, and Garciga said usage has proliferated widely, down to the platoon level in some cases.
Exposure to the tech has driven much of that, he said, including outreach to individual commands and data personnel at lower levels in the Army. Davis, the warrant officer who built the 319th’s AI tools, for example, said he had never used those systems before last year.
Earlier this month, DefenseScoop reported that the Army had sidelined a soldier-made AI tool known as VECTOR, which touted the ability to pull historic promotion data to help troops prepare for evaluation boards. At the time, service officials said VECTOR was pulled for a “compliance review” and did not have access to the “historic, sensitive data” it claimed to use.
The tool prompted concern from experts over guardrails for the kind of individually owned AI the military has been pushing. Garciga said VECTOR was still an example of the innovation the Army is trying to promote and compared it to the 82nd’s tools.
“VECTOR and the 82nd Airborne Division’s tool are both examples of how soldiers are leveraging enterprise platforms like Army Vantage and Maven Smart System to address operational and administrative challenges,” he later told DefenseScoop in a statement. “VECTOR was briefly taken offline for a compliance review to ensure it met all security and governance requirements – a standard practice for newly developed capabilities.”
“This review reflects the Army’s proactive approach to balancing innovation with accountability,” he added. “The 82nd’s tool, developed within the same secure framework, demonstrates how soldier ingenuity can thrive when supported by enterprise platforms designed to enable experimentation and operational success.”
During the interview, Garciga said that the biggest challenge for his office has been the size of the Army, specifically catching redundant efforts across the largest military branch, which counts more than a million personnel. Another issue has revolved around personnel not understanding the technology and “maybe trying to do things that they shouldn’t have done,” he noted.
“The good thing is that we put so much guardrails in place technologically that we usually catch those, and then it becomes really a training issue more than anything else,” he said. “Some of these are just like hey, somebody tries to do maybe an intelligence-type function in a platform that’s not ready to do that and then we catch that, we go there.”
A week after the December launch of GenAI.mil, the DOD’s hub for commercial large language models, some troops and officials told DefenseScoop they were wary of using the tech given the lack of training or guidance on its use.
Garciga said the Army has provided AI and automation training, including instruction for senior executives and officers at the University of North Carolina, online modules, vendor-provided courses and — what he called “our biggest success … in this space by far” — a “lunch and learn” held every other week, where hundreds of personnel tune in to learn about the Army’s AI use.
He said that as long as the Army can protect its data from adversarial countries and from leaks, it can create a “box” in which its soldiers and civilians can experiment with AI.
“It’s safe to do, we have to trust them — and look, if we’re not unleashing people’s intellectual curiosity, then I don’t think we’re doing our job,” he said. “Sometimes we’ll have some missteps, we’ll make some mistakes, but I think that’s what we fight every day. The big thing is if you get the basics right and the foundation right, you can kind of open that playground … for folks to run.”