Meet VECTOR: An unofficial soldier-made AI tool the Army suspended pending a ‘compliance review’

VECTOR was created on Army Vantage, a Palantir-made platform intended to improve decision-making by meshing data repositories with machine learning.
A member of the Rhode Island Army National Guard reviews a Soldier’s enlisted record brief (ERB) during a centralized promotion board, Feb. 12, 2025, Camp Fogarty, Rhode Island. Conducted without in-person candidates, the board relies heavily on accurate and well-written NCOERs (noncommissioned officer evaluation reports) to fairly evaluate and score each promotion file. (U.S. Army National Guard photo by Sgt. 1st Class Terry Rajsombath)

The message started circulating in some Army circles last week. It promoted an artificial intelligence tool, one that would “revolutionize” how soldiers approach talent management. 

Dubbed VECTOR, the AI application was hosted on an official Army data analytics platform. The message said it could help soldiers write performance evaluations and prepare for promotion boards, the critical assessments that determine whether a soldier moves on to the next rank, by tapping into historical board data.

Within days, however, VECTOR was gone — at least for now.

The program was not officially sanctioned by the Army, but created by an individual non-commissioned officer, a service spokesperson confirmed to DefenseScoop. The service suspended VECTOR while it conducts a “compliance review” of the application, Army spokesperson Cynthia Smith said. 

The soldier made the tool to give others a “better understanding of the personnel process,” she added, but it did not have access to “historic, sensitive data,” contradicting what the message claimed. When asked why the Army took it down, Smith reiterated it was suspended and under review. While she confirmed the existence of the program, she did not speak to the validity of the message’s description of VECTOR or how far it spread across the force.

VECTOR’s so-far brief existence highlights the stakes around where the military’s rapid AI adoption and policy guardrails should intersect, a military innovation analyst told DefenseScoop. While military leaders and tech executives have hyped generative AI as the new battlefield frontier, experts have warned of dire and uncertain societal consequences over privacy, institutional trust, safety and ethics.

The Pentagon wants troops to use large language models in their everyday operations as part of an “acceleration strategy” that — in part — “will unleash experimentation,” Defense Secretary Pete Hegseth has said.

There also needs to be policy to meet that experimentation so that unauthorized, potentially high-consequence AI tools aren’t unintentionally introduced to the force, said Carlton Haelig, a fellow at the Center for a New American Security’s Defense Program.

“There is a top-level — on both the civilian and military leadership side of the department — push for adopting AI in every area that can possibly be applied. And that’s good if they’re doing it in terms of pursuing efficiencies, pursuing capabilities,” he said. “Where I see tension with that is that the rush for speed and tempo in adopting, adapting and developing new AI tools and other types of AI-driven tools is that you don’t get the procedures, the regulations, the review of those tools that you would otherwise get.”

VECTOR was created on Army Vantage, a Palantir-made platform intended to improve decision-making by meshing data repositories with machine learning. Military personnel and department civilians can gain basic access to Vantage with a Common Access Card and a cyber awareness training certificate, and from there create their own applications.

VECTOR was advertised as helping troops draft officer and non-commissioned officer evaluation reports. These assessments, submitted by a soldier’s superiors, help determine a soldier’s performance record and promotion eligibility.

The relatively “low consequence” VECTOR situation signaled “a healthy ecosystem right now,” Haelig said, given that it was geared toward an administrative task. But the scenario invoked a more crucial tension: preventing the use of unauthorized AI tools “where there are higher stakes at play,” like combat, while simultaneously ensuring troops with innovative ideas aren’t discouraged from experimenting with the tech.

“They’re really trying to push individual ownership of the adoption of AI throughout the force, which leads to situations where service members come up with really ingenious and novel applications of AI that are perhaps truly beneficial to whatever activity they’re engaging in,” he said. “But other times, because of the lack of clarity on the regulations and the processes for developing those tools, evaluating them and implementing them, you get lapses wherein these things might get rushed out in a situation like this.”

A week after the breakneck release of GenAI.mil in December, DefenseScoop reported that the Pentagon’s new hub for commercial AI platforms was being met with mixed reviews, in part because there were no clear guidelines governing its use. The platform started with Google Gemini. Models from OpenAI, Anthropic and xAI (the developer of Grok, which drew public outrage after it allowed users to create non-consensual sexual deepfakes) are expected to join GenAI.mil. Last month, the Defense Department released its AI strategy, which reiterated a fast and aggressive approach to adopting the tech.

The VECTOR message, which was also posted on social media, said the application could create a profile for a soldier based on rank, military occupational specialty, position and “board data,” which was not specified. It was “trained” on various talent management regulations, according to the message, such as OER and NCOER report writing policies, as well as professional development guidelines. 

By analyzing historic promotion board data and scoring rubrics, the message said, VECTOR could give soldiers an idea of how they stacked up against their peers before these panels of senior Army leaders, which determine whether a service member moves on to the next rank.

It touted regulation compliance, personally identifiable information protection, session isolation (meaning data can’t be shared with other users), and anonymized, aggregated statistics for board analysis.

Smith, the Army spokesperson, denied that VECTOR had access to historical or sensitive information. 

“If it did not have access to the historical data, as the Army is saying, [then] this probably really was truly a developmental effort, an experimentation … that was not ready for prime time,” Haelig said. “If it did in fact have access to the historical data, that suggests all sorts of problems [like] how did they gain access to that data in order to use it for this tool? If it wasn’t an approved program, that’s the first question that I would ask.”

While VECTOR was suspended, its concept is not so far-fetched. An Army first sergeant who evaluates several non-commissioned officers told DefenseScoop that it “would be naive to think that NCOs and officers are not already using AI to help write their evaluations.”

Justin Lynch, a nonresident fellow with the Atlantic Council, said such a program, if it works well, could save time and might make a difference for raters who may not be skilled writers.

“There’s lots of things that can go wrong, so having a tool that assists writers and levels the playing field a little bit can actually reduce bias in the evaluation process,” he said. But a potential model also requires validation and verification testing to make sure it is meeting the Army’s intent and meeting it securely.

“The security is at least twofold,” he told DefenseScoop. “First being that it doesn’t build any sort of backdoor in other Army systems that have the possibility of creating some sort of cyber compromise, and it’s not revealing any sort of [personally identifiable information] from whatever database it used to build its ability to help write evaluations or to make predictions about the results of an evaluation.”

“You want to have enough policies and guardrails in place to make sure things are being done safely, but no more,” he added, noting that too many restrictions or lengthy development timelines could deter people from making their own AI products.

Talent management is among the Army’s most critical missions, aimed at assessing and retaining quality soldiers in the force. While that effort is data heavy, it is meant to be rooted in human-to-human development.

“We still have to develop our subordinates, and we still have to mentor our NCOs, we still have to be leaders,” the first sergeant said. “This can be a tool, but we can’t allow this tool to replace our responsibility to develop leaders.” 

They said the Army openly publishes promotion board guidance, giving soldiers the “answers to the test,” but an AI-driven tool could help troops cross-reference those documents if used responsibly.

“With any process that is going to incorporate AI, you still have to have a human in the loop,” the first sergeant said. “You still have to check it and make sure and validate it, and that’s why I think that leaders still have a responsibility to develop individuals. We can’t solely outsource developing the next generation of leaders to AI.”

Written by Drew F. Lawrence

Drew F. Lawrence is a Reporter at DefenseScoop, where he covers defense technology, systems, policy and personnel. A graduate of the George Washington University’s School of Media and Public Affairs, he has also been published in Military.com, CNN, The Washington Post, Task & Purpose and The War Horse. In 2022, he was named among the top ten military veteran journalists, and has earned awards in podcasting and national defense reporting. Originally from Massachusetts, he is a proud New England sports fan and an Army veteran.