New interim DOD guidance ‘delves into the risks’ of generative AI

Further detailed guidance is forthcoming from DOD’s Task Force Lima, a spokesperson confirmed.

Pentagon leadership recently issued new interim guidance to steer all U.S. defense and military components’ ongoing and forthcoming adoption of emerging and disruptive generative artificial intelligence (Gen AI) technologies, sources told DefenseScoop.

In August, senior defense officials launched Task Force Lima within the Chief Digital and AI Office’s (CDAO) Algorithmic Warfare Directorate to quickly explore the potential ramifications and promising use cases for safely integrating generative AI into Defense Department activities. 

Broadly, these still-maturing capabilities and associated large language models (LLMs) generate (convincing but sometimes inaccurate) software code, images and other media from human prompts. These systems get more “intelligent” each time they’re trained with more data — and, with so many uncertainties around how they function or may evolve, they pose both unrealized opportunities and enormous risks for the U.S. military.

Among its many responsibilities, Task Force Lima is charged with informing the CDAO’s path to governance and policy-making around the department’s generative AI pursuits. This new interim guidance marks the non-permanent task force’s first output in an iterative process to fulfill that task. 

Although the standards are for DOD internal use only and will not be disseminated publicly due to certain sensitivities, a CDAO spokesperson briefed DefenseScoop this week on some of the document’s key elements upon its first release. 

“As Gen AI becomes more accessible, the department emphasizes its potential to advance the DOD mission, highlighting its capability to enhance efficiency and productivity. However, the guidance also underscores the associated risks and the role of users, senior leaders, and commanders for responsible use,” the official explained via email. 

Under explicit direction from CDAO chief Craig Martell, the interim guidance adheres to DOD’s AI Ethical Principles and previous memorandums distributed to personnel. 

“This guidance does not override existing legal, cybersecurity, or operational policies but reinforces DOD personnel’s accountability in the acquisition and responsible use of Gen AI,” the spokesperson confirmed.

The CDAO official further spotlighted four notable points within the guidance. They are, in the spokesperson’s words:

  1. Risk Assessment & Mitigation: Rather than enforcing outright bans on Gen AI tools, the DOD urges its components to adopt robust governance processes. This includes documenting the risks associated with specific Gen AI use cases, deciding and justifying the acceptable risks, and planning to mitigate unacceptable risks. 
  2. Input Restrictions: Publicly available Gen AI tools should be approached with caution. Entering Classified National Security Information or Controlled Unclassified Information, such as personal or health data, is prohibited. All data, code, text, or media must be approved for public release before being used as input. 
  3. Accountability: All DOD personnel are accountable for outcomes and decisions made with Gen AI’s assistance. Users are advised to verify and cross-check all outputs from such tools. 
  4. Citation: For transparency, appropriate labeling is encouraged for documents created with the aid of Gen AI tools. 

The document also “delves into the risks” associated with deploying large language models and associated technologies, the spokesperson noted.  

“These models, based on enormous amounts of data, might perpetuate inaccuracies or biases inherent in the inputs. There is also the potential risk of copyright violations, inadvertent disclosure of government information, and heightened cybersecurity vulnerabilities,” they said. 

During a recent interview with DefenseScoop, Task Force Lima Mission Commander Navy Capt. M. Xavier Lugo reflected on some of those existing and unknown risks and security concerns that accompany the current state of AI — and he emphasized how DOD insiders must be particularly cautious and prudent regarding what kind of information is submitted to prompt publicly accessible models like ChatGPT and its competitors. 

“We — as in the world — are still trying to learn how vulnerable these models are for reverse engineering, right? And there’s strengths and weaknesses based on prompt engineering, on how models can be interrogated to acquire information that’s either in the model from their foundational data, or in the model from actual other prompts that have been fed into that,” he said.

“Now with that said, we — as in the task force — still see utility with publicly accessible models in the generation of like, for example, the first drafts of documents or other types of situations that are not specific to operational pieces of the DOD. Also, there are experiments happening with some of these models that are publicly accessible in order to learn more about them — so, that’s why we don’t have to necessarily restrict the access to them. What we do is reiterate the OpSec pieces and the CUI [controlled unclassified information] pieces of law that we already have in policy on how we share information in the open,” Lugo explained.

The military services are encouraged to publish their own more restrictive guidance that goes beyond the interim document.

“Further detailed guidance is forthcoming from Task Force Lima,” the CDAO spokesperson told DefenseScoop.

Written by Brandi Vincent

Brandi Vincent is DefenseScoop’s Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.