Most of the challenges hindering the Defense Department’s adoption of artificial intelligence and automation stem from the unique nature of how the department operates — as opposed to the current, emerging state of the technology — according to comprehensive new research from the Center for Strategic and International Studies.
The new study — “Six Questions Every DOD AI and Autonomy Program Manager Needs to Be Prepared to Answer” — takes aim at internal bureaucratic DOD functions, like the authority to operate (ATO) process, that stifle rapid AI innovation in the U.S. military, and poses a series of questions that anyone involved in adopting the emerging technology for defense should ask themselves.
Released Tuesday and authored by Greg Allen — a former AI policy lead for DOD who is now director of CSIS’s brand-new Wadhwani Center for AI and Advanced Technologies — the study is the first of two papers Allen and CSIS plan to publish on the Pentagon’s complex, evolving and sometimes secretive efforts to deploy AI.
“I’ve been working on this since December. But, I mean, it’s informed by my three years at the Department of Defense,” Allen told DefenseScoop in an interview on Tuesday.
Allen served in multiple positions in the Pentagon from 2019 until his departure in April 2022. “When I left, I was the Director of Strategy and Policy at the JAIC,” or Joint Artificial Intelligence Center, he noted. That center was one of four DOD components that were realigned and rebranded into the Chief Digital and AI Office, which was fully operational by late 2022.
“Honestly, my real mental model for a reader was somebody having their first day at the JAIC. What would be every topic that you wanted to prepare that person to wrestle with, as they embark upon some kind of AI capability development or other effort in the DOD? And now, as the military services, and the combatant commands, and just the whole DOD enterprise is becoming fixated upon AI modernization, this is really designed to be a document that is useful to them and doing their jobs,” Allen explained.
Through his months-long study, Allen ultimately identified six questions DOD AI officials should be prepared to answer:
- Mission — What problem are you trying to solve, and why is AI the right solution?
- Data — How are you going to get enough of the right kind of data to develop and operate your AI system?
- Computing Infrastructure and Network Access — How will you get the AI system approved to reside on and interact with all of the DOD networks required for its development and use?
- Technical Talent — How are you going to attract enough of the right kind of AI talent and put that talent to good use?
- End-User Feedback — How are you going to ensure that the operational user community is a frequent source of feedback and insight during the development process?
- Budgeting and Transition — Are the diverse DOD development, procurement, and operational organizations involved with your AI capability adequately budgeting for their involvement in the program?
To inform his 33-page paper and these overarching questions, Allen interviewed dozens of experts involved in driving AI and autonomous technologies and policies, including current and former DOD officials, representatives of allied nations, and private-sector leaders. Among those many discussions were a number with “technology companies and individuals who have been supporting Ukraine” in the conflict unfolding since Russia’s invasion, he confirmed.
“I would say the first thing that really surprised me was just how rapidly the Ukrainian military has been able to apply AI — I mean there are cases where an AI model went from an idea in someone’s head to an AI-enabled military capability that warfighters are using and loving in a real war in a matter of weeks. From the perspective of the DOD, I knew that it’s really hard to do something that fast. And if the Ukrainians are doing it, that means that so much of what makes it hard to move that fast is not about AI. It’s about the DOD,” Allen explained.
Parts of his paper present and interrogate cases where Ukraine’s military has effectively unleashed advanced AI capabilities in combat incredibly rapidly compared to the Pentagon’s pace.
“So, I think one of the answers to the question of, ‘Why is it possible for Ukraine to move so much faster [than DOD]?,’ is because they don’t face these [Authority to Operate, or ATO] regulations,” Allen said.
Every software system that processes data and operates on the Department of Defense Information Network (DODIN) must first obtain an official ATO from a certified government authorizing official. In his paper, Allen calls ATOs “a major barrier to accelerating AI adoption” within the department.
“‘These challenges with ATOs are eating us alive,’” one senior DOD official told him in his research.
The regulations were created to address cybersecurity concerns surrounding the department’s software applications.
“It’s not that the ATO process was designed for no good reason. It’s just that it has all of these unintended consequences of slowing down DOD AI transformation and really just software development in general,” Allen told DefenseScoop.
“How so many of the successes of DOD AI look differently when you view them through the lens of ATO challenges, right? A lot of the success of Task Force 59, for example, is being undertaken under a data-as-a-service model, or a contractor-owned contractor-operated model,” he continued.
One of the benefits of that approach is that all the data that the system generates is unclassified and can therefore live on commercial networks — and move at the speed of commercial development. A primary challenge, however, is that all of that development is taking place without access to classified data sources, or any kind of consideration of what it would mean to integrate these systems into classified networks that have certain unique features.
“So what that means — and I don’t mean to diminish Task Force 59’s achievements at all because they are really significant, as I sort of detail in the paper — but it does illustrate that they will hit sort of a plateau in terms of, or a ceiling rather, in terms of what types of use cases they can go after, and how impactful their systems will be — because it’s going to be really tough to hook all of that good stuff up to DOD networks,” Allen said.
Among the other challenges he highlights, Allen pointed to several involving end-user feedback in AI-enabled combat. In his paper, he cites an example of “a really intimate relationship between the warfighter and AI developer” defending Ukraine in the ongoing war.
“Iterative development with the end-user is this sort of critical aspect of success — really in all software development, but especially in anything related to AI — and my point is that the DOD macro-organizational structure, requirements, process, budget process, all of it just makes that so much harder to do,” Allen said.
Over the next few months, Allen is preparing the next paper in this CSIS series. In it, he plans to review and outline lessons learned from DOD AI and autonomy efforts over the past six years — and develop recommendations for policymakers and department leaders on mitigating some of the barriers laid out in this initial study.
He aims to explore high-profile pursuits like Project Maven and organizations like the CDAO to ultimately determine some operational and organizational constructs that could be introduced to “break this cycle.”
“Even the most successful AI initiatives at DOD have opportunities for improvement — and so it’s really trying to survey the entire landscape and make recommendations in that regard,” Allen said.