Centcom aims to be Pentagon’s AI ‘integration testbed’

"We struggle sometimes to anticipate the way that real users interact with technologies,” U.S. Central Command CTO Schuyler Moore said.
A Feb. 6, 2017 photo shows the headquarters of the U.S. Central Command at MacDill Air Force Base in Tampa, Florida. (MANDEL NGAN/AFP via Getty Images)

U.S. Central Command’s leadership is moving deliberately to get end users of new artificial intelligence capabilities involved as early as possible in the development of those emerging tools, according to Centcom’s Chief Technology Officer Schuyler Moore.

“The earlier that you can get the actual human in close to a realistic environment, the better you will be equipped to speak with authenticity and in quantitative terms about how AI is being integrated in a responsible and ethical way,” she said Tuesday at Intel’s public sector summit.

Broadly, Central Command, which is responsible for U.S. military operations in the Middle East, is considered one of the Defense Department’s early AI adopters. The command has deployed AI capabilities for computer vision, pattern detection and decision support across intelligence, reconnaissance and other missions.

The Pentagon has produced and released guidance and resources to inform what it has deemed “responsible AI” use across all components, with an eye toward deploying more of these types of tools in the future.

“We set a lot of expectations of — ‘for responsible and ethical AI, you’ll perform in XYZ ways.’ But the only way that you will be able to measure whether or not a model or capability performs in XYZ ways is if you put it in the hand of the person who is supposed to actually use it. We struggle sometimes to anticipate the way that real users interact with technologies,” Moore said.

Her team has learned that the way military users interact with new capabilities during testing phases “matters significantly,” she added.

Sometimes in the software space, developers try to mitigate risks identified in new products by taking access away from users while they work to fix those problems.

But in Moore’s view, “assumed risk can actually be reduced by introducing the user in as realistic of an environment as possible into an earlier stage of development.” Not doing so, she said, could result in building advanced models that fail to account for how users engage with the technology, or for the network capacity and other data they actually have available.

“So, you actually introduce more risk by removing the user and saying, ‘We’re going to fix everything before we get it to you guys.’ And so, as a combatant command sometimes we try and make sure that we’re making that point clear that this older model perhaps for hardware, of being developed in a silo and it goes into these test labs before you throw over the wall to the user — it’s not necessarily appropriate for software, AI or otherwise. The earlier you can inject realism, both in terms of your user testing and in terms of the environment, will reduce your risk,” Moore said.

“And so we are trying to beat the drum on that. As a combatant command, we are raising our hand to say, ‘We will be the integration testbed for the department. You can push things out to us and we will have realistic data, we will have realistic users and realistic environments that will improve the quality and actually de-risk for the entire department what might otherwise remain more of a wild product until it actually gets transferred to users,’” she explained.

FedScoop reporter Madison Alder contributed to this article.

Written by Brandi Vincent

Brandi Vincent is DefenseScoop's Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.
