U.S. Central Command’s leadership is moving deliberately to get end users of new artificial intelligence capabilities involved as early as possible in the development of those emerging tools, according to Centcom’s Chief Technology Officer Schuyler Moore.
“The earlier that you can get the actual human in close to a realistic environment, the better you will be equipped to speak with authenticity and in quantitative terms about how AI is being integrated in a responsible and ethical way,” she said Tuesday at Intel’s public sector summit.
Broadly, Central Command — which is responsible for U.S. military operations in the Middle East — is considered one of the Defense Department’s early AI adopters. The command has deployed AI-enabled tools for computer vision, pattern detection and decision support across intelligence, reconnaissance and other missions.
The Pentagon has released guidance and resources to inform what it deems “responsible AI” use across all components, with an eye toward fielding more of these types of tools in the future.
“We set a lot of expectations of — ‘for responsible and ethical AI, you’ll perform in XYZ ways.’ But the only way that you will be able to measure whether or not a model or capability performs in XYZ ways is if you put it in the hand of the person who is supposed to actually use it. We struggle sometimes to anticipate the way that real users interact with technologies,” Moore said.
Her team has learned that the way military users interact with new capabilities during testing phases “matters significantly,” she added.
Sometimes in the software space, developers try to mitigate risks identified in new products by taking away users’ access while they work to resolve those problems.
But in Moore’s view, “assumed risk can actually be reduced by introducing the user in as realistic of an environment as possible into an earlier stage of development.” Not doing so, she said, could result in advanced models that fail to account for how users actually engage with the technology, or for the network capacity and data they have available in the field.
“So, you actually introduce more risk by removing the user and saying, ‘We’re going to fix everything before we get it to you guys.’ And so, as a combatant command sometimes we try and make sure that we’re making that point clear that this older model perhaps for hardware, of being developed in a silo and it goes into these test labs before you throw over the wall to the user — it’s not necessarily appropriate for software, AI or otherwise. The earlier you can inject realism, both in terms of your user testing and in terms of the environment, will reduce your risk,” Moore said.
“And so we are trying to beat the drum on that. As a combatant command, we are raising our hand to say, ‘We will be the integration testbed for the department. You can push things out to us and we will have realistic data, we will have realistic users and realistic environments that will improve the quality and actually de-risk for the entire department what might otherwise remain more of a wild product until it actually gets transferred to users,’” she explained.
FedScoop reporter Madison Alder contributed to this article.