
DOD issues open call to industry seeking new tools for evaluating warfighter trust in AI-enabled systems

Officials recognize the need for troops to have confidence that the technology will work as advertised on the battlefield.

Photo caption: The Variable In-flight Simulator Aircraft (VISTA) flies in the skies over Edwards Air Force Base, California, shortly after receiving its new paint scheme in early 2019. The aircraft was redesignated from NF-16D to the X-62A, June 14, 2021. F-16 AI agents developed under DARPA’s Air Combat Evolution (ACE) program controlled the X-62A during test flights over Edwards AFB, California, in December 2022. (Air Force photo by Christian Turner)

The Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) is asking industry for solutions that would help the Department of Defense assess end users’ trust in artificial intelligence-enabled systems.

The open call, posted on the office’s Tradewind marketplace, comes as the U.S. military is making a push to develop and field “responsible AI” capabilities and autonomous platforms and weapons. Officials also recognize the need for warfighters to have confidence that the technology will work as advertised on the battlefield, or else they won’t be inclined to use it to its full potential.

The DOD “needs a comprehensive way to measure user trust in AI-enabled systems — including how trust may break down into various dimensions and its relationship to other concepts or constructs. For instance, dimensions of trust might include the extent to which an individual has confidence in the system’s accuracy vs. the extent to which an individual trusts the system because they hope that it will work; under vs. over-reliance on the system; etc. Related concepts or constructs include whether a user’s degree (and kind) of understanding of how the system functions contributes to their trust in the system; whether distrust is a separate construct from trust (vs. whether it is on the same scale); etc.,” the Tradewind announcement states.

“The assessment must enable a rich understanding (and prediction) of the descriptive psychological and behavioral states of users interacting with the system. The test must also provide sufficient data to support normative evaluations about the level of trust in the system, such as whether the trust in the system is justified given the evidence, or whether users over-trust the system in a way that impedes meaningful degrees of human judgment and control,” it notes.
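The announcement does not prescribe a method, but the over- and under-reliance it describes can be illustrated with a minimal sketch: assuming interaction logs record whether the system’s recommendation was correct and whether the user followed it, over-reliance is the rate at which users follow incorrect recommendations, and under-reliance is the rate at which they reject correct ones. The `Trial` schema and `reliance_rates` helper below are hypothetical illustrations, not anything specified in the solicitation.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    """One logged user-system interaction (hypothetical schema)."""
    system_correct: bool   # did the AI recommendation match ground truth?
    user_followed: bool    # did the user act on the recommendation?

def reliance_rates(trials: list[Trial]) -> dict[str, float]:
    """Estimate over- and under-reliance from logged interactions.

    over_reliance:  fraction of incorrect recommendations the user followed
    under_reliance: fraction of correct recommendations the user rejected
    """
    wrong = [t for t in trials if not t.system_correct]
    right = [t for t in trials if t.system_correct]
    return {
        "over_reliance": sum(t.user_followed for t in wrong) / len(wrong) if wrong else 0.0,
        "under_reliance": sum(not t.user_followed for t in right) / len(right) if right else 0.0,
    }

# A user who follows every recommendation, right or wrong, shows maximal
# over-reliance and no under-reliance -- trust that is not justified by evidence.
trials = [Trial(True, True), Trial(False, True), Trial(True, True), Trial(False, True)]
print(reliance_rates(trials))  # {'over_reliance': 1.0, 'under_reliance': 0.0}
```

A real assessment of the kind the announcement describes would combine behavioral signals like these with survey-based measures to separate trust dimensions such as confidence in accuracy versus hope that the system works.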


The CDAO program aims to develop metrics and tests for assessing and evaluating user trust in AI-enabled systems; enable the continuous monitoring of operational users’ trust; and facilitate assessment of trust “among various dimensions and its relationship to relevant constructs and concepts.”
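The continuous-monitoring goal implies tracking trust-related signals over time rather than measuring once. As one illustrative proxy, and not a method the CDAO specifies, the sketch below smooths how often an operator accepts system outputs with an exponentially weighted moving average; the `TrustMonitor` class and its smoothing parameter are hypothetical.

```python
class TrustMonitor:
    """Hypothetical continuous monitor: tracks a rolling estimate of how often
    an operator accepts the system's outputs, one behavioral proxy among the
    many a full trust assessment would need to combine."""

    def __init__(self, alpha: float = 0.1):
        self.alpha = alpha       # smoothing factor: higher weights recent behavior more
        self.acceptance = None   # current smoothed acceptance rate

    def record(self, accepted: bool) -> float:
        """Fold one accept/reject decision into the running estimate."""
        x = 1.0 if accepted else 0.0
        self.acceptance = x if self.acceptance is None else (
            self.alpha * x + (1 - self.alpha) * self.acceptance)
        return self.acceptance

monitor = TrustMonitor()
for accepted in [True, True, False, True, False, False]:
    level = monitor.record(accepted)
print(f"smoothed acceptance rate: {level:.2f}")  # ~0.74 for this sequence
```

A declining acceptance rate might flag eroding trust, while an acceptance rate that stays high when the system is performing poorly could indicate the kind of over-trust the announcement says impedes meaningful human judgment and control.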

Interested organizations are invited to submit a two-page “discovery paper” outlining the value proposition, operational impact and end-user demand for their proposed solutions.

After papers are evaluated, vendors may be invited to a “digital proving ground” where they can pitch their innovations to contracting officers who will be looking to make rapid pilot project awards.

The DOD may decide to award other transaction agreements for prototypes to a single vendor or multiple vendors, with the potential for follow-on production awards, according to the announcement.

The end date for the open call is July 3.
