Pentagon developing repository to document when AI goes wrong


The Department of Defense is in the process of creating a new “incident repository” that will catalog problems that Pentagon officials encounter with artificial intelligence.

The development effort was mentioned in a new “tools list” that was released by the department this week as part of a broader AI “toolkit” that it unveiled on its Tradewind website.

“The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves the alignment of AI projects toward RAI best practices and the DoD AI Ethical Principles while capitalizing on opportunities for innovation. The RAI Toolkit provides an intuitive flow guiding the user through tailorable and modular assessments, tools, and artifacts throughout the AI product lifecycle. Using the Toolkit enables people to incorporate traceability and assurance concepts throughout their development cycle,” according to an executive summary.

Among the roughly 70 items on the list is an AI Incident Repository that is not yet accessible to department personnel because it's still being developed. Once it's up and running, it will feature a "collection of AI incidents and failures for review and to improve future development" of the technology, according to the Pentagon.


The new toolkit already includes links to an open-source database that is “dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes,” according to the website. Examples of such incidents include autonomous vehicles hitting pedestrians and faulty facial recognition systems, among others.

Other aids that are in the works as part of the Pentagon’s responsible AI implementation effort include an executive dashboard laying out project goals; incident response guidance including an interactive web application for end-user auditing; an acquisition guide for potential buyers of artificial intelligence products; and a senior leadership guide for reviewing program managers overseeing AI projects.

There’s also a “use case repository” in development that will include a rundown of artificial intelligence use cases, and a tool to help organizations define and establish roles and responsibilities for AI projects. The Pentagon recently established Task Force Lima to look at a slew of potential use cases for generative AI.

A “human bias red-teaming toolkit” and a bias bounty guidebook are also expected to be released.

In July, the Pentagon’s Chief Digital and AI Office (CDAO) issued a call for “discovery papers” in its search for vendors to set up a new bounty program.


“The DoD is interested in supporting grassroots/crowdsourced red-teaming efforts to ensure that their AI-enabled systems — and the contexts in which they run — are safe, secure, reliable, and equitable. Bias — the systematic errors that an AI system generates due to incorrect assumptions of various types — is a threat to achieving this outcome. Therefore, as part of this priority, the current call seeks industry partners to help develop and run an AI bias bounty program to algorithmically audit models, facilitate experimentation with addressing identified risks, and ensure the systems are equitable given their particular deployment context,” according to a notice posted on the CDAO’s Tradewind website.

The Pentagon expects vendors to have equipped the department and its components with the tools needed to organize and run their own bias bounty programs in the future, according to the notice.

Meanwhile, the CDAO this week also launched a new digital on-demand learning platform to boost the AI know-how of the Defense Department’s military and civilian personnel by providing them access to MIT’s Horizon library, which will offer “bite-sized learning assets” related to artificial intelligence, the Internet of Things, 5G, edge computing, cybersecurity and big data analytics, according to a release.

The capability — which will be provided through the Air and Space Forces’ Digital University — is intended to “foster a baseline understanding of AI systems and other emerging technologies,” CDAO chief Craig Martell said in a statement. “This resource demonstrates to the DoD workforce how they fit into the future of these advancements and further enables their adoption throughout the Department.”

In a statement, Kathleen Kennedy, senior director of MIT Horizon and executive director of the MIT Center for Collective Intelligence, said: “The DoD is on a historical journey of building a digital workforce. When it comes to AI and emerging technologies, it is really important that their employees are all speaking the same language.”


To use the library, DOD personnel should create an account via the website using their .mil email address, and search for “MIT Horizon,” according to the release.

Written by Jon Harper

Jon Harper is Managing Editor of DefenseScoop, the Scoop News Group’s online publication focused on the Pentagon and its pursuit of new capabilities. He leads an award-winning team of journalists in providing breaking news and in-depth analysis on military technology and the ways in which it is shaping how the Defense Department operates and modernizes. You can also follow him on X (the social media platform formerly known as Twitter) @Jon_Harper_
