Pentagon kicks off public bounty for biased chatbots

The Defense Department is exploring the risks of generative AI capabilities.
AI chatbot concept rendering (Getty Images)

The Department of Defense has started the first round of a crowdsourcing effort to find “bias” in large language models, as the Pentagon explores the upsides and risks of generative artificial intelligence capabilities.

The DOD has been using so-called bug bounty programs, such as Hack the Pentagon, to discover cyber vulnerabilities. Now, it aims to apply a similar concept to look for flaws in artificial intelligence tools.

The AI bias bounty, which is being sponsored by the Chief Digital and AI Office (CDAO) and carried out in partnership with ConductorAI, Bugcrowd and BiasBounty.AI, will run from Jan. 29-Feb. 27, the Pentagon announced Monday.

Members of the public can register to participate and potentially earn financial rewards for their findings.


“The goal of the first bounty exercise is specifically to identify unknown areas of risk in Large Language Models (LLMs), beginning with open source chatbots, so this work can support the thoughtful mitigation and control of such risks. This exercise encourages public involvement (no coding experience necessary) to detect bias, and participants can earn monetary bounties based on scoring and evaluation by ConductorAI-Bugcrowd, funded by the DoD,” according to a release.

A total pot of $24,000 will be distributed, according to the event website, which lists a later end date for the contest of March 11.

In the context of artificial intelligence, the Pentagon has defined bias as “the systematic errors that an AI system generates due to incorrect assumptions of various types.”

The bounty that was just launched is part of a broader Pentagon push focused on what the department calls “responsible AI,” including safeguards to mitigate the risk that algorithm-armed tools and weapons will go off the rails.

Last summer, the department issued a call for discovery papers as it looked for vendors to help establish and run the AI bias bounties.


The DOD plans to hold a second bounty soon, according to the release.

In a statement, Chief Digital and Artificial Intelligence Officer Craig Martell said the outcome of the bounties could “powerfully impact” the Pentagon’s AI policies and technology adoption.

The department has also stood up Task Force Lima to explore potential military use cases for generative AI.


Written by Jon Harper

Jon Harper is Managing Editor of DefenseScoop, the Scoop News Group’s online publication focused on the Pentagon and its pursuit of new capabilities. He leads an award-winning team of journalists in providing breaking news and in-depth analysis on military technology and the ways in which it is shaping how the Defense Department operates and modernizes. You can also follow him on X (the social media platform formerly known as Twitter) @Jon_Harper_
