IC preparing its own tailor-made artificial intelligence policy 

The effort by the Office of the Director of National Intelligence is building on its recently implemented AI ethics framework and other existing standards.
The Liberty Crossing Building, home to the Office of the Director of National Intelligence, in McLean, Virginia, April 24, 2015. (JIM WATSON/AFP via Getty Images)

Experts in the Office of the Director of National Intelligence are producing a sweeping new AI governance policy that is deliberately tailored to all members of the intelligence community.

“The intent is always to make sure that what we do is transparent,” Michaela Mesquite, the acting chief for ODNI’s nascent Augmenting Intelligence using Machines (AIM) group, told DefenseScoop. 

She said the organization essentially leads and handles oversight for all AI capabilities across the entire enterprise. A longtime federal official and analyst, Mesquite played a leading role in ODNI’s recent development of its AI ethics framework.

During a panel discussion about operationalizing AI ethics in the military and intelligence domains at the Advantage Defense and Data Symposium on Thursday, she hinted at some of the next steps her team is pursuing when it comes to ensuring the responsible use of machine learning and other emerging technologies across the IC.


“AIM is focused very much on this governance piece. Knowing that we’ve had the ethical principles and an AI ethics framework for a while — there’s a lot more policy coming, and a new strategy coming, and governance structures to be stood up. So, we are busy,” Mesquite said.  

Although she didn’t go into much detail about those unfolding standards and policy-making endeavors during the panel, Mesquite did note that the overarching intention is to ensure that everyone in the IC — not just technology developers, but acquisition experts and all other end users — has a strong grasp of which AI capabilities are appropriate and useful for their jobs, and which are not.

“How do we make sure we are looking at the breadth of the policy to make sure our entire organization is mature enough to think about, fully, what is an effective use and appropriate use, and therefore already embedded is the ethical use — because if it’s effective, if it’s appropriate, it’s already going to be ethical. So how do we do that? We get to make our own IC AI policy for this,” she said.

In a sideline conversation after her panel, Mesquite briefed DefenseScoop on the ongoing process and what this work really looks like on the ground.

“[ODNI has] a policy team. It’s their job to write these policies. And because this is such a touchy [topic] — these are big deals, right — they do it in their own sort of ‘black box’ and there are process protections around it. So, we [the AIM group] bring the expertise and ideas to inform them,” she explained.


“There are a very limited number of policy instruments. Of those policy instruments, there’s directives, guidance, standards and memorandums. So, first, we have to have the directive part — then everything hangs off of that,” Mesquite told DefenseScoop.

She declined to provide a time frame for when an initial AI policy directive for the intelligence community will be completed.

Written by Brandi Vincent

Brandi Vincent is DefenseScoop's Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.