Army strategizes to promote ‘responsible AI’ adoption

DefenseScoop has new details on an in-development strategy that's being crafted to iteratively guide the service's AI pursuits.

U.S. Army officials are crafting and refining a broad but adaptable new plan to help ensure all artificial intelligence capabilities across the service are responsibly adopted and deployed now and in the future.

“We’re developing what we’re calling the ‘Army’s Responsible AI Strategy,’” Dr. David Barnes, the chief AI ethics officer at the Army AI Integration Center (AI2C), said at the Advantage Defense and Data Symposium.

“The strategy isn’t the end in itself. The idea is to set the conditions for the next generation of the Army’s AI strategy, to ensure that the principles are captured in this and into the work that we do — both on the ‘responsible build’ side, but also on [the] ‘responsible use’ side,” he explained. 

Barnes, who also serves as the deputy head of English and Philosophy at West Point, is one of the military’s top experts on legal, ethical, and policy concerns relating to AI-enabled systems. 


He has played a leading role in the Defense Department’s ‘responsible AI’ journey.

The Army first launched its own far-ranging AI adoption strategy in 2020. More recently, though, Barnes noted that his team recognized the need to articulate a more explicit, iterative approach for how the Army can lead in AI ethics and safety, one that deliberately incorporates practices encouraged in the Defense Department’s newer guides and resources. That work is producing the new responsible AI strategy.

“We have four major lines of effort. The first one, in no particular order, is on workforce development. Unsurprising, right? Building the expertise within the Army that has an understanding of responsible AI. Also, what’s AI for every soldier — and what does everyone in the Army, from the youngest private up to the secretary, need to know about artificial intelligence relative to her position,” Barnes said.

The second line of effort focuses on helping the Army pursue more productive collaboration across government, academia and industry.

“The third area is governance. Obviously, governance is a big concern. From our perspective, it’s about scaling up across the Army — it’s ideas like the potential of a Responsible AI Board, and where might that sit in the current Army leadership structure?” Barnes said.


He concluded that the last line of effort, which he noted is also a major focus area for his team, is to take existing federal principles and guidelines and produce innovative metrics to gauge Army AI use cases and develop better risk assessments.

“It probably won’t ever be published as its own strategy. But the idea is how do we pull all these different elements together, and present it back? Because, right now, the DOD takes a somewhat narrow focus on what responsible AI is. And it becomes an afterthought, like so many other things, and it’s not as interwoven,” Barnes told DefenseScoop on the sidelines of the symposium after his panel.

“We want the next generation and future versions” of the Army’s processes and policies “to just have [responsible AI] built in as part of it all,” he added.

Written by Brandi Vincent

Brandi Vincent is DefenseScoop's Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.