Army strategizes to promote ‘responsible AI’ adoption
U.S. Army officials are crafting and refining a broad but adaptable new plan to help ensure all artificial intelligence capabilities across the service are responsibly adopted and deployed now and in the future.
“We’re developing what we’re calling the ‘Army’s Responsible AI Strategy,’” Dr. David Barnes, the chief AI ethics officer at the Army AI Integration Center (AI2C), said at the Advantage Defense and Data Symposium.
“The strategy isn’t the end in itself. The idea is to set the conditions for the next generation of the Army’s AI strategy, to ensure that the principles are captured in this and into the work that we do — both on the ‘responsible build’ side, but also on [the] ‘responsible use’ side,” he explained.
Barnes, who also serves as the deputy head of English and Philosophy at West Point, is one of the military’s top experts on legal, ethical, and policy concerns relating to AI-enabled systems.
He has played a leading role in the Defense Department’s ‘responsible AI’ journey.
The Army first launched its own far-ranging AI adoption strategy in 2020. More recently, though, Barnes noted, his team recognized the need to more explicitly articulate an iterative approach for how the Army can lead in AI ethics and safety, one that deliberately incorporates new practices encouraged in the Defense Department’s latest guides and resources. That work will produce the new responsible AI strategy.
“We have four major lines of effort. The first one, in no particular order, is on workforce development. Unsurprising, right? Building the expertise within the Army that has an understanding of responsible AI. Also, what’s AI for every soldier — what does everyone in the Army, from the youngest private up to the secretary, need to know about artificial intelligence relative to her position?” Barnes said.
The second line of effort focuses on fostering more productive collaboration between the Army and partners across government, academia and industry.
“The third area is governance. Obviously, governance is a big concern. From our perspective, it’s about scaling up across the Army — it’s ideas like the potential of a Responsible AI Board, and where might that sit in the current Army leadership structure?” Barnes said.
The last line of effort, which Barnes said is also a major focus area for his team, involves taking existing federal principles and guidelines and producing innovative metrics to gauge Army AI use cases and develop better risk assessments.
“It probably won’t ever be published as its own strategy. But the idea is how do we pull all these different elements together, and present it back? Because, right now, the DOD takes a somewhat narrow focus on what responsible AI is. And it becomes an afterthought, like so many other things, and it’s not as interwoven,” Barnes told DefenseScoop on the sidelines of the symposium after his panel.
“We want the next generation and future versions” of the Army’s processes and policies “to just have [responsible AI] built in as part of it all,” he added.