
Inside the DOD’s trusted AI and autonomy tech review that brought together hundreds of experts

More than 200 attendees from government, industry and academia participated in a three-day conference hosted by the Office of the Under Secretary of Defense for Research and Engineering.

More than 200 attendees — representing the government, military, and approximately 60 companies, universities and federally funded research centers — participated in a three-day conference June 20-22 that the Office of the Under Secretary of Defense for Research and Engineering organized and hosted to deliberate on key advancements and issues in the fields of artificial intelligence and autonomy within the U.S. defense sector. 

“The way that the days unfolded was that industry and academia heard our problems, about where we needed help and what technical gaps we needed closed, and then we heard from industry and academia about their research as applied to those gaps. Then, we took down actions on how to move forward to address those gaps,” a senior Defense Department official who helped lead the conference told DefenseScoop on the condition of anonymity this week. 

Broadly, AI-enabled and autonomous platforms and computer software can function, within constraints, to complete actions or solve problems that typically require human intelligence, with little to no supervision from people. Certain Defense Department components have been developing and deploying AI for years, but such assets have not yet been scaled enterprise- or military-wide, and some associated guidance is still lacking.

And as they rapidly grow in sophistication, emerging large language models and related generative AI capabilities are also presenting new potential for both help and harm across Pentagon components.


This week, DefenseScoop obtained an official summary of DOD’s recent conference, dubbed the Trusted AI and Autonomy (TAIA) Defense Technology Review (DTR), which was convened to address some of those threats and possibilities directly with experts across sectors. The document was written by R&E leadership but has not been publicly released.

The event provided a platform for the government to communicate its objectives and challenges in AI and autonomy, and for “experts to engage in in-depth discussions on specific areas of concern and aspiration within” those emerging technology realms, it states.

Among the prominent industry organizations present were Amazon, IBM, NVIDIA, Boeing, BAE, Boston Dynamics, Dynetics, Applied Intuition, Skydio and TwoSix.

On the first day of the event, which was held at a MITRE facility with support from that firm’s CTO and team, DOD Chief Technology Officer and Under Secretary of Defense for Research and Engineering Heidi Shyu delivered the keynote address, spotlighting critical technologies and investment areas for trusted AI and autonomy, including an initiative to stand up new strategic AI hubs.

Other notable speakers who gave presentations over the course of the three days included Lt. Gen. Dagvin Anderson (Joint Staff, J7), Chief Digital and AI Office CTO William Streilein, DARPA’s John Kamp and representatives from Indo-Pacific Command, Central Command, European Command and the military services.


After Shyu’s keynote, DOD’s Principal Director for Trusted AI and Autonomy Kimberly Sablon presented her team’s strategic vision, “with a focus on cognitive autonomy development within a system of systems framework,” according to the summary. Sablon stressed the significance of continuous adversarial testing and red-teaming to ensure resilient operations and machine learning operations (MLOps) pipelines. She also announced two new AI initiatives.

One of those initiatives encompasses a new “community of action that integrates mission engineering, systems engineering and research via integrated product teams and with emphasis on rapid experimentation with mission partners to address interoperability earlier,” officials wrote in the summary.

The other is a pilot Center for Calibrated Trust Measurement and Evaluation (CaTE) that will bring the test, evaluation, verification and validation, acquisition, and research-and-development communities together “to develop standard methods and processes for providing evidence for assurance and for calibrating trust in heterogeneous and distributed human-machine teams,” the summary explains. Led by Carnegie Mellon University’s Software Engineering Institute in collaboration with the services and other FFRDCs, that pilot center will pursue operationalizing responsible AI, taking a warfighter-in-the-loop design, development and training approach.

On the second and third days of the conference, attendees participated in breakout sessions organized around tracks covering a wide range of critical AI and autonomy topics for DOD.

Those tracks included: large language models; multi-agent autonomous teaming; deception in AI; advanced AI processing; human-machine teaming; AI-enabled military course of action generation; the R&E AI hubs initiative; intelligent edge retraining; responsible AI and lethal autonomy; MLOps/development platforms; synthetic data for emerging threats; and calibrated trust in autonomous systems.


“The conference facilitated the exchange of knowledge and ideas, providing valuable input to shape the direction of government research in critical areas of AI and autonomy. Furthermore, it laid the groundwork for focused workshops on AI Hubs, sparked interest in the future R&E Community of Action and CaTE, and paved the way for a much larger follow-on event to be scheduled for January 2024,” officials wrote in the summary. 

To the senior defense official who briefed DefenseScoop on what unfolded, the event demonstrates one way DOD is deliberately working to “address safety concerns” as it develops and deploys capabilities with appropriate guardrails associated with “constitutional AI.”

“Constitutional AI is a new approach to AI safety that shapes the outputs of AI systems according to a set of principles,” the official said.

Via this approach, an artificial intelligence system has a set of principles, or “constitution,” against which it can evaluate its own outputs.

“CAI enables AI systems to generate useful responses while also minimizing harm. This is important because existing techniques for training models to mirror human preferences face trade-offs between harmlessness and helpfulness,” the senior defense official said.
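To illustrate the pattern the official describes, the following is a minimal sketch in Python of a constitutional-AI style critique-and-revise loop. It is not the department’s or any vendor’s implementation; the principles listed and the generate function are hypothetical placeholders standing in for whatever language-model call a real system would use.

```python
# Minimal sketch of a constitutional-AI style critique-and-revise loop.
# The CONSTITUTION entries and the generate() function are illustrative
# assumptions, not a real model API or an official set of principles.

CONSTITUTION = [
    "Do not reveal information that could enable harm.",
    "Be honest about uncertainty rather than speculating.",
    "Stay helpful: answer the underlying question where it is safe to do so.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call; returns a stub string here."""
    return f"[model output for: {prompt[:60]}...]"

def constitutional_response(user_prompt: str, max_revisions: int = 2) -> str:
    """Draft a response, then critique and revise it against each principle."""
    draft = generate(user_prompt)
    for _ in range(max_revisions):
        for principle in CONSTITUTION:
            # Ask the model to critique its own draft against one principle.
            critique = generate(
                f"Principle: {principle}\n"
                f"Response: {draft}\n"
                "Identify any way the response violates the principle."
            )
            # Ask the model to rewrite the draft in light of that critique.
            draft = generate(
                f"Original response: {draft}\n"
                f"Critique: {critique}\n"
                "Rewrite the response to satisfy the principle while "
                "remaining as helpful as possible."
            )
    return draft

if __name__ == "__main__":
    print(constitutional_response("Summarize the maintenance procedure."))
```

The design choice the official highlights shows up in the revision instruction: each pass asks the system to reduce potential harm while preserving usefulness, rather than trading one off entirely for the other.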


“The department is cognizant of what the state of the art is and recognizes that we want to safely deploy it,” they told DefenseScoop.


Written by Brandi Vincent

Brandi Vincent is DefenseScoop's Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.
