2022 in review: What the Pentagon’s CDAO accomplished in its inaugural year


In 2022 — its first full year of existence — the Defense Department’s Chief Digital and Artificial Intelligence Office (CDAO) sorted out how the teams that comprise it will collectively operate, staffed up and established a “hierarchy of needs” to guide members’ work in the near term.

Four predecessor organizations in the DOD — the Joint Artificial Intelligence Center (JAIC), Defense Digital Service (DDS), Office of the Chief Data Officer, and the Advana program — were combined to form the CDAO, which was announced in late 2021 and reached full operating capability this year. 

“The [DOD] must become a digital and AI-enabled enterprise capable of operating at the speed and scale necessary to preserve military advantage. The [new CDAO and its leader] are charged with ensuring it does,” Deputy Defense Secretary Kathleen Hicks wrote in a memorandum unveiling the major reorganization and formation of the office, which functions under her purview.

Here’s a look at how the CDAO spent its “freshman” year creating a firm foundation to pave the way for achieving its goal of rapid — but responsible — AI adoption and scaling across the Pentagon’s sprawling enterprise.


Filling out the ranks

Following a long and deliberate hiring process, Craig Martell, a former machine learning professor at the Naval Postgraduate School — who more recently served in executive roles at Lyft, LinkedIn and elsewhere — was tapped to lead the CDAO in April.

“I’m bringing to the problem both an academic view of what AI is and where it’s useful, and an industry view of the right way to do the practice,” he recently explained at a NATO conference.

By June, the nascent office had hired nearly a dozen senior leaders to serve in its top positions, DefenseScoop confirmed.

Some of its notable recruits included artificial intelligence ethics and research experts Diane Staheli, who was selected to lead the CDAO’s Responsible AI (RAI) Division, and Jane Pinelis, who was named chief of AI assurance. Joe Larson — a Marine Corps intelligence reservist who co-founded the Pentagon’s premier AI endeavor, Project Maven — was also tapped to serve as deputy CDAO for algorithmic warfare.


The office reached initial operating capability in February and officially transitioned to full operating capability in the fall.

However, that progress did not come without some hiccups. For instance, the CDAO was recently approved to extend an existing contract to finish policy development and data governance work that fell behind schedule amid the complex restructuring of DOD components required to form the new office.

“Every day we are maturing as an organization. The reorganization has been and is happening,” CDAO Spokesperson Kathleen Clark told DefenseScoop about the move.

Questions remain about the office’s role as a new portfolio owner for the evolving Project Maven work within DOD. In Maven’s next chapter, all lines of its effort related to geospatial intelligence will fall under the National Geospatial-Intelligence Agency. Other elements of the project not associated with GEOINT pursuits will move under the purview of the CDAO.

Still, the office has essentially been solidified in legislation this year. And the fiscal 2023 National Defense Authorization Act includes multiple program provisions and congressional taskings that direct organizations across DOD to coordinate with and support the CDAO.


One of the CDAO’s notable early outputs this year was the new one-stop online “Tradewind Solution Marketplace.” The resource was built to help the Defense Department solicit, evaluate and curate technologies specifically associated with AI, machine learning, data and analytics — and also to accelerate the time it takes for DOD components to purchase those digital capabilities.

Top Pentagon officials also recently tasked the office with integrating all the data for its high-priority Joint All-Domain Command and Control (JADC2) initiative.

And the CDAO Governing Council — a 4-star level governance body run by the newly established office — this summer replaced the former AI Executive Steering Group, a 3-star level governance body led by the JAIC. That move marked what experts deemed a clear elevation in seniority of the department’s primary mechanism for AI governance. The new council was included in the office’s comprehensive Responsible AI Strategy and Implementation Pathway — a high-level plan of action to ensure all the DOD’s AI use abides by U.S. ethical standards. That document was released in June.

Over the next year, the CDAO plans to “significantly expand and mature [its associated] offerings and guidance to the department” on RAI, according to Staheli. She said that work will culminate in a “minimally viable product” of the office’s “suite of RAI tools.”

Department of Defense Chief Digital and Artificial Intelligence Officer Dr. Craig Martell spoke at the Defense Intelligence Agency’s DoDIIS Worldwide Conference, Dec. 13, 2022, at the Henry B. González Convention Center in San Antonio, Texas. (DOD photo)

Building the ‘scaffolding’ 

On top of those efforts, in 2022 the CDAO spent a lot of time working to establish an overarching model to broadly guide its approach to priorities and operations moving forward.

“If we want to build a robust infrastructure that allows for AI to be done correctly, we have to think about it as a ‘hierarchy of needs,’” Martell said during his keynote address at a NATO conference in November, where he described that approach his team established this year. 

At the bottom of the CDAO’s hierarchy of needs are the “enablers” — the components, Martell said, of a robust cloud environment that can move information to the tactical edge. That work is the chief information officer’s job, he added.

Next in the hierarchy is the quality data layer. To fully enable artificial intelligence and machine learning capabilities across the department — and ensure seamless adoption in the not-so-distant future — Martell explained that the office is now working to clean and organize all the data that will underpin those AI and ML solutions.


“Let me tell you, the [DOD] has greater-than-exabytes of data. We are not moving [it] into a central location, right? That would be a very silly idea. But having a layer which tells you where data is, tells you about that data — and it tells you the semantics of that data and allows you to label it so that someone else can take the data and the labels and use it — that’s really what we need. That’s job zero,” he noted during the NATO event.
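The layer Martell describes — one that records where data lives, what it means and how it is labeled, without centralizing the data itself — resembles a metadata catalog. The following minimal sketch illustrates that idea; the class names, dataset names and storage path are all hypothetical, not anything the CDAO has published.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a metadata catalog: the data itself stays where
# it is; the catalog only records its location, semantics and labels.
@dataclass
class DatasetRecord:
    name: str
    location: str                     # where the data physically lives (not moved)
    description: str                  # the semantics: what the data means
    labels: set = field(default_factory=set)

class DataCatalog:
    def __init__(self):
        self._records = {}

    def register(self, record: DatasetRecord) -> None:
        self._records[record.name] = record

    def label(self, name: str, *labels: str) -> None:
        self._records[name].labels.update(labels)

    def find_by_label(self, label: str) -> list:
        """Return records carrying a label, so others can reuse data + labels."""
        return [r for r in self._records.values() if label in r.labels]

catalog = DataCatalog()
catalog.register(DatasetRecord(
    name="maintenance_logs",
    location="s3://depot-a/logs/",    # hypothetical path; data is referenced, not copied
    description="Aircraft maintenance records, 2015-2022",
))
catalog.label("maintenance_logs", "logistics", "time-series")
hits = catalog.find_by_label("logistics")
print([r.name for r in hits])         # ['maintenance_logs']
```

The design choice mirrors the quote: discovery and labeling happen in one place, while exabytes of underlying data never move.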

Above that in the hierarchy is an analytics layer that allows decision-makers to more clearly see what all that data says. And then beyond that, Martell said, is ensuring that a strong support structure props up all the capabilities in place. 

Before they were merged, some of the teams that make up the CDAO previously created a pipeline for Defense officials to build their own machine learning algorithms for wider use. Now though, personnel aim to enable a flexible infrastructure with robust “scaffolding” to ensure more models produced outside of DOD can be responsibly deployed by the department.

“I think most models that we’re going to use should be built by contractors, or experts, or academia, or our partners. We should allow for models to come from everywhere,” Martell said.

The Pentagon, he noted, is currently more interested in developing AI practitioners than just hiring a bunch of experts in the emerging technology field.


“I don’t know if we need a bunch of PhDs in machine learning to be able to do warfighting — I think we’d rather have warfighters who interact with a set of PhDs in machine learning, and that set of PhDs in machine learning builds those models for us. So, what we’re thinking about is ‘what’s the scaffolding around the model?’” Martell explained.  

Using a pop culture example to illustrate his point about the need for the right support structure for AI, Martell referenced the documentary Coded Bias.

The story, he said, is about a Black woman studying for her undergraduate degree at MIT who “wanted to build a facial recognition system” that could act as “a mirror that would judge her mood, and then tell her things to make her feel better about her day.”

Martell noted that the algorithm “wouldn’t recognize her face” — but once the woman donned a lighter-toned “porcelain drama mask and put it over her face, the facial recognition system instantaneously” picked up on a face as a subject in front of it.

“Is that model [racially] biased? No, the model is not biased. It’s just counting on the past to predict the future. It’s the past that was biased. So, all of the data that was gathered to build that facial recognition model was gathered in the ‘60s, ‘70s and ‘80s. But what do you think the machine learning community looked like [then] in the United States? It was a bunch of people who look a lot like me — old white men, right?” Martell said. 


He explained that “it turns out that those older facial recognition models worked really great” for white men, but they were less accurate for women and people of color. 

“It’s, again, not the AI itself. It’s the data that trained the model that was biased,” Martell said.

Eventually, the woman who made the movie convinced major tech companies “to add more to their training data so that their model was no longer biased.” The companies obliged and “now the facial recognition models that they’re using aren’t [as] biased,” according to Martell.

“That’s part of the scaffolding around the model. It’s not just the math to build the model … but understanding whether the data is biased or not is extremely important if you want to do responsible AI. So, we’re thinking really hard about” that, he said.
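The failure mode Martell describes — a model that looks accurate in aggregate but performs poorly for some groups — is typically caught by disaggregating evaluation results across subgroups. A short illustrative sketch with invented numbers (not any DOD dataset):

```python
from collections import defaultdict

# Illustrative sketch (invented data): a single overall accuracy number
# can hide a model that works well for one group and poorly for another.
def accuracy_by_group(predictions):
    """predictions: list of (group, correct_bool) tuples."""
    totals, rights = defaultdict(int), defaultdict(int)
    for group, correct in predictions:
        totals[group] += 1
        rights[group] += int(correct)
    return {g: rights[g] / totals[g] for g in totals}

preds = (
    [("group_a", True)] * 95 + [("group_a", False)] * 5 +    # 95% accurate
    [("group_b", True)] * 65 + [("group_b", False)] * 35     # 65% accurate
)
overall = sum(c for _, c in preds) / len(preds)
print(round(overall, 2))           # 0.8 — looks acceptable in aggregate
print(accuracy_by_group(preds))    # the per-group breakdown reveals the gap
```

As in the facial-recognition anecdote, the problem surfaces only when performance is measured per group rather than overall — which is why checking the training data's coverage is part of the "scaffolding."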

“Part of that scaffolding around the model is training people to not just look at the things the system is saying, but making sure people are still looking for things the system missed,” he added. “And then, when it missed it — you have to be able to say ‘you missed this’ and have that feedback into the model to make the model better.”
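The feedback loop Martell outlines — people flagging what the system missed, with those misses fed back to improve the model — can be sketched as a simple human-in-the-loop step. Everything below (function name, detection labels) is hypothetical illustration, not a described CDAO system.

```python
# Hypothetical sketch of the human-feedback "scaffolding": reviewers flag
# detections the model missed, and those corrections are appended to the
# training set so the next model update can learn from them.
def review_and_feed_back(model_outputs, reviewer_findings, training_data):
    """Compare model detections with a human reviewer's findings and
    add every miss, with its human-supplied label, to the training data."""
    missed = [item for item in reviewer_findings if item not in model_outputs]
    for item in missed:
        training_data.append((item, "confirmed"))  # label comes from the reviewer
    return missed

model_outputs = {"vehicle_01", "vehicle_02"}                     # what the model found
reviewer_findings = {"vehicle_01", "vehicle_02", "vehicle_03"}   # reviewer also saw vehicle_03
training_data = []

missed = review_and_feed_back(model_outputs, reviewer_findings, training_data)
print(missed)          # ['vehicle_03']
print(training_data)   # [('vehicle_03', 'confirmed')]
```

The point of the sketch is the direction of flow: human review is not just a check on outputs, it is a data source for the next training cycle.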


The CDAO chief emphasized that a priority going into 2023 for his team is to continue to “think about that scaffolding.” 

“But, I’ll be honest with you,” Martell added, “most of our energy right now is on the quality data layer.” 


Written by Brandi Vincent

Brandi Vincent is DefenseScoop's Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.
