Why Centcom wants ‘self-service’ computer vision for warfighters

DefenseScoop was exclusively briefed on the command’s new Desert Sentry commercial solutions opening.
4k aerial view of self driving autopilot cars driving on a highway with technology tracking them. (Getty Images/FlashMovie)

U.S. Central Command is moving to explore and quickly adopt intuitive, user-driven commercial platforms that can enable military analysts and operators with limited technical expertise to rapidly create and apply advanced computer vision capabilities for real-time, current operations.

According to two senior officials, the ultimate aim of Centcom’s new commercial solutions opening (CSO) released in collaboration with the Pentagon’s Chief Data and Artificial Intelligence Office is to pave the way for self-service platforms that allow warfighters to produce custom, performant CV models in seven days or less.

“That’s the unique thing here — we’re trying to enable our broader workforce to be able to self-serve their urgent model gaps and build things quickly, so that we can respond to those emergent threat concerns,” Centcom’s Chief Data Officer Michael Foster told DefenseScoop.

In a joint interview alongside his colleague Chief Technology Officer Schuyler Moore, the two senior officials briefed DefenseScoop on their intent behind this new CSO, how it’ll unfold and how it might reach Defense Department components beyond just Central Command.

“This is not just Centcom working in a vacuum. We’ve got a really great set of partners coming in with us,” Moore said.

A ‘gap filler’

“In general, when we have thematic areas of investment for digital modernization activities here at Centcom, we usually will give them a name. Desert Sentry is the name that we have given to this particular thrust around computer vision,” Foster told DefenseScoop.

In the official CSO, which is open through the month of June, Centcom invites solution providers to submit a three-page “Discovery Paper” pitching their capabilities, via the CDAO-aligned TradewindAI platform. 

A formal pitch round will follow in the Aug. 5-16 time frame. Then, based on results stemming from there, Centcom might make none, one, or multiple pilot project awards, likely in the form of other transaction agreements.

“I’m trying to be transparent in that it’s not a foregone conclusion that one or more capabilities will emerge from this activity with follow-on funding — but there is a path for that. Because, if we find something is actually fully responsive to the minimum feature set that’s described in the CSO, that is something that we would deem as potentially being operationally relevant as soon as possible,” Foster explained.

He and Moore declined to provide any details regarding how much funding could be allocated for this work.

In drafting the CSO, Foster said his team was “very deliberate in describing mandatory features and desirable features.”

“The mandatory features — think of those in aggregate — like, all of these features must be met in order to be a viable solution. And often, one of the perceptions we’re getting is that initial interest from industry usually comes from groups that might provide one or two of those functions, but not necessarily an integrated capability,” Foster said.

“But let’s think about the end game here. The end game is to enable a small group of users — operators and analysts — to self-serve the entirety of the process. That doesn’t work if it gets broken across a segmented group of tools. So, we’re trying to get an integrated, streamlined interface that addresses the totality of mandatory features,” he explained.

Prior to joining Centcom in December, Foster served in leading data and AI-enabling roles at the Air Force and National Reconnaissance Office, and more recently in executive positions at Maxar and CrowdAI.

“I think it is worth pointing out, [Foster] has worked with the organization that builds satellites within NRO. He’s been with the organization that processes data that comes off satellites with NGA. [Foster] has been with industry, building the tools to process that imagery and do more in addition to the other imagery that comes from commercial [assets]. So, he was built for this and has experienced every angle of this that could possibly exist,” Moore noted.

Drawing from his time in the private sector, Foster said he’s recognized that there’s much more that the military can do to leverage commercial capabilities. 

“It really falls in line with the idea of ‘buy what we can, build what we must,’” he told DefenseScoop.

Computer vision platforms that allow users to drive a model development process in a very rapid fashion are technical capabilities that Foster said are already “emergent in industry, if not mature.”

“And then, if you compare that to some of the pain points that we’re feeling right now in terms of seeing problems and specific AI use cases where we need to adapt our models quickly — and specifically, our users have that intuition in terms of how their problem sets are adapting and evolving, which makes them very well-suited to drive that model development process,” the CDO explained.

“So it’s really, can we put our users at the center of a model build process in a way that makes them more responsive to mission demands? And that’s a bit different from the traditional approaches to AI, which have largely been more of an asynchronous process by which demand signals are gathered, you take that out to machine learning developers to be responsive to try to best address it — but then … you stand by and wait for something to come back. And that iteration is not necessarily as fast as what the mission actually requires,” Foster said.

DefenseScoop asked him and Moore to point to a real-world illustration that helps visualize this need.

“There’s an image that’s been in the media from Ukraine in recent months, where it shows Russians were putting car tires across the wings of one of their strategic bombers to try to presumably defeat things that were trying to target those bombers. It was a defensive countermeasure. But the presumed point of it was, it was defeating AI that was looking for things that look like bombers — but if you put car tires across the entire broad cross-section of the wings, it can defeat AI,” Foster said.

Moore added: “I think that that is the best and most tangible example. It also implies the speed at which you have to iterate, because the time that it takes to throw tires on top of things and how long it takes to train a model, it really drives that time urgency.”

Putting the need another way, she offered a hypothetical associated with modern Centcom missions. 

“If you have an adversary that’s changing cars every day, your model may be really good at looking for a red Toyota,” Moore said. “But now you need to look for a white van and you need to look for a black pickup truck,” yet the model might not be well-trained for that.
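Moore's hypothetical maps onto a pattern that commercial computer vision platforms commonly use to support rapid, user-driven retargeting: keep a heavyweight feature extractor frozen and let end users add or swap target classes with a handful of labeled examples. The sketch below is purely illustrative and not based on any Centcom or vendor system; it uses a toy nearest-centroid classifier over synthetic "embeddings" to show how registering a new class, a white van after a red Toyota, can be a seconds-long update rather than a full retraining cycle.

```python
# Illustrative sketch only: a nearest-centroid classifier over image
# embeddings. A user "self-serves" a new target class by supplying a
# handful of labeled examples; the upstream feature extractor is never
# retrained. All data here is synthetic.
import numpy as np

class CentroidClassifier:
    def __init__(self):
        self.centroids = {}  # label -> mean embedding vector

    def add_class(self, label, embeddings):
        # Rapid update: the "training" is just averaging the new
        # examples' embeddings into a class centroid.
        self.centroids[label] = np.mean(embeddings, axis=0)

    def predict(self, embedding):
        # Assign to the class whose centroid is most similar (cosine).
        def score(c):
            return np.dot(embedding, c) / (
                np.linalg.norm(embedding) * np.linalg.norm(c))
        return max(self.centroids, key=lambda lbl: score(self.centroids[lbl]))

rng = np.random.default_rng(0)
clf = CentroidClassifier()
# Pretend these rows are embeddings emitted by a frozen vision backbone.
red_toyota = rng.normal(loc=0.0, size=(20, 64))
clf.add_class("red_toyota", red_toyota)
# The adversary switches vehicles: an analyst adds a new class in seconds
# from a small batch of fresh examples, with no ML engineer in the loop.
white_van = rng.normal(loc=3.0, size=(20, 64))
clf.add_class("white_van", white_van)
query = rng.normal(loc=3.0, size=64)
print(clf.predict(query))
```

In a real pipeline the embeddings would come from a pretrained detection backbone, and the few-shot update would sit behind an interface simple enough for an analyst rather than a machine learning engineer; that compressed loop is the "self-service" workflow the officials describe.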

With this CSO opportunity, Centcom officials are placing what Moore called “a ruthless focus on utility.”

“We don’t need all the bells and whistles. We don’t need fanciness in the PowerPoints and everything else. We need tools that anybody from our intelligence shops and from our other shops can pick up and use today. That’s one metric,” she told DefenseScoop.

Foster chimed in: “Perhaps it goes without saying, but I’ll offer that we have mechanisms to optimize AI performance. This is meant to be a gap filler for those things which are truly urgent, and where we have to go fast.”

Beyond Central Command

“This CSO is essentially a series of control gates and evaluations by which we intend to winnow down the best athletes, culminating in participation with Centcom users at an exercise. So ultimately, our users will interact with the best athletes and give us feedback on which platforms, if any, really addressed the mission need,” Foster explained.

In the official post on TradewindAI, officials also wrote that the CDAO and its partners intend to accept and review responses to the announcement periodically for inclusion in the Digital Falcon Oasis exercises.

Moore and Foster said Central Command has experimented with computer vision in prior digital exercises, and has seen a lot of good returns on the feedback they can offer for model builders. 

Now, this work is meant to be a continuation and flash expansion of that, which also opens the aperture to commercial partners.

“Relative to inclusion in Digital Falcon Oasis, Centcom has a periodic drumbeat by which we perform operational validation on various capabilities and experiment with all things digital modernization,” Foster noted.

“The extent to which industry has had an opportunity to play in those really varies from opportunity to opportunity. But we wanted to make it clear that participating in Desert Sentry is going to be in context to a broader exercise narrative that Centcom advocates for, champions and drives,” he said.

While responses to the CSO are only going to be accepted through June 30, the general solicitation on that landing page will be periodically reviewed, updated and amended to garner information regarding other solutions that meet Centcom’s shifting mission needs.

“This is not a Centcom-unique problem. So, we hope that this opportunity becomes more widespread when we work with the CDAO,” Moore noted.

“And one of the key things that was attractive about working with the CDAO with a commercial solutions opening vehicle, in particular, is that it’s flexible,” Foster added.

The technology and data chiefs also emphasized that in this process they are bringing together a panel of subject matter experts from across a broad range of organizations to review these methods.

“What has excited me and has made this easy for me is that it really is so focused on users being able to help themselves, teaching us how to fish instead of handing us fish,” Moore said.

“That approach in computer vision, in particular, has sometimes been a challenge because there are questions of compute time and latency of time from a picture being snapped to the time when AI is run on it and then you get a detection on the backend. This is getting at shortening that time and making the entirety of the cycle of model development drop to as low as you can. And I don’t think that I’ve seen that anywhere in the department,” she told DefenseScoop.

Written by Brandi Vincent

Brandi Vincent is DefenseScoop's Pentagon correspondent. She reports on emerging and disruptive technologies, and associated policies, impacting the Defense Department and its personnel. Prior to joining Scoop News Group, Brandi produced a long-form documentary and worked as a journalist at Nextgov, Snapchat and NBC Network. She was named a 2021 Paul Miller Washington Fellow by the National Press Foundation and was awarded SIIA’s 2020 Jesse H. Neal Award for Best News Coverage. Brandi grew up in Louisiana and received a master’s degree in journalism from the University of Maryland.
