
Pentagon takes AI dogfighting to next level in real-world flight tests against human F-16 pilot

Officials provided an update on DARPA’s Air Combat Evolution program.
The X-62 VISTA flies in the skies over Edwards Air Force Base, California, March 23, 2023. (Air Force photo by Ethan Wagner)

Flight tests overseen by the Defense Advanced Research Projects Agency and the Air Force have demonstrated safe and effective employment of an autonomous fighter jet enabled by AI, including in “nose-to-nose” dogfighting against a human F-16 pilot, according to officials.

A few years ago, during DARPA’s AlphaDogFight Trials, algorithms went undefeated in computer-simulated battles against a military aviator. More recently, the agency’s Air Combat Evolution program has been using a modified F-16 known as the X-62A VISTA (Variable In-flight Simulator Test Aircraft) to put machine learning agents through their paces in the skies above Edwards Air Force Base, California.

A total of 21 test flights were conducted for the project between December 2022 and September 2023, the Department of Defense said in an ACE program update released Wednesday.

“Beginning in December of 2022, that was the first application of machine learning agents to control the flight path of fighter aircraft,” Col. James Valpiana, commandant of the Air Force Test Pilot School, said in a video accompanying the update.


More than 100,000 lines of flight-critical software changes were made over time to improve the tools.

Then, in September, “we actually took the X-62 and flew it against a live manned F-16. We built up in safety using the maneuvers — first defensive, then offensive, then high aspect nose-to-nose engagements where we got as close as 2,000 feet at 1,200 miles per hour,” Lt. Col. Maryann Karlen, deputy commandant of the test pilot school, said in the video.

The exercise marked “the first AI vs human within-visual-range engagement (a.k.a. ‘dogfight’), conducted with actual manned F-16 aircraft,” the DOD said in the program update.

As of publication, the department had not told DefenseScoop whether the machine learning agents controlling the X-62A beat the human pilot in that encounter.

However, defense officials touted the importance of these efforts for demonstrating that artificial intelligence technologies can operate safely in complex warfighting environments such as close-in air-to-air combat.


“In advance of formal verification methods for AI-based autonomy, the team pioneered new methods to train and test AI agent compliance with safety requirements, including flight envelope protection and aerial/ground collision avoidance, as well as with ethical requirements including combat training rules, weapons engagement zones, and clear avenues of fire,” according to the DOD program update.
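The program update does not say how those compliance checks were built, but the pattern it describes resembles a run-time safety monitor that screens an agent’s commanded maneuver against hard limits before it ever reaches the flight controls. The sketch below is purely illustrative under that assumption: the data structures, limits, and numbers are hypothetical and are not drawn from the ACE program or the X-62A’s actual software.

```python
from dataclasses import dataclass

# Illustrative only: every type, limit, and threshold here is hypothetical
# and is not taken from the ACE program or the X-62A's software.

@dataclass
class FlightState:
    altitude_ft: float   # height above ground level
    airspeed_kt: float   # calibrated airspeed
    g_load: float        # current load factor

@dataclass
class ManeuverCommand:
    target_g: float            # load factor the AI agent is requesting
    target_altitude_ft: float  # altitude the maneuver would descend or climb to

# Hypothetical envelope limits a safety monitor might enforce.
MIN_ALTITUDE_FT = 5_000.0  # hard floor for ground-collision avoidance
MAX_G = 7.0                # flight envelope protection limit
MIN_AIRSPEED_KT = 200.0    # margin above stall for aggressive pulls

def command_is_safe(state: FlightState, cmd: ManeuverCommand) -> bool:
    """Return True only if the commanded maneuver stays inside the envelope."""
    if cmd.target_altitude_ft < MIN_ALTITUDE_FT:
        return False  # would violate the altitude floor
    if abs(cmd.target_g) > MAX_G:
        return False  # exceeds the load-factor limit
    if state.airspeed_kt < MIN_AIRSPEED_KT and cmd.target_g > 2.0:
        return False  # too slow to pull hard safely
    return True

def filter_command(state: FlightState, cmd: ManeuverCommand) -> ManeuverCommand:
    """Pass a safe command through; otherwise substitute a benign recovery command."""
    if command_is_safe(state, cmd):
        return cmd
    # Fall back to a gentle, wings-level recovery toward a safe altitude.
    return ManeuverCommand(
        target_g=1.0,
        target_altitude_ft=max(state.altitude_ft, MIN_ALTITUDE_FT),
    )

if __name__ == "__main__":
    state = FlightState(altitude_ft=12_000.0, airspeed_kt=450.0, g_load=1.0)
    aggressive = ManeuverCommand(target_g=9.5, target_altitude_ft=3_000.0)
    print(filter_command(state, aggressive))  # replaced with the recovery command
```

The onboard safety pilots described below serve a similar supervisory role one level up: a human backstop that can take over if the automated checks prove insufficient.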

“The X-62A team demonstrated that cutting-edge machine learning based autonomy could be safely used to fly dynamic combat maneuvers. The team accomplished this while complying with American norms for safe and ethical use of autonomous technology,” Secretary of the Air Force Frank Kendall said in a video, adding that the capability is “transformational.”

Officials noted that human pilots were onboard the autonomous aircraft during the test flights in case anything went awry and they needed to take over, but they never had to activate the safety switch during the dogfights over Edwards Air Force Base.

Other organizations supporting the program include Calspan, Cubic Corporation, EpiSci, Lockheed Martin Skunk Works, physicsAI, Shield AI, the University of Iowa Operator Performance Laboratory, Johns Hopkins Applied Physics Laboratory, the MIT Computer Science and Artificial Intelligence Laboratory, and the MIT Lincoln Laboratory.

Kendall, a strong proponent of U.S. military adoption of AI, told lawmakers last week that he intends to fly aboard an F-16 in autonomous flight mode later this year.


Kendall has said the successes of the ACE program contributed to his decision to move ahead with the Air Force’s collaborative combat aircraft (CCA) program, an effort to develop and field next-generation autonomous drones for counter-air operations and other missions. The service plans to spend billions of dollars on that initiative in the coming years.

“The critical problem on the battlefield is time. And AI will be able to do much more complicated things much more accurately and much faster than human beings can. If a human being is in the loop, you will lose. You can have human supervision, you can watch over what the AI is doing, [but] if you try to intervene, you’re going to lose,” Kendall said during a panel at the Reagan National Defense Forum in December.

“I just got briefed by DARPA on some work that they’re doing on manned versus unmanned combat — basically aircraft fighters,” Kendall said. “The AI wins routinely with the way they structured the test … but the difference in how long it takes the person to do something and how long it takes the AI to do something is the key difference in the outcome. And we’re talking about seconds here. Just to give you a sense of parameters here, the best pilot you’re ever going to find is going to take a few tenths of a second to do something. The AI is going to do it in a microsecond — it’s gonna be orders of magnitude better performance. And those times actually matter. And you can’t get around that. But that’s the reality that we’re gonna have to face.”
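To put Kendall’s comparison in rough numbers: a few tenths of a second versus a microsecond is a ratio in the hundreds of thousands, roughly five to six orders of magnitude. The quick calculation below uses 0.3 seconds and 1 microsecond as illustrative readings of his remarks, not measured values.

```python
import math

# Illustrative figures taken loosely from Kendall's remarks, not measured data.
human_reaction_s = 0.3  # "a few tenths of a second"
ai_reaction_s = 1e-6    # "a microsecond"

ratio = human_reaction_s / ai_reaction_s
print(f"speed ratio: {ratio:,.0f}x")                    # 300,000x
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ~5.5
```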

Written by Jon Harper

Jon Harper is Managing Editor of DefenseScoop, the Scoop News Group’s online publication focused on the Pentagon and its pursuit of new capabilities. He leads an award-winning team of journalists in providing breaking news and in-depth analysis on military technology and the ways in which it is shaping how the Defense Department operates and modernizes. You can also follow him on X (the social media platform formerly known as Twitter) @Jon_Harper_
