Military AI Vanquishes Human Fighter Pilot in F-16 Simulation. How Scared Should We Be?
From the outside, the simulated aerial dogfight the Pentagon held two weeks ago looked like a standard demonstration of close-up air-to-air combat as two F-16 fighter jets barreled through the sky, twisting and diving as each sought an advantage over the other. Time and time again the jets would “merge,” with one or both pilots having just a split second to pull off an accurate shot. After one of the jets found itself riddled with cannon shells five times in these confrontations, the simulation ended.
From the inside, things seemed very, very different.
“The standard things we’re trained to do as a fighter pilot aren’t working,” lamented the losing pilot, an Air Force fighter pilot instructor with the call sign Banger.
That’s because this wasn’t a typical simulation at all. Instead, the U.S. military’s emerging-technologies research arm, the Defense Advanced Research Projects Agency, had staged a matchup between man and machine — and the machine won 5-0.
The AlphaDogfight simulation on Aug. 20 was an important milestone for AI and its potential military uses. While this achievement shows that AI can master increasingly difficult combat skills at warp speed, the Pentagon’s futurists must remain mindful of its limitations and risks, both because AI remains a long way from eclipsing the human mind in many critical decision-making roles, despite what the likes of Elon Musk have warned, and because racing ahead of ourselves could inadvertently leave the military exposed to new threats.
All the more remarkable, the winning AI pilot, developed by Heron Systems, was self-taught using deep reinforcement learning, a method in which an AI runs a combat simulation over and over again and is “rewarded” for successful behaviors and “punished” for failures. Initially, the AI agent is simply learning not to fly its aircraft into the ground. But after 4 billion iterations, Heron’s agent seems to have mastered the art of executing energy-efficient air combat maneuvers.
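The contestants’ actual training pipelines haven’t been published, and they used deep neural networks rather than the lookup table below, but a toy sketch of the underlying trial-and-error idea, with a made-up environment and reward scheme, might look like this in Python:

```python
import random

# Hypothetical, highly simplified "dogfight" environment, standing in for a
# full-fidelity F-16 simulation: the agent is rewarded for pointing its nose at
# the adversary and punished for flying into the ground.
class ToyDogfightEnv:
    def reset(self):
        self.alt = 5                          # altitude band, 0 (crashed) to 9
        self.angle = random.randint(1, 9)     # angle-off-target band, 0 = guns solution
        return (self.alt, self.angle)

    def step(self, action):                   # 0 = pull up, 1 = hold, 2 = turn toward target
        if action == 0:
            self.alt = min(9, self.alt + 1)
        elif action == 2:
            self.angle = max(0, self.angle - 1)
        self.alt = max(0, self.alt - random.choice([0, 1]))   # energy bleed / gravity
        crashed = self.alt == 0
        reward = -10.0 if crashed else (1.0 if self.angle == 0 else -0.1)
        return (self.alt, self.angle), reward, crashed

# Tabular Q-learning loop: the agent improves purely by trial and error,
# episode after episode, with no instruction beyond the reward signal.
Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

env, alpha, gamma, eps = ToyDogfightEnv(), 0.1, 0.95, 0.1
for episode in range(50_000):                 # real systems run billions of iterations
    s, done = env.reset(), False
    for _ in range(50):
        a = random.randrange(3) if random.random() < eps else max(range(3), key=lambda x: q(s, x))
        s_next, r, done = env.step(a)
        # Nudge the value estimate toward the reward plus discounted future value.
        Q[(s, a)] = q(s, a) + alpha * (r + gamma * max(q(s_next, x) for x in range(3)) - q(s, a))
        s = s_next
        if done:
            break
```

Early in training, an agent like this mostly crashes; over many episodes the penalty for hitting the ground and the steady reward for holding a firing solution shape its behavior, which is the same basic dynamic, scaled up enormously, that the tournament entrants relied on.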
Human pilots could perhaps devise tactics designed to exploit the Heron AI’s limitations, just as Banger did with temporary success in the final round of the competition. But, like the Borg in “Star Trek,” the AI-powered pilot may, in turn, eventually learn from its failures and adapt. (The machine-learning algorithm was not allowed to keep learning during the tournament itself.)
However, the tournament’s focus on within-visual-range combat with guns didn’t challenge the AI pilot to perform more complex tasks. It covered only a narrow, though foundational, slice of air warfare known as “basic fighter maneuvers,” leaving out aspects such as sensors and missiles that may decide the outcome of an air battle well before the opposing pilots ever come close enough to see each other.
For comparison, the U.S. Air Force’s newest fighter, the F-35, is optimized less for dogfighting and more for stealthy surprise attacks executed from beyond visual range, as well as for fighting cooperatively with friendly air and surface forces by sharing sensor data.
Even more importantly, one-on-one duels between individual fighters, like the ones in the simulation, are very different from likely air-battle scenarios in a major conflict, which could play out over huge distances and involve dozens of supporting air and surface units.
And machine learning still has major limitations. It can have trouble working collaboratively, even though cooperation is key to how militaries fight wars. AI agents are also known to rigidly adhere to flawed assumptions based on limited datasets, and their trial-and-error learning style can produce suboptimal outcomes, thanks to the errors side of the equation, when confronting novel situations.
Most armed drones today are remotely piloted by a human and rely on autonomous algorithms only to avoid crashing when their control link is interrupted. But remote control prevents drone aircraft from reacting with the superhuman speed and precision the recent tournament demonstrated they are capable of.
One concept rapidly entering the mainstream is the so-called Loyal Wingman: a drone controlled by a nearby manned fighter, whose pilot instructs the drone’s AI agent to perform specific tasks. Basically, the human handles big-picture decision-making, while the AI takes on the risky dirty work of pressing home attacks and drawing away enemy missile fire.
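As a purely illustrative sketch of that division of labor, and not any real Loyal Wingman control interface, the software contract might resemble the following; the task names and classes here are invented for the example:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

# Hypothetical high-level tasks a human pilot might delegate to a wingman drone.
class Task(Enum):
    SCOUT_AHEAD = auto()
    JAM_RADAR = auto()
    DECOY_MISSILES = auto()
    ATTACK_TARGET = auto()

@dataclass
class TaskOrder:
    task: Task
    target_id: Optional[str] = None   # the human picks the target

class LoyalWingman:
    """Toy drone agent: the human decides what to do, the AI decides how."""
    def execute(self, order: TaskOrder) -> str:
        # In a real system, low-level flight control, evasion and timing would be
        # handled autonomously here, while weapons release stays human-approved.
        if order.task is Task.ATTACK_TARGET:
            return f"maneuvering for a firing solution on {order.target_id}, awaiting consent to fire"
        return f"executing {order.task.name} autonomously"

# The manned fighter's pilot issues big-picture orders; the drone handles the risky details.
drone = LoyalWingman()
print(drone.execute(TaskOrder(Task.DECOY_MISSILES)))
print(drone.execute(TaskOrder(Task.ATTACK_TARGET, target_id="bandit-2")))
```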
A key advantage of the Loyal Wingman concept is cost: such drones may run as little as $3 million each, compared with roughly $80 million to over $100 million for a new F-35 stealth fighter. That means the drones could be treated as expendable assets that can be sacrificed if necessary.
Indeed, there is a debate raging in military circles across the globe as to whether it’s affordable to develop a new generation of manned fighters or whether lower costs and greater convenience dictate that the next generation will be largely unmanned with at least semi-autonomous AI.
But an AI capable of performing the full range of missions doesn’t exist yet. And as AI- and remote-controlled unmanned systems proliferate, militaries will sharpen their ability to disrupt control links and hack AI systems. Therefore, caution is needed to avoid a “Battlestar Galactica” scenario wherein a heavily networked military is crippled by jamming and computer viruses.
The potential advent of autonomous war machines also arouses justified ethical and existential concerns. For example, integrating facial-recognition AI to further automate drone strikes could go wrong in all sorts of terrible ways. And we should certainly never leave AI in a position to initiate the use of strategic nuclear weapons, as has been suggested.
In a sense, the AlphaDogfight confirmed something we already knew in our gut: Given sufficiently good algorithms, AI can outperform most humans in making rapid and precise calculations in a chess-like contest with clearly defined rules and boundaries. How flexibly and cooperatively AI pilots can make decisions in the more chaotic and uncertain environment of a high-end war zone remains to be seen.
In the next few years, semi-autonomous AI will be harnessed by pilots of both manned and unmanned combat aircraft. AI agents will eventually be delegated piloting and attack roles that they can perform faster and more precisely than humans.
However, AI as it stands isn’t yet poised to innovate or make informed judgments in response to novel problems, and for that reason it is essential that humans remain in the loop of future robotic air wars. One of those novel problems, in fact, will be deciding just how much autonomy we can safely accord to future robotic war machines.
By Sébastien Roblin, military writer