For more than two hours on April 29, an advanced robot brain flew a Kratos-built Mako drone fighter exactly as it was supposed to. The flight test was a key milestone in the Air Force’s effort to develop a new, useful autonomy system for a growing fleet of plane-sized robot fighters.

Dubbed the “autonomy core system,” or ACS, the new robot control system is crucial to the Air Force’s ominously named Skyborg program—an AI that can pilot an airplane to fight and fly alongside human pilots, and learn in the process. 

This initial test was flown over the same Tyndall Air Force Base that previously experimented with the four-legged Ghost robots for base security. Before Skyborg flies alongside human-inhabited planes, the Air Force wants it to master the basics, which was the point of the April test. The Mako is a relatively cheap test platform for exercises like this, adapted from an aerial target drone.

These basics include the routine parts of flight, like taking off, staying airborne, and landing without incident, as well as more complex activities, like receiving navigational commands from humans. Part of the test included seeing if Skyborg could navigate the drone around a geofence, or an internally programmed barrier in the sky defined by GPS points. The Air Force watched the plane from the ground and from the sky, and tracked its progress on those tasks and others. 
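Conceptually, a geofence of the kind described above is just a polygon of GPS points, and checking whether the aircraft is inside it reduces to a standard point-in-polygon test. The sketch below is purely illustrative; the coordinates are made up, and nothing here reflects the actual ACS implementation.

```python
# Hypothetical illustration of a geofence check: a polygon of (lat, lon)
# vertices, tested with the classic ray-casting algorithm.
# This does NOT represent the real Skyborg/ACS software.

def inside_geofence(lat, lon, fence):
    """Return True if (lat, lon) lies inside the polygon `fence`,
    given as a list of (lat, lon) vertices."""
    inside = False
    n = len(fence)
    for i in range(n):
        lat1, lon1 = fence[i]
        lat2, lon2 = fence[(i + 1) % n]
        # Does this edge straddle the point's latitude?
        if (lat1 > lat) != (lat2 > lat):
            # Longitude where the edge crosses that latitude
            cross = lon1 + (lat - lat1) / (lat2 - lat1) * (lon2 - lon1)
            # Count crossings to one side of the point
            if lon < cross:
                inside = not inside
    return inside

# A rough, invented box over the Gulf near Tyndall AFB
fence = [(29.9, -85.8), (29.9, -85.3), (29.5, -85.3), (29.5, -85.8)]

print(inside_geofence(29.7, -85.5, fence))  # → True (inside the box)
print(inside_geofence(30.1, -85.5, fence))  # → False (north of the fence)
```

A real flight-control system would work in projected coordinates, add a buffer margin, and predict the aircraft's position ahead of time rather than testing only its current fix, but the underlying geometry is this simple.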

[Related: The future of air combat is drones launching more drones]

“We’re extremely excited for the successful flight of an early version of the ‘brain’ of the Skyborg system. It is the first step in a marathon of progressive growth for Skyborg technology,” Brigadier General Dale White, the Air Force’s program executive officer for fighters and advanced aircraft, said in a release. “These initial flights kick off the experimentation campaign that will continue to mature the ACS and build trust in the system.”

The publicly stated aim of the Skyborg program is to create robot airplanes that can fly missions fully autonomously, while costing far less than crewed fighters like the stealthy, pricey F-35. They are also far more expendable, though the word the Air Force uses is “attritable.” This is less about actively making a cheap, almost disposable drone, and more about having an airplane cheap enough that its loss in combat is not a significant cost or, perhaps as important, a loss of capability.

[Image: An Air Force artificial intelligence program flew a drone fighter for hours. The test was on April 29 at Tyndall Air Force Base in Florida. US Air Force]

Skyborg is not a single drone design, but is focused on creating and flying various drone bodies for a similar purpose. The term specifically refers to the kind of AI that will direct autonomous drone planes as they fly alongside inhabited fighters. The Air Force is contracting with three companies to actually build prototype fighter-jet drones to do the flying.

[Related: Boeing’s new autonomous fighter jet has a pop-off, swappable nose]

When Skyborg was first announced back in 2019, Will Roper, the Air Force’s assistant secretary for acquisition, specifically likened it to R2-D2, the robotic copilot of Luke Skywalker in Star Wars.

“I expect the first things that we’ll do will not appear as sexy as what you might imagine in a movie, but will be completely game-changing,” Roper said.

By figuring out the software for the Skyborg first, the Air Force can work on adapting that code to airframes later, allowing bodies to multiply and adapt as threats and needs change.

Crucial to that process will be how the AI onboard the drones takes in, stores, and adapts to new sensor information. Machine learning offers the potential for an entire fleet of drones to learn from the actions and data of each individual craft. Training algorithms often requires tremendous amounts of data, especially novel data. Some of that will be generated in training flights, which should make the aircraft competent in the scenarios it is explicitly trained for.

What will pose a challenge to Skyborg in the future, and what will likely vex any military project built around autonomous machines trained on available data, will be unusual situations. If the drone is used to flying alongside an F-35 and then departing on an attack run, what happens if its data link is severed, or if it is fed false information, or if hostile drones interfere with the flight path? Battle is the accumulation of expected risks creating wholly new scenarios. Trusting robots to navigate it smoothly means placing a tremendous amount of faith in a kind of data processing specifically designed to unearth unexpected results.

Over the next few months, the Air Force Research Laboratory plans to experiment more with the early Skyborg planes. Navigation with and around autonomous craft in the sky is an essential part of the Air Force’s vision for future fighter combat. Robot-assisted combat could finally mitigate the trend towards increasingly expensive fighters, and force countries to drastically rethink air force planning in the most significant way since the widespread adoption of jets and missiles.

Before all of this happens, Skyborg needs to prove it can play nice with other planes, and if not, needs to demonstrate that its autonomy core system can adapt so that it does so the next time it flies.
