
On March 2, the US Navy pulled an F-35C from the ocean. The $94.4 million jet came in hard for a landing on January 24, then skidded across the deck, injuring sailors before plummeting off the edge and into the sea. The pilot ejected and survived, but the incident raises an ominous question about aircraft operations: What can the military do to ensure pilots land safely as many times as they take off, and help all aviators avoid the mistakes of their peers?

The F-35C is a Navy plane, and aircraft carrier landings are notoriously hard. But no branch of the military is immune to crashes, and both the Marines and Air Force have crashed planes this year. Post-crash investigations, pulling from the recorded avionics and telemetry data of the planes, can reveal the specific causes of error, from mechanical failure to choices made by pilots.

In 2020, the Air Force turned to artificial intelligence to catch unusual flight patterns during training, before they become costly or even tragic errors. To better understand outliers in flight patterns, the Air Force is working with Crowdbotics, an artificial intelligence/machine learning firm, to analyze and process the data that planes already collect. This data processing and analysis is done with custom software, which both the company and the Air Force refer to simply as the tool.

“Fighter aircraft are one of the biggest investments in the American military. They are super advanced pieces of technology with a human being attached to them,” says Crowdbotics CEO Anand Kulkarni. Fighters are “extremely well instrumented with data production machinery, and all of that data more or less is thrown away at the end of every flight.”

Planes capture this data, the avionics and flight telemetry, many times a second, creating a record of time, speed, and position. It’s a massive data set produced by every flight, and one that is hard for humans to process without the aid of data analysis tools. At present, that data can be used in debriefings, where pilots sit after a mission and watch the flights play out on a monitor in the space of a couple hours. That’s enough time to catch any big changes, like a jet’s sudden break with formation, but the data has the potential to reveal much more. 
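To give a sense of scale, here is a minimal sketch of why that data set is so large. The record structure, field names, and sampling rate below are illustrative assumptions, not the actual F-15E avionics format:

```python
from dataclasses import dataclass

# Hypothetical telemetry record -- the fields are illustrative, chosen to
# match the article's description of time, speed, and position.
@dataclass
class TelemetrySample:
    t: float             # seconds since takeoff
    airspeed_kts: float  # indicated airspeed, knots
    lat: float
    lon: float
    alt_ft: float

def sample_count(duration_s: float, rate_hz: float) -> int:
    """How many records a flight produces at a given sampling rate."""
    return int(duration_s * rate_hz)

# A 90-minute sortie sampled at an assumed 20 times per second:
print(sample_count(90 * 60, 20))  # 108000 records
```

Even at a modest assumed rate, a single training flight yields a six-figure record count, which is why humans need analysis software to make sense of it.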

“Typically at the end of the debrief the student keeps some notes, but we erase our tapes,” says Major Mark Poppler, of the 4th F-15E training squadron. “We erase our shot sheets and all the data gets flushed. That’s how this project came about. My predecessor recognized this and thought, given the advances we’ve made in computing in recent decades, how can we automate a lot of this process to make debrief more efficient and then to flush less data?”

At present Crowdbotics’ work with the Air Force is limited to the F-15Es in the training squadron at Seymour Johnson Air Force Base in North Carolina. The program is in the Phase 2 stage, with much more of its potential awaiting success and expansion to other aircraft.

The Crowdbotics screen. Crowdbotics

Takeoffs and landings

Even limited to just the training flights of F-15E pilots, Crowdbotics’ prototype analytical tool appears to make it possible to catch performance deviations before they become a major problem. Consider the work of landing an aircraft: part of every flight, and ideally a routine event rather than a disaster. Data collected by planes and processed through the prototype tool built by Crowdbotics can show if anything is amiss.

“What air speed you execute an approach and a landing in a Strike Eagle [an F-15E] is dependent on your fuel weight,” says Poppler. “And so [the prototype tool] can actually calculate final approach speed based on your fuel weight. And then grade you to the same standards that an evaluator pilot would grade you: Was your approach on speed? Did you touchdown fast? Did you touchdown in the appropriate portion of the runway?”
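The logic Poppler describes can be sketched in a few lines. The actual F-15E approach-speed tables are aircraft-specific and not public; the linear model, constants, and tolerance below are invented purely to illustrate the grading idea:

```python
# Illustrative only: base speed, weight correction, and tolerance are
# made-up numbers, not real F-15E performance data.
def target_approach_speed_kts(fuel_lbs: float,
                              base_speed_kts: float = 150.0,
                              kts_per_1000_lbs: float = 2.0) -> float:
    """A heavier jet needs a faster final approach."""
    return base_speed_kts + kts_per_1000_lbs * (fuel_lbs / 1000.0)

def grade_approach(actual_kts: float, fuel_lbs: float,
                   tolerance_kts: float = 5.0) -> str:
    """Grade the approach the way an evaluator would: on speed, fast, or slow."""
    target = target_approach_speed_kts(fuel_lbs)
    if actual_kts > target + tolerance_kts:
        return "fast"
    if actual_kts < target - tolerance_kts:
        return "slow"
    return "on speed"

# With 6,000 lbs of fuel the notional target is 162 knots:
print(grade_approach(168.0, 6000.0))  # flagged "fast"
```

The same pattern extends to touchdown point on the runway: compute a target from the flight's conditions, then flag deviations outside a tolerance.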

[Related: Everything to know about the Air Force’s new fighter jet, the F-15EX Eagle II]

Those are all questions that can be easily answered with avionics data, but are hard for an instructor in another plane or on the ground to make sense of. In training, if a pilot consistently approaches at too sharp an angle, the data could catch it before the instructor does, and the instructor could adjust accordingly. By processing flight data from the same pilots over time and across a program, the Air Force could use the tool to assess how an individual improves. By looking at flight data collected across a squadron, the tool can detect if a pilot is doing something different from everyone else.

“The way that I look at this and the way that the software looks at it, whenever we see outliers, we don’t necessarily start by saying this is good or bad,” says Kulkarni, of Crowdbotics. “We say ‘this is different from the book’ or different from the norm of what most pilots do.” But different in this case could mean worse—or better.
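"Different from the norm" is a statistical idea, and a simple version of it can be sketched with a z-score against the squadron's distribution. The metric, pilot names, and threshold here are all illustrative assumptions, not Crowdbotics' actual method:

```python
import statistics

def flag_outliers(metric_by_pilot: dict[str, float],
                  z_threshold: float = 1.5) -> dict[str, float]:
    """Return pilots whose metric deviates from the squadron mean by more
    than z_threshold standard deviations, mapped to their z-scores.
    A modest threshold is used because one outlier in a small sample
    inflates the standard deviation."""
    values = list(metric_by_pilot.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return {pilot: (v - mean) / stdev
            for pilot, v in metric_by_pilot.items()
            if abs(v - mean) / stdev > z_threshold}

# Notional touchdown points, in feet past the runway threshold:
touchdowns = {"p1": 950, "p2": 1020, "p3": 980, "p4": 1000, "p5": 1600}
print(flag_outliers(touchdowns))  # only p5 stands out from the squadron
```

Note that the flag itself carries no judgment, matching Kulkarni's point: p5's long touchdown might be sloppy flying, or it might be a deliberate technique worth studying.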

Catching errors is important for preserving the safety of the pilots and the aircraft. Spotting innovation allows new techniques to disseminate much faster than waiting for a pilot to complete a career and return as an instructor. It has the potential to move the transfer of knowledge from a generational exchange to a peer exchange. 

The execution of all things

Crowdbotics’ contract with the Air Force is formally for “Standardizing and Optimizing USAF Pilot Training With Machine Learning and Deep Maneuver Analytics.” The tool can analyze flights in a simulator the same way it can analyze flight recorder data, and bring greater understanding of normal operations and deviations to flight analysis. With optimization, it can also break from a one-size-fits-all approach to teaching pilots. Every year, the training squadron takes in 40 to 50 pilots, with the expectation of graduating as many as possible to then serve 10-year commitments in the Air Force. It’s a kind of batch-processing approach to training that endures in giant bureaucratic organizations like the military.

Using specific data from each pilot, the Air Force can instead better allocate instructor time among all those pilots, perhaps identifying those who need more help and focusing on them rather than on teaching the top performers.

The data can help find which pilots need help, which pilots have already demonstrated mastery, and which trainees may be best suited to a different aircraft. It is still early in the program, but having the data means the Air Force can use actual flight analytics to assess the readiness of pilots, supplementing instructor evaluations and hopefully promising better outcomes than existing methods.

This can also apply to getting pilots ready for missions. If a specific mission calls for the F-15Es to be used as ground bombers, a commander could look at the record from her squadron and pick pilots based on how well they have flown those missions in the past. (The F-15 was originally designed as purely an air-to-air fighter, but the two-seater F-15E variant is designed to attack targets on the ground, while retaining the fighting ability of the class.)

Right now, the prototype is “proving that they can recognize and create maneuvers,” says Poppler. “What we chose to start with is really single-ship [one aircraft], unclassified, maneuvers, takeoffs, landings, instrument approaches, loops, kind of more aerobatics, things like that.”

In the future, the tool may be used to examine the more complex maneuvers that show up in training for air-to-air combat. For now, the Crowdbotics tool is building a body of data to capture what regular flight looks like, what outlier flights look like, and what if anything can be done to train pilots to reach the best outcomes in their planes.

“So if you are flying and if you’re coming in too hot, if your angle of attack is too high on the landing, or if in a combat situation, you responded improperly to an attacker or to a defender, or in a way that was suboptimal, the system will tell you that. It’ll identify what you should have done, and then tell you exactly what the deviation was from it,” says Kulkarni. “That’s whether it’s simulated data or real, the only difference between simulator data and real data is you get more data points in the real data stream.”

Should the technology prove valuable to the Air Force, it could expand to other aircraft types, and Crowdbotics is open to exploring it for commercial flight analysis as well. Flying is a data-rich activity, and pilots can gain by learning from that data before they’re in a crash.