It takes up to 24 minutes for a signal to travel between Earth and Mars. If you’re a Mars rover wondering which rock to drill into, that means waiting as long as 48 minutes to send images of your new location to NASA and then receive marching orders. It’s a lot of idle time for a robot that cost $2.6 billion to build.
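For a sense of the arithmetic, here’s a minimal Python sketch. The distances are rough figures for Mars at closest approach and near solar conjunction, and quoted maximum delays vary between about 22 and 24 minutes depending on the geometry assumed:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def one_way_delay_minutes(distance_km: float) -> float:
    """Light-time delay for a signal crossing the given distance."""
    return distance_km / C_KM_PER_S / 60.0

# Approximate Earth-Mars distances: ~55 million km at closest
# approach, ~400 million km near solar conjunction.
for distance_km in (55e6, 400e6):
    one_way = one_way_delay_minutes(distance_km)
    print(f"{distance_km / 1e6:.0f}M km: {one_way:.1f} min one-way, "
          f"{2 * one_way:.1f} min round trip")
```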
That’s why engineers are increasingly giving spacecraft the ability to make their own decisions. Space robots have long been able to control certain onboard systems—to regulate power usage, for example—but artificial intelligence is now giving rovers and orbiters the ability to collect and analyze science data, then decide what info to send back to Earth, without any human input.
Since May 2016, NASA has been testing an autonomous system on the Curiosity rover. A new report shows that the system, named AEGIS (Autonomous Exploration for Gathering Increased Science), is working well and has the potential to accelerate scientific discoveries.
“Right now Mars is entirely inhabited by robots,” says Raymond Francis, who’s part of Jet Propulsion Laboratory’s AEGIS software team, “and one of them is artificially intelligent enough to make its own decisions about what to zap with its laser.”
AEGIS has two jobs. The first is to pick out interesting-looking rocks, then tell Curiosity’s ChemCam to shoot its laser at them, vaporizing tiny patches of rock to analyze their composition. NASA controllers usually program Curiosity to use AEGIS after it drives to a new spot: since it takes a while for Earth to learn what’s around the rover in its new location and send a new command, AEGIS lets the rover take a science measurement while it waits.
“In that post-drive period, the rover has moved, and no one on Earth has seen where it is yet,” says Francis. “So the rover has to be able to make the decision of what to target on Mars, because no one on Earth can be in the loop.”
The scientists taught AEGIS how to recognize bedrock, which interests them because it holds clues to Mars’ past ability to support life. And 93 percent of the time, AEGIS chooses the same target a human would have chosen, but without the hour-or-so lag. That’s a big improvement over past measurements, which had ChemCam choose a target at random while waiting for NASA; those random analyses captured the best target only 24 percent of the time.
It takes AEGIS about 90 to 105 seconds to target, zap, and analyze a rock, so the rover is already finished by the time NASA has new instructions for it. However, the team chooses not to run AEGIS if Curiosity’s batteries are running low or if there’s already too much data waiting to be beamed back to Earth.
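JPL hasn’t published this as pseudocode, but the workflow the team describes, finding candidate rocks, scoring them against the science team’s preferences, and firing only when resources allow, can be sketched in outline. Everything below (the class, function names, scores, and thresholds) is an illustrative assumption, not the actual flight software:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A rock feature found in a post-drive image (illustrative)."""
    x: int                # pixel coordinates in the navcam image
    y: int
    bedrock_score: float  # 0..1, from a trained bedrock classifier

def select_target(candidates, min_score=0.5):
    """Pick the highest-scoring candidate, mimicking how AEGIS ranks
    features against the science team's stated preferences (here,
    bedrock). Returns None if nothing looks worth a laser shot."""
    viable = [c for c in candidates if c.bedrock_score >= min_score]
    return max(viable, key=lambda c: c.bedrock_score, default=None)

def should_run_aegis(battery_fraction, downlink_backlog_mb,
                     min_battery=0.3, max_backlog_mb=500):
    """Resource gate: the team skips AEGIS when power is low or the
    downlink queue is already full (thresholds are made up here)."""
    return (battery_fraction >= min_battery
            and downlink_backlog_mb <= max_backlog_mb)

# Post-drive: no one on Earth has seen this terrain yet.
candidates = [Candidate(412, 305, 0.91), Candidate(128, 350, 0.42)]
if should_run_aegis(battery_fraction=0.8, downlink_backlog_mb=120):
    target = select_target(candidates)
    if target is not None:
        print(f"Firing ChemCam at pixel ({target.x}, {target.y})")
```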
Curiosity’s mission is to understand the history of Mars’ Gale Crater, to figure out whether it was ever capable of sustaining life. “The way to do that is with a long-term survey,” says Francis. “AEGIS makes that survey richer by filling in the gaps… As of last week we’re up to about 90 new locations that have been studied that otherwise would not have been. A lot of those results haven’t been published yet.”
AEGIS’ second job is to correct ChemCam’s aim, assisting its human controllers when they want to analyze a very small feature on a rock. If the shot misses on the first try, the measurement could be lost forever, because the rover may need to drive away soon after, or an entire day’s work might be lost while NASA waits to make a second attempt.
The aim-correction system can slow things down by a few minutes, but on the two occasions NASA has used it, it corrected shots that would otherwise have missed the target, thus saving the day.
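The aim-correction step follows a simple idea: locate the intended fine-scale feature in a fresh image, measure how far the commanded aim point sits from it, and turn that pixel offset into a small pointing adjustment. A hedged sketch, with a made-up pixel scale rather than ChemCam’s real optics:

```python
# Assumed angular size of one image pixel, in milliradians.
# (Illustrative value, not ChemCam's actual optics.)
MRAD_PER_PIXEL = 0.02

def pointing_correction(aim_px, target_px):
    """Convert the pixel offset between the commanded aim point and
    the detected feature into azimuth/elevation corrections (mrad)."""
    dx = target_px[0] - aim_px[0]
    dy = target_px[1] - aim_px[1]
    return dx * MRAD_PER_PIXEL, dy * MRAD_PER_PIXEL

# Ground commanded a shot at (640, 480), but the feature the
# scientists wanted actually sits a dozen pixels away.
d_az, d_el = pointing_correction((640, 480), (652, 471))
print(f"Adjust mast by {d_az:+.2f} mrad az, {d_el:+.2f} mrad el")
```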
The AEGIS software was originally developed for the Opportunity rover in 2010, to help it identify and capture pictures of boulders. Since then, “we’ve improved its ability to discriminate specific materials,” says Francis. The team is also working on adding more flexibility in pointing, selecting targets, and initiating follow-up measurements.
And when NASA’s next rover lands on the red planet in 2020, it will be able to take AEGIS-guided measurements with any of the instruments on its mast. That includes its SuperCam, which is like ChemCam but with added capabilities, like a Raman spectrometer that analyzes crystal structures, and visible and infrared spectrometers that work from a distance. “So we’ll have a whole suite of instruments we can point with AEGIS in 2020,” says Francis.
Beyond Mars
In a second paper in Science Robotics, Steve Chien from Jet Propulsion Laboratory’s Artificial Intelligence Group expounds on how intelligent systems are opening up a new era of space exploration.
Satellites orbiting Earth can already distinguish snow from water and ice, and notice when those things change. They can analyze images as they collect them, detect unusual events like an erupting volcano, a fire, or a flood, and then take action by collecting new images and data. For spacecraft farther from Earth, not waiting for a command makes it easier to study short-lived phenomena like dust devils on Mars or jets of gas erupting from a comet.
Since there isn’t always much bandwidth to send information back home, today’s spacecraft can interpret the data they collect, deciding which information is important enough to send back to Earth.
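Both behaviors, reacting to a detected change and triaging a limited downlink, follow a common pattern. Here’s a rough sketch; the change metric, priorities, and bandwidth budget are all invented for illustration:

```python
import numpy as np

def changed(new_image: np.ndarray, baseline: np.ndarray,
            threshold: float = 0.15) -> bool:
    """Flag an event (eruption, flood, fresh fire scar) when the mean
    per-pixel difference from a baseline image exceeds a threshold."""
    return float(np.abs(new_image - baseline).mean()) > threshold

def plan_downlink(products, budget_mb: float):
    """Greedy triage: send the highest-priority data products that
    fit in the available downlink budget."""
    plan, used = [], 0.0
    for name, size_mb, priority in sorted(products, key=lambda p: -p[2]):
        if used + size_mb <= budget_mb:
            plan.append(name)
            used += size_mb
    return plan

rng = np.random.default_rng(0)
baseline = rng.random((64, 64))
new_image = baseline.copy()
new_image[16:48, 16:48] += 0.8  # a bright anomaly appears

if changed(new_image, baseline):
    # In a real system this would also trigger follow-up imaging;
    # here we just decide what is worth sending home.
    products = [("thumbnail", 2, 0.9), ("full_frame", 120, 0.6),
                ("spectra", 15, 0.8)]
    print("Event detected; downlinking:",
          plan_downlink(products, budget_mb=40))
```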
Not only do A.I. systems reduce idle time, but they can also open up new capabilities. “In the future, orbiters, rovers, and aerial vehicles could autonomously organize and coordinate to better explore distant worlds,” writes Chien. Translation: swarmbots. On other planets.
And without robotic autonomy, exploring worlds like Europa, whose inner ocean might be able to sustain life, would be nearly impossible. In its orbit around Jupiter, this moon is so far from Earth that a signal takes roughly 33 to 54 minutes each way, far worse than the delay to Mars. Making matters more challenging, the radiation at Europa is severe, so a spacecraft would only be able to survive on the moon’s surface for a limited time before its electronics fry. Robots that can make their own decisions might be the key to exploring it.
But Francis cautions that we’ll need a lot more information about this icy moon before we can send an autonomous robot to the surface. “There’s a lot we still don’t know about what the surface of Europa looks like. It would be great to have a better understanding of what that environment looks like, so we can develop vision systems that would work properly there.”
And that’s just on the surface. NASA really wants to take a peek into Europa’s inner ocean, buried beneath miles of ice. Under those conditions, a life-hunting robot would probably need to explore for days, weeks, or months without human input, Chien notes.
“You’re talking about a really, truly unexplored environment,” says Francis, “and there are real challenges to making robotic systems work in places you’ve never been. Particularly if they’re going to choose science measurements on their own. There are ways to try to address that, like looking at how you detect things that are different, and how you classify and learn the different types of things that are in an environment, but those are all developments that still need to be done.”
We’ll face similar problems if we ever send a robotic explorer to Alpha Centauri, our nearest neighboring star system. In that case, any spacecraft would have to explore the system (which may contain a habitable planet) entirely on its own: Alpha Centauri lies about 4.4 light-years away, so a round-trip signal to Earth would take nearly nine years.
There’s clearly still a long way to go before we’ll be capable of launching such a mission, but Chien is hopeful. “Today’s A.I. innovations,” he writes, “are paving the way to make this kind of autonomy a reality.”