A worm’s brain may be teeny tiny, but that small organ has inspired researchers to design better software for drones. Using liquid neural networks, researchers at the Massachusetts Institute of Technology have trained a drone to identify and navigate toward objects in varying environments.
Liquid neural networks, a type of artificial intelligence tool, are unique. They can extrapolate and apply previous data to new environments. In other words, “they can generalize to situations that they have never seen,” Ramin Hasani, a research affiliate at MIT and one of the co-authors on a new study on the topic, says. The study was published in the journal Science Robotics on April 19.
Neural networks are software inspired by how neurons interact in the brain. The type examined in this study, the liquid neural network, can adapt flexibly in real time as new information arrives—hence the name “liquid.”
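That “liquid” adaptability comes from modeling each neuron as a differential equation whose effective time constant changes with the input. As a rough illustration only (not the authors’ implementation, and with arbitrary parameter values), a single liquid time-constant cell can be simulated with a simple Euler step:

```python
import numpy as np

def ltc_step(x, inputs, dt=0.01, tau=1.0, W=None, A=1.0):
    """One Euler step of a toy liquid time-constant (LTC) cell.

    The state x relaxes toward rest with base time constant tau,
    while an input-dependent gate f both drives the state and
    changes its effective time constant, so the cell's dynamics
    adapt to the incoming signal rather than staying fixed.
    """
    if W is None:
        W = np.ones_like(inputs)         # placeholder weights for the sketch
    f = np.tanh(W @ inputs)              # input-dependent gate
    dx = -x / tau + f * (A - x)          # note: (A - x) makes the decay rate input-dependent
    return x + dt * dx

# Drive one cell with a constant input and let it settle to its fixed point.
x = 0.0
for _ in range(2000):
    x = ltc_step(x, np.array([0.5]))
print(round(x, 3))  # prints 0.316, the fixed point f/(1 + f)
```

With a constant input the state settles to a steady value, but because the gate `f` depends on the input, a changing input shifts both where the cell settles and how fast it gets there, which is the adaptive behavior the “liquid” label refers to.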
The researchers’ network was modeled after a 2-millimeter-long worm, Caenorhabditis elegans. Naturally, it has a small brain: just 302 neurons and roughly 8,000 synaptic connections, few enough for researchers to trace the intricacies of its neural wiring. A human brain, by contrast, has an estimated 86 billion neurons and 100 trillion synapses.
“We wanted to model the dynamics of neurons, how they perform, how they release information, one neuron to another,” Hasani says.
These robust networks enable the drone to adapt in real time, even after initial training, allowing it to identify a target object despite changes in its environment. The liquid neural networks yielded a success rate of over 90 percent in reaching their target across varying environments and demonstrated flexible decision-making.
Using this technology, people might be able to accomplish tasks such as automating wildlife monitoring and search and rescue missions, according to the researchers.
Researchers first taught the software to identify and fly towards a red chair. After the drone—a DJI quadcopter—demonstrated this ability from 10 meters (about 33 feet) away, researchers incrementally increased the starting distance. To their surprise, the drone slowly approached the target chair from distances as far as 45 meters (about 148 feet).
“I think that was the first time I thought, ‘this actually might be pretty powerful stuff’ because I’d never seen [the network piloting the drone] from this distance, and it did it consistently,” Makram Chahine, co-author and graduate researcher at MIT, says. “So that was pretty impressive to me.”
After the drone successfully flew toward objects at various distances, the researchers tested its ability to pick out the red chair from among other chairs on an urban patio. Being able to correctly distinguish the chair from similar stimuli proved that the system could understand the actual task, rather than simply navigating towards red pixels against a background.
For example, instead of a red chair, drones could be trained to identify whales against the backdrop of the ocean, or people stranded in the aftermath of a natural disaster.
“Once we verified that the liquid networks were capable of at least replicating the task behavior, we then tried to look at their out-of-domain performance,” Patrick Kao, co-author and undergraduate researcher at MIT, says. They tested the drone’s ability to identify a red chair in both urban and wooded environments, in different seasons and lighting conditions. The network still proved successful, displaying versatile use in diverse surroundings.
They tested two liquid neural networks against four non-liquid neural networks, and found that the liquid networks outperformed the others in every area. It’s too early to say exactly what makes liquid neural networks so successful. One hypothesis, the researchers say, is their ability to capture causality, or cause-and-effect relationships, which lets the liquid network focus on the target chair and navigate toward it regardless of the surrounding environment.
The system is complex enough to complete tasks such as identifying an object and then moving itself towards it, but not so complex that researchers cannot understand its underlying processes. “We want to create something that is understandable, controllable, and [artificial general intelligence], that’s the future thing that we want to achieve,” Hasani says. “But right now we are far away from that.”
AI systems have been the subject of recent controversy, with concerns about safety and over-automation, but the researchers say fully understanding their technology’s capabilities isn’t just a priority—it’s the point.
“Everything that we do as a robotics and machine learning lab is [for] all-around safety and deployment of AI in a safe and ethical way in our society, and we really want to stick to this mission and vision that we have,” Hasani says.