MIT Tests Thinking SeaBots

In the ocean depths off the western coast of Australia, a robot captain was listening to its subordinates. This isn’t the plot of a science fiction story. It’s part of a research project aimed at exploring our seas more thoroughly. For three weeks, MIT engineers tested several autonomous underwater robots running decision-making software inspired by the way a starship crew functions.

Space is a good place to look for autonomous-bot inspiration. Not because autonomous systems are already out there, but because the demands of space travel push expeditionary vessels to carry all their thinking on board. Brian Williams, a professor of aeronautics and astronautics at MIT who worked on the system, says their underwater bots were modeled after the crew hierarchy found in Star Trek, and it’s not hard to see why. On board the Enterprise, a captain solicits input from specialized subordinates and then makes the call in accordance with the mission (Picard) or in spite of it (Kirk).

Under MIT’s system, the robot is given a set of constraints, a main objective, and a hierarchy of programs tasked with fulfilling that objective. All of this planning happens on board, which means autonomous underwater vehicles using the system can operate far beyond the reach of the remote-controlled tethers that shipboard operators currently rely on.
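To make the idea concrete, here is a minimal sketch of how such an onboard hierarchy might be organized: a “captain” program checks each task against the mission constraints and delegates execution to specialized subordinates. The class names, constraints, and tasks are illustrative assumptions, not MIT’s actual software.

```python
# Illustrative sketch only -- not MIT's actual system.
# A "captain" picks the next task that satisfies the mission constraints,
# then hands execution to a subordinate program.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Task:
    name: str
    depth_m: float            # planned operating depth
    battery_cost: float       # fraction of battery the task consumes
    run: Callable[[], None]   # subordinate program that executes the task


@dataclass
class Constraints:
    max_depth_m: float
    min_battery_reserve: float  # never dip below this fraction of battery


class Captain:
    """Top of the hierarchy: approves tasks, subordinates carry them out."""

    def __init__(self, constraints: Constraints, battery: float = 1.0):
        self.constraints = constraints
        self.battery = battery

    def feasible(self, task: Task) -> bool:
        # Reject tasks that would violate the depth or battery constraints.
        return (task.depth_m <= self.constraints.max_depth_m
                and self.battery - task.battery_cost
                    >= self.constraints.min_battery_reserve)

    def execute_mission(self, objective: List[Task]) -> None:
        for task in objective:
            if self.feasible(task):
                print(f"Captain approves: {task.name}")
                task.run()
                self.battery -= task.battery_cost
            else:
                print(f"Captain skips {task.name}: constraint violation")


if __name__ == "__main__":
    survey = [
        Task("map seafloor ridge", depth_m=800, battery_cost=0.3,
             run=lambda: print("  sonar subordinate mapping...")),
        Task("sample deep vent", depth_m=2500, battery_cost=0.4,
             run=lambda: print("  sampler subordinate collecting...")),
        Task("return to surface", depth_m=0, battery_cost=0.1,
             run=lambda: print("  navigation subordinate surfacing...")),
    ]
    captain = Captain(Constraints(max_depth_m=1000, min_battery_reserve=0.2))
    captain.execute_mission(survey)
```

Because all of this logic runs on the vehicle itself, the robot can skip or reorder tasks on its own when conditions change, rather than waiting for instructions from the surface.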

Autonomous machines that can explore without direct human control could usher in a whole new age of discovery. The National Oceanic and Atmospheric Administration estimates that “95 percent of this realm remains unexplored, unseen by human eyes.” Boldly go, robots.