When we last checked in with the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, Switzerland, its evolving robots had learned how to deceive other robots about the location of a resource. Since then, the robots have continued to evolve: learning to navigate a maze, beginning to cooperate and share, and even developing complex predator-prey interactions.
As before, the Swiss scientists placed in each robot’s operating system both basic instructions and random variations that changed every generation through virtual mutations. After each trial, the code of the more successful robots was passed on to the next generation, while the code of the less successful robots was bred out.
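The mutate-and-select loop described above can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the lab's actual code: each "genome" is a list of numeric control weights, the fitness function stands in for a trial, and mutation adds small random noise.

```python
import random

random.seed(1)  # make this toy run reproducible

def mutate(genome, rate=0.1, scale=0.2):
    """Return a copy of a genome with some weights randomly perturbed
    (the 'virtual mutations' applied each generation)."""
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in genome]

def next_generation(population, fitness, keep=0.25):
    """Breed mutated copies of the most successful robots' code;
    the least successful code is simply not copied forward."""
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:max(1, int(len(population) * keep))]
    return [mutate(random.choice(parents)) for _ in population]

# Toy stand-in for a trial: fitness rewards control weights summing near 10.
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
fitness = lambda g: -abs(sum(g) - 10)

start_best = max(map(fitness, population))
for _ in range(200):
    population = next_generation(population, fitness)
```

After a couple of hundred generations, the best genome in the population scores noticeably better than anything in the random starting pool, which is the whole trick: no robot is ever told how to improve, yet the population improves anyway.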
This time, however, the researchers built a whole new menagerie of robots, including hunter robots that pursue prey-bots, maze-running robots, and robots that deposit a token in a given area.
For the first experiment, the scientists created two sets of bots: predator bots with better eyesight, and prey bots with more speed. Initially, the predator was programmed only to find the prey and drive towards it, while the prey was programmed only to move away when it detected the predator. At first, the robots just bounced towards and away from each other randomly. But over 125 generations, the hunter-bots learned to approach the prey from blind spots and to lie in wait against the walls, while the prey-bots learned to stay away from the walls and to retreat with their sensors facing the hunter-bots, keeping the danger in sight.
In the maze experiment, robots with six sensors on one side and two on the other started with the basic programming of running the maze, reproducing less if their sensors were triggered by bumping into a wall. After fewer than 100 generations, the robots had not only evolved the ability to navigate the maze without any wall collisions, but had even learned to keep the side with more sensors facing the direction of travel.
In the final experiment, the scientists created robots that earned points for placing tokens in a marked area. The more points, the more offspring. The catch was that there were two types of token: one small enough for a single robot to push, but worth fewer points, and a bigger one that required two robots to move, but was worth more. Not only did the robots evolve to help each other, but, as in nature, they evolved to help only robots from the same code lineage, a trait biologists call "kin selection."
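The kin selection the robots rediscovered has a classic formalization in biology, Hamilton's rule: an altruistic act is favored by selection when relatedness times benefit exceeds cost (r·b > c). A minimal sketch (the numbers are illustrative, not measurements from the experiment):

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's rule: helping is favored when r * b > c,
    where r is how related the helper is to the helped."""
    return relatedness * benefit > cost

# Helping a robot that shares your code lineage (high relatedness)
# can pay off even at a cost to yourself...
kin = altruism_favored(0.5, 3.0, 1.0)
# ...while helping an unrelated robot, at the same cost and benefit,
# does not.
stranger = altruism_favored(0.0, 3.0, 1.0)
```

This is why the token-pushers ended up choosy about their partners: cooperation that benefits a copy of your own code is, from evolution's point of view, just a roundabout way of helping yourself.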
Most remarkable of all, the code for the robots in every experiment was strikingly short. In the token-moving experiment, the robots had the programming equivalent of just 15 neurons. By coaxing such complex behavior out of so little programming, the Laboratory of Intelligent Systems team showed, once again, that some of nature's most complex behaviors are emergent phenomena that grow out of very simple instructions.