
For many, the word “apprentice” brings to mind the whimsical scene from Disney’s Fantasia in which Mickey Mouse, the sorcerer’s hapless apprentice, makes a mess of his master’s workshop when the brooms and buckets come to life in a magical musical number. The helicopter apprenticeship at Stanford University has a similar air of unreality, as helicopters learn to fly and execute complex aerial tricks without a human pilot in the cockpit. In this case, however, there is science behind the magic.

Computer scientists at Stanford University have developed an artificial intelligence system that learns to perform in-flight maneuvers by watching human-operated helicopters do the same. Rather than have computer programmers, laymen when it comes to piloting aircraft, labor away translating exact flying instructions into code, the team lets its helicopters learn from a human expert, in this case radio control pilot Garett Oku. The scientists had Oku fly a complete airshow several times over while they recorded every movement of the helicopter, then distilled those recordings into an algorithm they could “teach” to the robotic helicopter. Unlike a human pilot, the learned routine doesn’t vary from one run to the next, so the robotic helicopter can perform the maneuvers more reliably than Oku himself.
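For the programming-inclined, the core idea can be sketched in a few lines of Python. The numbers and names below are purely illustrative, not Stanford’s actual code; the point is simply that several recordings of the same maneuver can be combined into one smooth “intended” trajectory for the robot to follow.

```python
import numpy as np

# Three made-up recordings of the same maneuver, each a short sequence of
# helicopter states (just x, y, z position here for illustration),
# already resampled to a common length.
demo_a = np.array([[0.0, 0.0, 10.0], [1.0, 0.2, 10.5], [2.0, 0.1, 11.0]])
demo_b = np.array([[0.1, -0.1, 10.1], [1.1, 0.1, 10.4], [2.1, 0.0, 11.1]])
demo_c = np.array([[-0.1, 0.1, 9.9], [0.9, 0.3, 10.6], [1.9, 0.2, 10.9]])

# Averaging the aligned demonstrations smooths out the pilot's run-to-run
# variation, yielding a single target trajectory for the controller to track.
target_trajectory = np.mean([demo_a, demo_b, demo_c], axis=0)
print(target_trajectory)
```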

The recap of a recent test run in Palo Alto sounds like an intricate ballet: The robotic helicopter performed a five-minute airshow, replete with traveling flips, rolls, loops with pirouettes, stall-turns with pirouettes, a knife-edge, an Immelmann, a slapper, an inverted tail slide, and something called a “hurricane.” The helicopter was even able to execute the “tic toc,” a move in which it hovers side-to-side like a pendulum while its nose points directly upward. Not only is this more than any other robotic helicopter has been able to do previously, it also exceeds the capabilities of a full-scale, human-piloted helicopter.

Though the robot’s tricks may seem effortless, there is a lot of work being done behind the scenes, and a lot of risk. A helicopter is an inherently unstable machine: left without instructions for even a moment, it will tip and crash. That makes the computer code extremely complicated to write. Early attempts worked fine for simple moves but couldn’t handle the hurricane. Having the robot copy Oku’s exact joystick movements didn’t cut it either, since that method couldn’t account for unpredictable variables like wind gusts. In the end, the solution was to infer the maneuver Oku intended from his repeated demonstrations and have the helicopter continually steer itself back toward that idealized trajectory, rather than mimic any single flight.
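The difference between the two approaches can be shown with another hypothetical sketch (again, not the team’s actual code, and every function name here is made up): replaying recorded joystick inputs is open-loop, while tracking the learned trajectory lets a controller correct for disturbances like wind at every step.

```python
def replay_joystick(recorded_commands, send):
    """Open-loop: blindly replay the pilot's stick inputs.
    One wind gust knocks the helicopter off course and nothing corrects it."""
    for command in recorded_commands:
        send(command)

def track_trajectory(target_states, read_state, controller, send):
    """Closed-loop: at each step, compare the helicopter's actual state with
    where the learned trajectory says it should be, and command a correction."""
    for desired in target_states:
        actual = read_state()
        send(controller(desired, actual))
```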

The robotic helicopter, painted in Stanford colors, is a juiced-up version of a standard ’copter, carrying accelerometers, gyroscopes and magnetometers (the magnetometers deduce which way the helicopter is pointed). With these, plus instruments on the ground, the helicopter’s position, direction, orientation, velocity, acceleration and spin are continuously monitored throughout the flight. New flight commands are sent from a lightning-fast ground-based computer to the helicopter by radio 20 times per second.
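A rough sketch of that ground-station loop, with hypothetical function names standing in for the real sensor and radio interfaces, looks something like this:

```python
import time

CONTROL_RATE_HZ = 20          # the article's figure: 20 commands per second
DT = 1.0 / CONTROL_RATE_HZ    # 50 ms per cycle

def control_loop(read_sensors, compute_command, send_over_radio, duration_s=300):
    """Ground-station loop: read the fused sensor state, compute a correction,
    and radio it to the helicopter every 50 ms for a five-minute airshow."""
    steps = int(duration_s * CONTROL_RATE_HZ)
    for _ in range(steps):
        start = time.monotonic()
        state = read_sensors()              # position, orientation, velocity, spin...
        command = compute_command(state)    # steer back toward the learned trajectory
        send_over_radio(command)
        # Sleep out the rest of the 50 ms cycle to hold the 20 Hz rate.
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```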

Future applications will likely send these auto-copters straight into the danger zone: searching for land mines in war zones, or pinpointing wildfire hot spots for California firefighters in real time so they don’t have to rely on information that can be several hours old, as they often do now.

For now, the Stanford apprentice program is making strides (or you could say, loops with pirouettes) towards these goals. If helicopters can careen themselves artfully through the air, then precise robotic mops and brooms can’t be too far behind, allowing humans to get back to more pressing activities, like watching the rest of Fantasia.

[Via Stanford News Service]