Something incredible is happening in a lab at Duke University’s Center for Neuroengineering—though, at first, it’s hard to see just what it is. A robot arm swings from side to side, eerily lifelike, as if it were trying to snatch invisible flies out of the air. It pivots around and straightens as it extends its mechanical hand. The hand clamps shut and squeezes for a few seconds, then relaxes its grip and pulls back to shoot out again in a new direction. OK, nothing particularly astonishing here—robot arms, after all, do everything from building our cars to sequencing our DNA. But those robot arms are operated by software; the arm at Duke follows commands of a different sort. To see where those commands are coming from, you have to follow a tangled trail of cables out of the lab and down the hall to another, smaller room.
Inside this room sits a motionless macaque monkey.
The monkey is strapped in a chair, staring at a computer screen. On the screen a black dot moves from side to side; when it stops, a circle widens around it. You wouldn’t know just from watching, but that dot represents the movements of the arm in the other room. The circle indicates the squeezing of its robotic grip; as the force of the grip increases, the circle widens. In other words, the dot and the circle are responding to the robot arm’s movements. And the arm? It’s being directed by the monkey.
Did I mention the monkey is motionless?
Take another look at those cables: They snake into the back of the computer and then out again, terminating in a cap on the monkey’s head, where they receive signals from hundreds of electrodes buried in its brain. The monkey is directing the robot with its thoughts.
For decades scientists have pondered, speculated on, and pooh-poohed the possibility of a direct interface between a brain and a machine—only in the late 1990s did researchers learn enough about the brain and signal processing to offer glimmers of hope that this science-fiction vision could become reality. Since then, insights into the workings of the brain—how it encodes commands for the body, and how it learns to improve those commands over time—have piled up at an astonishing pace, and the researchers at Duke studying the macaque and the robotic arm are at the leading edge of the technology. "This goes way beyond what’s been done before," says neuroscientist Miguel Nicolelis, co-director of the Center for Neuroengineering. Indeed, the performance of the center’s monkeys suggests that a mind-machine merger could become a reality in humans very soon.
Nicolelis and his team are confident that in five years they will be able to build a robot arm that can be controlled by a person with electrodes implanted in his or her brain. Their chief focus is medical—they aim to give people with paralyzed limbs a new tool to make everyday life easier. But the success they and other groups of scientists are achieving has triggered broader excitement in both the public and private sectors. The Defense Advanced Research Projects Agency has already doled out $24 million to various brain-machine research efforts across the United States, the Duke group among them. High on DARPA’s wish list: mind-controlled battle robots, and airplanes that can be flown with nothing more than thought. You were hoping for something a bit closer to home? How about a mental telephone that you could use simply by thinking about talking?
The notion of decoding the brain’s commands can seem, on the face of it, to be pure hubris. How could any computer eavesdrop on all the goings-on that take place in there every moment of ordinary life?
Yet after a century of neurological breakthroughs, scientists aren’t so intimidated by the brain; they treat it as just another information processor, albeit the most complex one in the world. "We don’t see the brain as being a mysterious organ," says Craig Henriquez, Nicolelis’s fellow co-director of the Center for Neuroengineering. "We see 1s and 0s popping out of the brain, and we’re decoding it."
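The article doesn’t spell out how that decoding works, but experiments of this kind have often used simple linear models: treat each neuron’s firing rate as an input, and fit weights that map the population’s activity to the arm’s position. Here is a minimal sketch of that idea on synthetic data—the neuron counts, noise levels, and the linearity assumption are all illustrative, not a description of the Duke group’s actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: firing rates of 100 neurons recorded over 500 time bins.
# We pretend the true 3-D hand position is a linear function of those rates
# plus noise, so a linear decoder is the right tool for this toy problem.
n_neurons, n_bins = 100, 500
rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
true_weights = rng.normal(size=(n_neurons, 3))           # maps rates -> (x, y, z)
positions = rates @ true_weights + rng.normal(scale=0.5, size=(n_bins, 3))

# "Train" the decoder on the first 400 bins: find W minimizing
# || rates @ W - positions ||^2 by ordinary least squares.
train = slice(0, 400)
W, *_ = np.linalg.lstsq(rates[train], positions[train], rcond=None)

# Decode hand position for the held-out 100 bins and measure the error.
test = slice(400, None)
pred = rates[test] @ W
rmse = np.sqrt(np.mean((pred - positions[test]) ** 2))
```

On this synthetic data the held-out error ends up close to the injected noise level, which is the point: once the mapping from spikes to movement is roughly linear, decoding reduces to a routine regression problem.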