
Imagine you’re lifting a couch with a friend. You’re at opposite ends, and you need to coordinate when to heft it up. You could go for it on the count of three, or maybe, if you’re mentally in sync, with a nod of the head.

Now let’s say you’re doing the same with a robot: what’s the best way to tell it what to do, and when? Roboticists at MIT have created a mechanical system that can help humans lift objects, and it works by directly reading the electrical signals produced by a person’s biceps.

It’s a noteworthy approach because it’s not the standard way most people interact with technology. We’re used to talking to assistants like Alexa or Siri, tapping on smartphones, or using a keyboard, mouse, or trackpad. The Google Nest Hub Max, a smart home tablet with a camera, can recognize a “stop” hand gesture when a user wants to do something like pause a video. Meanwhile, robot cars, otherwise known as autonomous vehicles, perceive their surroundings through instruments like lasers, cameras, and radar units.

But none of those robotic systems measures a person’s flex the way this bot does. And when a person is lifting an object, listening for voice commands or watching through cameras may not be the best way for a robot to know when to lift, and how high.

The bicep-sensing robot works thanks to electrodes that are literally stuck onto a person’s upper arm and connected with wires to the robot. “Overall the system aims to make it easier for people and robots to work together as a team on physical tasks,” says Joseph DelPreto, a doctoral candidate at MIT who studies human-robot interaction, and the first author of a paper describing the system. Working together well usually requires good communication, and in this case, that communication stems straight from your muscles. “As you’re lifting something with the robot, the robot can look at your muscle activity to get a sense of how you’re moving, and then it can try to help you.”

The robot responds to your muscle signals in two basic ways. At its simplest, the robot senses the signals, called EMG signals, from your biceps as you move your arm up or down, and then mirrors you. You can also flex your biceps without actually moving your arm, tensing the muscle or relaxing it, to instruct the robot hand to move up or down.
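To make that mirroring mode concrete, here is a minimal, purely illustrative Python sketch of threshold-based EMG control. The thresholds, step size, and the mirror_command function are hypothetical stand-ins for this article, not details from the MIT system, which presumably uses more sophisticated signal processing.

```python
# Hypothetical parameters; the real system's values aren't published here.
FLEX_THRESHOLD = 0.6   # normalized biceps activation above which the muscle counts as "tensed"
RELAX_THRESHOLD = 0.2  # below this, the muscle counts as relaxed
STEP = 0.01            # meters to move the gripper per control cycle


def mirror_command(biceps_emg: float, gripper_height: float) -> float:
    """Map a normalized biceps EMG level (0..1) to a new gripper height.

    Stronger activation raises the gripper; relaxation lowers it.
    """
    if biceps_emg > FLEX_THRESHOLD:
        return gripper_height + STEP   # muscle tensed: move up
    if biceps_emg < RELAX_THRESHOLD:
        return gripper_height - STEP   # muscle relaxed: move down
    return gripper_height              # in between: hold position


# Example: a moderately strong flex nudges the gripper upward.
new_height = mirror_command(biceps_emg=0.8, gripper_height=0.50)  # -> roughly 0.51
```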

The system also interprets more subtle motions, something it can do thanks to artificial intelligence. To tell the robotic arm to move up or down in a more nuanced way, a person wearing the electrodes on their upper arm can move their wrist slightly up twice, or down once, and the bot does their bidding. To accomplish this, DelPreto used a neural network, an AI system that learns from data. The neural network interprets the EMG signals coming from the human’s biceps and triceps, analyzing what it sees some 80 times per second, and then telling the robot arm what to do.
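For readers curious what that kind of classifier might look like in code, here is a rough, hypothetical sketch in Python using PyTorch: a tiny network that labels a short window of two-channel (biceps and triceps) EMG as a wrist-up gesture, a wrist-down gesture, or no gesture. The window size, architecture, and class labels are assumptions for illustration only; this is not the network described in DelPreto’s paper.

```python
import numpy as np
import torch
import torch.nn as nn

WINDOW = 40        # EMG samples per window (hypothetical)
CHANNELS = 2       # biceps + triceps
CLASSES = 3        # wrist-up gesture, wrist-down gesture, no gesture


class GestureNet(nn.Module):
    """A deliberately small classifier over one window of EMG samples."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                       # (batch, CHANNELS, WINDOW) -> (batch, CHANNELS * WINDOW)
            nn.Linear(CHANNELS * WINDOW, 64),
            nn.ReLU(),
            nn.Linear(64, CLASSES),
        )

    def forward(self, x):
        return self.net(x)


model = GestureNet()
model.eval()


def classify(window: np.ndarray) -> int:
    """Return the predicted gesture class (0=up, 1=down, 2=none) for one EMG window.

    In a live system, something like this would run roughly 80 times per second
    on the most recent window of samples streaming off the electrodes.
    """
    x = torch.from_numpy(window).float().unsqueeze(0)   # shape: (1, CHANNELS, WINDOW)
    with torch.no_grad():
        logits = model(x)
    return int(logits.argmax(dim=1))


# Example with random stand-in data; real input would be filtered EMG samples.
fake_window = np.random.randn(CHANNELS, WINDOW)
print(classify(fake_window))
```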

It’s easy to see how a system like this could help anyone tasked with doing physical labor, and this research was partially funded by Boeing. “We can see this being used for factories, [or] construction areas where you’re lifting big or heavy objects in teams,” says DelPreto. Of course, factories already commonly incorporate robots; for example, a General Motors foundry in Michigan uses robotic systems to help with jobs that are heavy, dangerous, or both, such as holding the mold for an engine block up to the spot where hot liquid aluminum flows into it. That’s a job a person can’t, and shouldn’t, do.

But the MIT system would allow for an even more direct, and perhaps more intuitive, connection between humans and machines when they’re doing something like lifting an object together. After all, humans and robots excel at different kinds of tasks. “The closer you can have the person and robot working together, the more effective that synergy can be,” DelPreto says.