Instead of envisioning robots as either mindless slaves or potential overlords, couldn’t we just figure out how to all work together? Cognitive scientists, neuroscientists, and psychologists are teaming up with roboticists to do just that: developing teamwork robots that know how to read their partner’s actions and intentions and predict what he or she will do next as they complete tasks together.

The multidisciplinary team of the EU-funded Joint Action Science and Technology (JAST) project first analyzed human-human collaboration to see what kinds of behavior and brain function are at play when we work (or don’t work) well together, and then applied that research to building proactive robots.

The eventual aim is to build robots that can ask questions, discuss and explore possibilities, assess their companions’ ideas, and anticipate what their partners might do next.

The guiding theory behind the robots was the idea of “mirror neurons,” the neurons that are activated when people observe others doing an activity. These neurons resonate as if they were mimicking the activity; the brain learns by copying what is going on. In the JAST project, a similar resonance was discovered during joint tasks: people observe their partners and the brain copies their action to try to make sense of it.

Then the roboticists got to work building robots that could observe and mirror human behavior. In most cases, they built robots that already knew how to complete the task at hand — the researchers wanted to test how they would, as researcher Wolfram Erlhagen from the University of Minho said, “observe behavior, map it against the task, and quickly learn to anticipate [partner actions] or spot errors when the partner does not follow the correct or expected procedure.”
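Erlhagen’s description suggests a simple control loop: the robot already holds a model of the task, checks each observed partner action against it, and either anticipates the next step or flags a deviation. A minimal sketch of that idea, with an invented toy-assembly plan and action names (this is an illustration, not the JAST project’s actual code):

```python
# Hypothetical sketch: a robot that knows the task plan compares each
# observed partner action to the expected step, anticipating the next
# step on a match and flagging a mismatch as a possible error.

TASK_PLAN = ["pick_base", "attach_wheel", "attach_wheel", "fit_cabin"]  # assumed steps

def monitor(observed_actions, plan=TASK_PLAN):
    """Yield ('anticipate', next_step) events, or an ('error', observed,
    expected) event when the partner deviates from the plan."""
    for step, observed in enumerate(observed_actions):
        expected = plan[step]
        if observed != expected:
            yield ("error", observed, expected)
            return  # stop and let the robot query the partner
        if step + 1 < len(plan):
            yield ("anticipate", plan[step + 1])

# Partner skips the second wheel, so the robot spots the error:
for event in monitor(["pick_base", "attach_wheel", "fit_cabin"]):
    print(event)
```

In a real system the "observed action" would come from vision and the plan would allow alternative orderings, but the anticipate-or-flag logic is the core of what the researchers describe.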

Then they set the robots loose in unstructured interactions with humans. The task: building a complicated model toy. In one scenario, the robot was the “teacher,” guiding and collaborating with human partners; in another, the robot and the human were on equal terms.

The aim was to see if the robot could figure out on its own what to do without being told. By observing how its human partner grasped a tool or model part, for example, the robot was able to predict how its partner intended to use it. Clues like these helped the robot to anticipate what its partner might need next. “Anticipation permits fluid interaction,” says Erlhagen. “The robot does not have to see the outcome of the action before it is able to select the next item.”
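The grasp-to-intention inference described here can be pictured as a lookup from (object, grasp type) to a likely intention, which in turn tells the robot what its partner will need next. The grasp names and mapping below are invented for illustration; real systems classify grasps from vision:

```python
# Hypothetical illustration: inferring a partner's intention from how an
# object is grasped, so the robot can select the next item early.

GRASP_TO_INTENTION = {
    ("wrench", "power_grip"): "tighten_bolt",   # firm grip: about to use it
    ("wrench", "precision_grip"): "hand_over",  # fingertip grip: passing it
    ("wheel", "side_grip"): "attach_wheel",
}

NEXT_ITEM = {"tighten_bolt": "bolt", "attach_wheel": "axle"}  # assumed part needs

def anticipate(obj, grasp):
    """Predict the partner's intention and the item they will need next."""
    intention = GRASP_TO_INTENTION.get((obj, grasp), "unknown")
    return intention, NEXT_ITEM.get(intention)

print(anticipate("wrench", "power_grip"))  # ('tighten_bolt', 'bolt')
```

This is why anticipation permits fluid interaction: the prediction is available before the action finishes, so the robot can reach for the bolt while the partner is still raising the wrench.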

The robots were also programmed to deal with suspected errors and to seek clarification when their partners’ intentions were ambiguous. For example, if one piece could be used to build three different structures, the robot had to ask which structure its partner had in mind.
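The clarification behavior amounts to checking how many target structures an observed part is consistent with, and asking a question whenever there is more than one. A small sketch with invented part and structure names (not the project’s actual dialogue system):

```python
# Hypothetical sketch of the clarification step: if an observed part fits
# several target structures, the robot asks rather than guesses.

STRUCTURES = {
    "tower":  ["red_cube", "blue_slat"],
    "bridge": ["red_cube", "green_bar"],
    "house":  ["red_cube", "roof_piece"],
}

def react_to(part):
    """Act if the part is unambiguous; otherwise ask a clarifying question."""
    candidates = [name for name, parts in STRUCTURES.items() if part in parts]
    if len(candidates) == 1:
        return f"fetching next part for the {candidates[0]}"
    return f"Which do you want to build: {', '.join(sorted(candidates))}?"

print(react_to("red_cube"))   # ambiguous: the cube fits all three structures
print(react_to("green_bar"))  # unambiguous: only the bridge uses it
```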