
Some things are just harder for robots to do. You know, things like appreciating literature, or writing music. Or grasping objects with their fingers.

There are a lot of elements that go into grasping. A general grasping robot has to first figure out the important properties of what it’s trying to hold, using whatever sensors it has. Then it has to come up with different strategies for differently-shaped objects, making decisions about where to put its fingers and how hard to grip. Several robots are pretty great at grasping already, but researchers continue to try to find ways to improve robot grips.

One way engineers deal with this problem is by writing a robot’s algorithms to learn from experience. The robot spends some time in a lab, successfully and unsuccessfully picking things up and figuring out the rules for grasping on its own. Studies have shown this works well, but a team of engineers at Oregon State University decided they wanted to come up with a quicker and easier way to teach robots how to grab. Their solution: providing feedback to the robot from strangers online.

The Oregon team paid Internet users to rate photos of possible grasps on a scale of 1 to 5, according to how secure each grasp looked. The grasps covered 522 finger configurations across nine everyday objects. The team also had a commercially available, three-fingered robot hand try the same grasps, picking objects up and shaking them to check how secure each grip was.

The robot that learned from picking up actual objects did better than the robot that learned via crowdsourcing, but only by a little, the Oregon team reports. Measured by the area under the ROC curve, where a perfect score is 1, the lab-trained robot scored 0.766, while the crowd-trained robot scored 0.659.
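For readers curious about that metric: the area under the ROC curve captures how well a predicted grasp quality separates grips that hold from grips that slip, with 0.5 no better than guessing. As a rough sketch (not the Oregon team's code, and using made-up numbers), it can be computed in Python with scikit-learn:

```python
# Illustrative only: hypothetical grasp data, not the Oregon study's.
from sklearn.metrics import roc_auc_score

# 1 = the robot's shake test showed the grip held, 0 = the object slipped.
grip_held = [1, 0, 1, 1, 0, 1, 0, 0]

# Predicted grasp quality, e.g. an average 1-to-5 crowd rating rescaled to 0-1.
predicted_quality = [0.9, 0.4, 0.7, 0.8, 0.6, 0.5, 0.2, 0.3]

# Area under the ROC curve: 1.0 is a perfect ranking, 0.5 is chance.
print(roc_auc_score(grip_held, predicted_quality))
```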

This isn’t the first time researchers have tried teaching robots using feedback from Internet strangers. We’ve previously reported on a team of engineers that taught their robots to make cute block shapes via Mechanical Turk feedback.

The Oregon team members will present their work at a conference hosted by the Association for the Advancement of Artificial Intelligence in November.