Robots are all thumbs. The human hand is remarkably complex, and although we’ve seen some interesting attempts at replicating it, we’re not quite there yet. Instead, some engineers are teaching robots to make do with what they have.
Two MIT students recently unveiled algorithms that robots could use to “think” their way through picking up and placing an object, and they used PR2, picker-upper ’bot extraordinaire, to demonstrate. The first algorithm, from PhD student Jennifer Barry, shows a robot how to push objects toward the edge of a table so it can grab them more easily. The algorithm focuses on the object itself, ignoring most of the many dimensions of motion the robot’s arm has to work in.
From the MIT release:
Add in a three-dimensional object with three different axes of orientation, which the robot has to push across a table, and the size of the search space swells to 16 dimensions, which is too large to search efficiently. Barry’s first step was to find a concise way to represent the physical properties of the object to be pushed — how it would respond to different forces applied from different directions. Armed with that description, she could characterize a much smaller space of motions that would propel the object in useful directions. “This allows us to focus the search on interesting parts of the space rather than simply flailing around in 16 dimensions,” she says. Finally, after her modification of the motion-planning algorithm, she had to “make sure that the theoretical guarantees of the planner still hold,” she says.
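To make the idea concrete, here’s a toy sketch of that dimensionality trick — this is my own illustration, not Barry’s actual planner. Rather than searching the full 16-dimensional space of arm motions, it uses a simple model of how the object responds to a push (the `push` function below is a made-up stand-in for the real physical description) and searches only over a handful of candidate push directions:

```python
# A minimal sketch of search-space reduction for push planning.
# Instead of exploring every possible arm motion, we characterize how
# the object responds to pushes and search only over push directions.
import math

def push(state, direction, step=0.05):
    """Predict the object's state after a short push in `direction` (radians).
    Simplified response model: the object slides along the push direction
    and rotates slightly -- a stand-in for a measured physical model."""
    x, y, theta = state
    return (x + step * math.cos(direction),
            y + step * math.sin(direction),
            theta + 0.1 * math.sin(direction - theta))

def plan_pushes(start, goal, n_directions=16, max_steps=200):
    """Greedy search over a handful of push directions -- a tiny space
    compared to the full configuration space of the arm."""
    state, plan = start, []
    for _ in range(max_steps):
        if math.hypot(goal[0] - state[0], goal[1] - state[1]) < 0.05:
            break
        # Only these candidate pushes are considered: the "interesting
        # parts of the space" rather than all 16 dimensions.
        candidates = [2 * math.pi * k / n_directions for k in range(n_directions)]
        best = min(candidates, key=lambda d: math.hypot(
            goal[0] - push(state, d)[0], goal[1] - push(state, d)[1]))
        state = push(state, best)
        plan.append(best)
    return plan, state

# Push an object from the middle of the table toward the edge.
plan, final = plan_pushes((0.0, 0.0, 0.0), (0.4, 0.2))
print(len(plan), "pushes; final position:", round(final[0], 2), round(final[1], 2))
```

The point of the sketch is the shape of the search, not the physics: once the object’s response to pushes is summarized compactly, the planner can pick among a few useful motions instead of flailing around in a huge space.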
The other project, from MIT senior Annie Holladay, is the one you can check out in the video above. Instead of working out how to set an object down on a table, the robot works out how not to: it calculates what would make the object fall, then acts to prevent it. When setting down a light object that tips easily, the robot brings in its other arm to steady it.
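In spirit, that “plan for failure” logic boils down to a check-then-act loop. The sketch below is my own toy illustration, not Holladay’s code: it uses a crude tipping test (center of mass outside the base) to decide whether the second arm is needed before letting go:

```python
# A toy sketch of failure-aware placing: predict whether the object
# would tip once released, and only then bring in a steadying action.

def will_tip(com_offset, base_half_width):
    """Crude stability test: an object tips if its center of mass
    falls outside the footprint of its base."""
    return abs(com_offset) > base_half_width

def place(com_offset, base_half_width):
    """Decide how to release the object, given the predicted failure."""
    if will_tip(com_offset, base_half_width):
        return "steady with second arm, then release"
    return "release"

print(place(0.02, 0.05))  # center of mass over the base: just let go
print(place(0.08, 0.05))  # predicted to tip: second arm comes in
```

The real system reasons about far richer failure modes than this one-line test, but the structure is the same: simulate what goes wrong first, then plan the motion that stops it.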
Great. We’ll take our robo-place-setters now.