The system builds on classical recognition systems as well as a technique called “simultaneous localization and mapping” (SLAM), which lets devices like autonomous vehicles or robots maintain three-dimensional spatial awareness. The team’s new “SLAM-aware” system maps out its environment while it collects information about objects from multiple viewpoints. With each new angle, the program refines its predictions of what the objects are by breaking them down into their more basic components. It then compares this compiled description against a database of existing object descriptions. For example, if the SLAM-aware system sees a chair, it may break it down into a seat, four legs, and a back.
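The idea of merging part detections across viewpoints and matching them against a part-based database can be sketched in a few lines. This is a toy illustration, not the team’s actual method: the database entries, part labels, and scoring rule here are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical part-based object database: object name -> expected part counts.
OBJECT_DATABASE = {
    "chair": Counter({"seat": 1, "leg": 4, "back": 1}),
    "table": Counter({"top": 1, "leg": 4}),
    "stool": Counter({"seat": 1, "leg": 3}),
}

def accumulate_parts(views):
    """Merge part detections from multiple viewpoints.

    Each view is a list of part labels seen from one camera angle.
    We keep the maximum count observed per part, since any single
    view may occlude some parts (e.g. only two chair legs visible).
    """
    merged = Counter()
    for view in views:
        for part, n in Counter(view).items():
            merged[part] = max(merged[part], n)
    return merged

def match_object(merged, database=OBJECT_DATABASE):
    """Score each database entry by part overlap and return the best match."""
    def score(expected):
        overlap = sum(min(merged[p], n) for p, n in expected.items())
        total = sum(expected.values()) + sum((merged - expected).values())
        return overlap / total if total else 0.0
    return max(database, key=lambda name: score(database[name]))

views = [
    ["seat", "leg", "leg"],          # front view: two legs occluded
    ["seat", "leg", "leg", "back"],  # side view: back now visible
    ["leg", "leg", "leg", "leg"],    # low view: all four legs visible
]
print(match_object(accumulate_parts(views)))  # prints "chair"
```

As more viewpoints arrive, the merged part description becomes more complete, which is why the multi-angle SLAM trajectory helps recognition: parts hidden in one view are recovered from another.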