Google’s new robot butler was trained on social media and Wikipedia articles

Everyday Robots, a company born out of Alphabet’s X division, is using Google’s advanced language modeling to teach its robots how to interpret complex commands.
An Everyday Robots helper robot picks a snack bar up off a table. “Here, let me get that for you!” Credit: Everyday Robots


Robot butlers have long been a staple in our pop culture depictions of the future, but it’s much easier to dream about their existence than to make them a reality. As Wired and others reported earlier this week, however, researchers are at least one step closer to fleets of assistant bots—just don’t expect them to roll out at home or in the office for quite a while.

Google recently showcased one of its newest projects courtesy of its partnership with Everyday Robots, a company originating within Alphabet Inc.’s X division, which is tasked with researching “moonshot” projects such as computational agriculture and atmospheric water harvesting. Based on technology similar to what is fueling the recent wave of buzz-worthy chatbots like OpenAI’s GPT-3 text generator, Everyday Robots’ assistant uses Google’s advanced Pathways Language Model (PaLM) to parse typed requests, drawing on vast troves of speech data culled from the internet and human interactions. From there, a complementary system called SayCan helps the robot decide on the response action that makes the most sense.

[ Related: “Researchers used AI to explain complex science. Results were mixed.” ]

For example, one Google research scientist typed “I’m hungry” into a laptop connected to Everyday Robots’ one-armed bot that vaguely resembles a large parking meter. The robot then considered the statement, rolled itself towards a nearby counter, and returned with a bag of chips—notably, this chosen solution wasn’t a preprogrammed one, but a decision based on copious conversational databases constructed from books, social media, Wikipedia articles, and other language-heavy online sources.
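To give a loose sense of how the two pieces fit together, here is a minimal Python sketch of the SayCan idea as described publicly: a language model rates how useful each candidate skill would be for the typed request, a separate estimate rates whether the robot can actually pull that skill off right now, and the product of the two picks the action. Everything here (the function names, the skill list, the hard-coded scores) is a hypothetical stand-in for illustration, not Google’s or Everyday Robots’ actual code.

```python
import math

# A handful of physical "skills" the robot has been trained to perform.
SKILLS = [
    "pick up the bag of chips",
    "wipe the spill with a sponge",
    "open the drawer to retrieve an item",
    "do nothing",
]

def llm_usefulness(instruction: str, skill: str) -> float:
    """Stand-in for PaLM: a log-probability that `skill` is a useful next
    step toward satisfying `instruction`. Hard-coded here for illustration."""
    if "hungry" in instruction.lower() and "chips" in skill:
        return -1.0   # the language model strongly favors fetching a snack
    return -5.0       # everything else looks far less relevant

def affordance(skill: str) -> float:
    """Stand-in for the robot's learned value function: the probability (0-1)
    that it can successfully execute `skill` from where it is right now."""
    return 0.9  # stub: pretend every skill is equally feasible at the moment

def choose_skill(instruction: str) -> str:
    """SayCan-style selection: what the language model says is useful,
    weighted by what the robot can actually do."""
    scored = [
        (math.exp(llm_usefulness(instruction, s)) * affordance(s), s)
        for s in SKILLS
    ]
    best_score, best_skill = max(scored)
    return best_skill

print(choose_skill("I'm hungry"))  # -> "pick up the bag of chips"
```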

While the system still requires human training to learn myriad physical “solutions” to typed commands—other examples include cleaning up spilled liquid with a sponge and opening drawers to retrieve items—the ability of AI to determine which response fits a nuanced human request is a huge step toward more streamlined, accurate, and helpful robotics.

These systems are not without their faults, of course. Language modeling based on internet sourcing (unsurprisingly) often produces racist, inaccurate, or misleading results. While it’s unlikely that Everyday Robots’ new prototypes will offend users with their solutions for grabbing snacks or cleaning dishes, these inherent issues require careful attention and modification to ensure the best robo-future possible… whenever that comes around. “It’s going to take a while before we can really have a firm grasp on the direct commercial impact,” Vincent Vanhoucke, senior director of Google’s robotics research, recently admitted. So, for now, you’re gonna have to grab those chips from the cabinet yourselves.