According to developmental psychologists, as infants, we learn to govern our bodies through a process of random experimentation and feedback. We contort our faces into weird shapes, watch our parents react, and then switch up our movements accordingly.
Now, computer scientists at the University of California, San Diego are applying this same strategy to robotics research. Using machine learning, they've enabled their robot, an Einstein lookalike, to teach itself to make realistic facial expressions.
The team has been upgrading their Einstein-bot for several years, and until now they had to manually program the robot's 31 servo-driven facial "muscles" to form the right expressions. Now the robot has learned to guide itself. To quote the team's release:
Once the robot learned the relationship between facial expressions and the muscle movements required to make them, the robot could master facial expressions it had never encountered before.
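The release doesn't spell out the algorithm, but the idea it describes, random exploration (sometimes called "body babbling") used to learn a mapping from servo commands to the resulting expression, and then inverting that mapping to produce novel expressions, can be sketched as a simple regression. Everything below is illustrative: the linear "face," the sample counts, and the feature dimensions are assumptions, not the team's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_servos, n_features = 31, 8          # 31 servo "muscles", 8 tracked facial features

# Stand-in for the physical face: an unknown linear response the robot
# can only probe by trying commands and observing what its face does.
true_response = rng.normal(size=(n_servos, n_features))

# 1) Babble: try random servo commands, record the expressions produced.
commands = rng.uniform(-1.0, 1.0, size=(500, n_servos))
expressions = commands @ true_response

# 2) Learn the command -> expression mapping by least squares.
learned, *_ = np.linalg.lstsq(commands, expressions, rcond=None)

# 3) Generalize: given a novel target expression the robot has never
# made, solve the learned model for servo commands that produce it.
target = rng.normal(size=n_features)
servo_cmd, *_ = np.linalg.lstsq(learned.T, target, rcond=None)

achieved = servo_cmd @ true_response  # what the "face" actually does
print(np.allclose(achieved, target, atol=1e-6))
```

In this toy setup the face is exactly linear and noise-free, so the learned model matches the true response and the novel target is hit precisely; a real robot's face is nonlinear and noisy, which is why the actual system needs many more trials and a more flexible model.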
So far, Einstein has learned to convey emotions like sadness, anger, joy, and surprise. It's only a matter of time before it can stick out its tongue.