Sensors Capture The Faces You Make Inside Your VR Mask

The whole point of virtual reality is to offer up the feeling of being transported into another world. But so far, avatar facial expressions have failed to live up to VR's promise. Interacting with another human in a VR realm more closely resembles a conversation with Zoltar, the infamous robotic fortune teller.

Thankfully we live in an era when researchers, such as those at the University of Southern California, are able to commit time and resources to eliminating this problem. Ars Technica reports that the team of computer scientists used the Oculus Rift headset to create a tracking system that detects a wearer's facial movements, then replicates them in VR in near real time.

The system consists of two parts. Eight strain gauges are placed inside the headset's foam liner to detect facial movement in the area covered by the Oculus itself. The second component is a 3D mapping camera mounted to the headset and pointed at the lower half of the user's face.
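To give a rough sense of how two inputs like these might be combined, here's a minimal sketch in Python. The gauge readings, the blendshape weights, the `fuse_expression` function, and the calibration matrix are all illustrative assumptions for this article, not the USC team's actual pipeline.

```python
import numpy as np

# Illustrative only: a toy fusion of two expression sources, standing in for
# the real system. Assume 8 strain-gauge readings from the upper face (under
# the foam liner) and blendshape weights estimated from the 3D camera that
# watches the lower face.

NUM_GAUGES = 8
NUM_BLENDSHAPES = 20  # hypothetical size of the avatar's expression rig

def gauges_to_blendshapes(gauge_readings, calibration_matrix):
    """Map raw strain-gauge values to upper-face blendshape weights
    using a per-user linear calibration (a simplifying assumption)."""
    return calibration_matrix @ gauge_readings

def fuse_expression(upper_weights, lower_weights, upper_mask):
    """Combine upper-face (gauge-driven) and lower-face (camera-driven)
    blendshape weights into one expression vector for the avatar."""
    fused = np.where(upper_mask, upper_weights, lower_weights)
    return np.clip(fused, 0.0, 1.0)

# One example frame, with fake sensor data standing in for real hardware.
calibration = np.random.rand(NUM_BLENDSHAPES, NUM_GAUGES) * 0.1
gauges = np.random.rand(NUM_GAUGES)             # strain near brows and eyes
camera = np.random.rand(NUM_BLENDSHAPES) * 0.5  # lower-face estimate
upper_mask = np.arange(NUM_BLENDSHAPES) < 8     # shapes the gauges control

expression = fuse_expression(gauges_to_blendshapes(gauges, calibration),
                             camera, upper_mask)
print(expression.round(2))
```

In a real pipeline, that fused expression vector would be sent to the avatar renderer every frame, which is what makes the "almost real-time" mirroring possible.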

A brief calibration, during which the user wears only the headset's foam liner with the strain gauges, is required for the most accurate experience. For the calibration process, the team takes measurements with a separate system (it's unclear what that setup consists of) while the user makes goofy faces. Once the user's face is fully mapped, they can put the entire headset back on and start interacting like a normal human being instead of a robot.
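As a hedged illustration of what a calibration step like that might compute, the sketch below fits a simple least-squares mapping from recorded gauge readings to reference expression parameters. The data, dimensions, and linear model are assumptions made for illustration; the researchers' actual method is not spelled out in the report.

```python
import numpy as np

# Illustrative calibration sketch. Assume that during calibration we log,
# frame by frame, the 8 strain-gauge values alongside "ground truth"
# expression parameters captured by the separate reference system.

rng = np.random.default_rng(0)
num_frames, num_gauges, num_params = 200, 8, 20

gauge_log = rng.random((num_frames, num_gauges))      # recorded gauge values
reference_log = rng.random((num_frames, num_params))  # reference expressions

# Fit a per-user linear map M so that gauge_log @ M approximates reference_log.
M, *_ = np.linalg.lstsq(gauge_log, reference_log, rcond=None)

# After calibration, a live gauge reading becomes expression parameters
# with a single matrix multiply.
live_reading = rng.random(num_gauges)
predicted_expression = live_reading @ M
print(predicted_expression.shape)  # (20,)
```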

Here’s a video of how it works:

Oculus Rift is currently scheduled for release early next year, very likely without this cumbersome system in tow. Which is disappointing, sure, but I think we can all agree the tech needs to be shrunk down and refined a bit before it ends up on our faces.