Researchers from the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University have developed a new method for “real-time facial reenactment.” This means that you can have one person making faces or mouthing words, and those expressions and movements are simultaneously displayed on live video of the face of someone else.
Previously, we’ve seen video technology that could (sort of) graft a celebrity’s face onto your own in real time, but this system, which moves your own facial features around under someone else’s guidance, is arguably creepier.
“Facial reenactment,” as the paper notes, is a bit more difficult than just mapping expressions onto a computer-generated avatar, which is known as re-targeting. “Reenactment is a far more challenging task than expression re-targeting, as even the slightest errors in transferred expressions and appearance and slight inconsistencies with the surrounding video will be noticed by a human user,” the study authors write.
They solved most of the problems by tracking the geometry, reflectance, and texture of the faces of both the source actor (who’s making the expressions) and the target actor. To deal with pesky details like teeth, they simulate what the interior of a mouth would look like, complete with teeth proxies. The results are composited to create a pretty realistic video. The technology still has minor shortcomings: waving a hand in front of the face can cause freaky distortions, and glasses also slightly disrupt the image.
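The core trick of expression transfer can be sketched with a toy blendshape model: copy the source actor’s expression weights onto the target actor’s own face shape. Everything here (mesh sizes, weight values, function names) is illustrative, not the researchers’ actual parametrization:

```python
import numpy as np

# Toy blendshape face model: a mesh is a neutral shape plus a weighted
# sum of expression offsets (all names and sizes are illustrative).
N_VERTS = 5          # tiny mesh, just for demonstration
N_EXPRESSIONS = 3    # e.g. smile, jaw-open, brow-raise

rng = np.random.default_rng(0)

def make_identity():
    """One person's neutral face and their expression basis."""
    neutral = rng.normal(size=(N_VERTS, 3))
    basis = rng.normal(size=(N_EXPRESSIONS, N_VERTS, 3))
    return neutral, basis

def pose(neutral, basis, weights):
    """Deform a neutral face by blendshape weights."""
    return neutral + np.tensordot(weights, basis, axes=1)

# "Track" the source actor: in the real system these weights would be
# fitted to live video; here we just pick some by hand.
src_neutral, src_basis = make_identity()
src_weights = np.array([0.8, 0.1, 0.0])   # e.g. mostly "smile"

# Reenactment: apply the *source's* expression weights to the
# *target's* identity, so the target keeps their own appearance
# but wears the source's expression.
tgt_neutral, tgt_basis = make_identity()
tgt_mesh = pose(tgt_neutral, tgt_basis, src_weights)

print(tgt_mesh.shape)  # (5, 3): target geometry, source expression
```

The hard part the paper actually solves is everything this sketch skips: fitting those weights from monocular video in real time, and rendering the deformed face back into the target footage convincingly.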
You can only begin to think of the possibilities: You never have to smile again, as long as there’s someone else smiling for you; you can put a beard on your face without having to grow it; and of course they could do something useful like make your mouth movements match dubbed dialogue. Watch the video below for a full explanation of how the technology works.
[via Joshua Topolsky]