The Dot Matrix
by Warner Bros. Ent.
Technology: Motion Capture
Nominee: Julian Morris | Vicon Motion Systems
Credits: Blade: Trinity, Master and Commander: The Far Side of the World, Lara Croft Tomb Raider: The Cradle of Life
To make the movie version of Chris Van Allsburg’s
best-selling book The Polar Express, director Robert
Zemeckis knew that standard computer animation wouldn’t cut it. Too cartoonish. Neither would live-action filming. Too restricted by reality. To tell the story of a boy’s Christmas Eve train ride to the North Pole, Zemeckis envisioned wild action set against a backdrop of “moving paintings”: the book’s lush illustrations brought to life. And so he decided to make a CG film based entirely on the performances of human actors, a movie starring realistic humans who were CG from head to toe. He wanted digital flesh and blood.
Zemeckis and digital-effects supervisor Alberto Menache turned to Vicon Motion Systems, whose pioneering achievements with “motion capture” have won a Sci-Tech Award. Zemeckis wanted a setup with the unprecedented ability to capture entire actors (faces and bodies simultaneously) as they moved around a stage. The technology wouldn’t
be used just as a special-effects enhancement in a particular scene but as the basis for an entire movie. “We were risking a lot on this idea, and we weren’t sure if it was going to work,” Menache says. “Vicon said, ‘Let’s give it a shot.’”
The company’s technology, which dates back to the early 1980s, was initially used to analyze the gait of cerebral palsy patients. Vicon released the film industry’s first motion-
capture system in the mid-1990s, and since then it has been used to help generate CG-human stunt shots in many movies, including Titanic and Spider-Man 2, and to generate monsters like those in the recent films The Hulk and The Mummy. For The Polar Express, Tom Hanks and other actors donned unitards fitted with 80 reflective markers; an additional 152 markers were glued to their faces. Performances for the entire film were captured on
a 10-foot-by-10-foot stage flanked by 72 Vicon cameras, more than twice as many as in any previous system. Rings around each of the lenses beamed infrared light, and as the actors performed, the reflections from their markers were recorded at 120 frames per second and fed into a computer network. Vicon then assembled 3-D data sets of moving points, providing a detailed framework for subsequent computer animation.
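The core geometric step behind turning many 2-D camera views of a reflective marker into one 3-D point is triangulation. The sketch below is a minimal, generic illustration of that idea (not Vicon's actual pipeline) using the standard Direct Linear Transform with two toy pinhole cameras; all names and values here are illustrative assumptions.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D marker position from two camera views via the
    Direct Linear Transform: each 2-D observation contributes two
    linear constraints on the homogeneous 3-D point, and the SVD
    gives the least-squares solution."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.5, 0.2, 4.0])            # "true" 3-D position
h = np.append(marker, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]               # projection in camera 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]               # projection in camera 2
print(np.round(triangulate(P1, P2, x1, x2), 3))
```

With 72 cameras instead of two, the same least-squares system simply gains more rows, which is what makes a millimeter-scale fix on each marker possible even when some views are occluded.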
To enable natural interplay, Zemeckis wanted as many as four actors to perform at once. This meant that the reflections of up to 926 markers would have to be captured, each with an accuracy of about one millimeter. Vicon’s iQ software was trained to discern discrete bodies from an overlapping flood of points, like spotting constellations in a sky full of stars.
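One simple way to see how labeled bodies can be carried through a flood of anonymous points is frame-to-frame nearest-neighbour matching: each new point inherits the label of the closest marker in the previous frame. This is a toy stand-in for what a commercial solver like Vicon's iQ does (real trackers also use skeleton constraints and prediction); the marker names and coordinates below are invented for illustration.

```python
import numpy as np

def label_markers(prev_points, prev_labels, curr_points):
    """Carry marker labels forward one frame: each unlabeled point
    inherits the label of the nearest marker in the previous frame."""
    labels = []
    for p in curr_points:
        dists = np.linalg.norm(prev_points - p, axis=1)
        labels.append(prev_labels[int(np.argmin(dists))])
    return labels

# Two actors' markers in frame t, then slightly moved in frame t+1.
prev = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],   # actor A
                 [2.0, 0.0, 1.0], [2.1, 0.0, 1.0]])  # actor B
labels = ["A.hip", "A.knee", "B.hip", "B.knee"]
curr = prev + 0.02                                   # small motion
print(label_markers(prev, labels, curr))
# → ['A.hip', 'A.knee', 'B.hip', 'B.knee']
```

At 120 frames per second, markers move very little between frames, which is why this kind of matching can keep four overlapping performers' point clouds from tangling.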
The finished motion-capture data sets were then handed off to artists, who ran the information through a series of simulators and
animated it to create muscle movements, skin, clothing and hair until, voilà, they had a computer-generated Hanks as, among other characters, a little boy, a train conductor and Santa Claus. The finished movie had the storybook look and breathtaking action that Zemeckis had sought, and at times, the characters were unnervingly real. At other moments, though, their faces looked gray and lifeless. “The results were very good, but we had a lot of volume,” Menache explains. “If we had one character to work on, it would be
absolutely perfect, but we had 28. We will definitely be
trying this again.”