Inside the Scene-Stealing 3-D Technology Behind James Cameron’s Avatar

The Na'vi
Courtesy WETA/Twentieth Century Fox

James Cameron is stubborn. He decided nearly a decade ago to film his humans-versus-aliens sci-fi adventure Avatar in 3-D, but he refused to start production until technology could convince the viewer that he or she could step through the screen and pick up a bow alongside the Na’vi, the film’s 10-foot-tall, blue, cat-faced alien protagonists.

To give scenes realistic depth, Cameron, who brought a computer-generated liquid-metal T-1000 to life in Terminator 2, and camera whizzes Vince Pace and Patrick Campbell built the Pace/Cameron Fusion Camera System to capture images the same way human eyes do. Cameron then used a virtual camera to walk—or fly—around in the virtual world to record any shot of the Na’vi that he wanted and combined that with the real-life footage. Here, a guide to making the most convincing 3-D film yet. (See also our review of the film.)

How James Cameron Made a Truly Lifelike 3-D Movie

1. Build the Stage
An array of 72 to 96 cameras, depending on the size of the set, hangs around the perimeter of a sound stage, arranged in a grid. Later, a computer replaces the studio walls, floor and ceiling with digitally rendered three-dimensional environments and structures. The grid is also marked on the floor to provide a reference within this virtual world.

2. Capture Motion
Actors, weapons and props marked with reflective dots move around the stage while the camera grid tracks only the dots. A computer records the dots’ movement, triangulates their location, and assembles these data points into wire-frame skeletons that in Avatar will be “dressed” with computer-generated Na’vi bodies.
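The triangulation step can be sketched in a few lines: each camera that sees a dot defines a ray, and the dot's 3-D position is the point closest to all of those rays. This is a generic least-squares ray intersection, not Weta's actual pipeline, and the camera positions here are invented for illustration:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of sight rays from several cameras.
    Each ray is origin + t * direction; we solve for the single point
    minimizing total squared distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane perpendicular to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras on the stage perimeter both sighting a marker at (1, 2, 3)
marker = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])]
dirs = [marker - o for o in origins]
print(triangulate(origins, dirs))  # recovers ≈ [1. 2. 3.]
```

With dozens of cameras contributing rays, the same least-squares solve stays robust even when some cameras lose sight of a dot.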

3. Shoot in 3-D

Next Cameron films the flesh-and-blood characters in 3-D so that they will look at home alongside the Na’vi in the virtual 3-D world. Older 3-D tech used two cameras mounted side by side to create a left-eye/right-eye effect. Because of their bulk, those cameras sat far apart and could shoot only straight ahead. The Fusion Camera System also uses two cameras, but because it employs small high-definition digital image sensors, the lenses can sit closer together than your pupils. The lenses' lines of sight are adjustable, so during a shot they can angle inward to converge on nearby objects or swing back toward parallel for those in the distance, just as your eyes do. The system combines the two images into a single picture with realistic depth.
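The convergence geometry is simple trigonometry: the closer the subject, the more the two lens axes must toe in to meet on it. A rough sketch, using illustrative numbers rather than the Fusion rig's real specifications:

```python
import math

def convergence_angle_deg(interaxial_m, subject_distance_m):
    """Total toe-in angle between the two lens axes needed so both
    converge on a subject straight ahead at the given distance."""
    return math.degrees(2 * math.atan((interaxial_m / 2) / subject_distance_m))

# Hypothetical rig: lenses 60 mm apart, roughly the spacing of human pupils
print(round(convergence_angle_deg(0.060, 2.0), 2))   # subject 2 m away: ~1.72 degrees
print(round(convergence_angle_deg(0.060, 20.0), 3))  # subject 20 m away: ~0.172 degrees
```

A tenfold increase in subject distance shrinks the required convergence roughly tenfold, which is why distant shots leave the lenses nearly parallel.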

4. Climb into the Movie
After a computer inserts the motion-capture performances into the digital environment, Cameron carries a virtual camera—an LCD display with buttons and grips similar to a videogame controller—onto the set. As he moves, radio and optical detectors track the camera’s location and relay it to computers offstage, which render the virtual world as viewed from that vantage and send it to the tablet. This allows Cameron to walk through the virtual action to record any shot he wants—he can even set the vantage point to take shots that would require a crane or helicopter. Later, the 3-D footage of human characters can be added to these scenes.
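The offstage rendering step boils down to recomputing the virtual camera's view each frame from its tracked position and orientation. A minimal sketch using the standard "look-at" view-matrix construction (the positions here are invented for illustration, and the real system tracks full orientation rather than a look-at target):

```python
import numpy as np

def view_matrix(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Look-at view matrix: transforms world coordinates into the frame
    of a camera at `eye` looking toward `target` (camera looks down -z)."""
    f = target - eye
    f = f / np.linalg.norm(f)          # forward
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)          # right
    u = np.cross(s, f)                 # true up
    M = np.eye(4)
    M[0, :3], M[1, :3], M[2, :3] = s, u, -f
    M[:3, 3] = -M[:3, :3] @ eye        # move the world opposite the camera
    return M

# As the tracked handheld camera moves, recompute the view each frame
eye = np.array([0.0, 2.0, 5.0])       # tracked position on the stage
target = np.array([0.0, 2.0, 0.0])    # whatever the camera is pointed at
M = view_matrix(eye, target)
point = np.array([0.0, 2.0, 0.0, 1.0])
print(M @ point)  # target lands 5 units straight ahead of the camera
```

Feeding that matrix to a renderer each frame is what lets the display show the virtual world from wherever Cameron happens to be standing, including vantage points no physical crane could reach.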

5. Watch It
At RealD 3-D shows, a projector alternately displays the left-eye and right-eye images, each circularly polarized in the opposite direction, 144 times per second. Polarized glasses ensure that each eye sees only the image meant for it.
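The 144 figure follows from "triple-flash" projection, in which each of film's standard 24 frames per second is flashed three times per eye to suppress flicker; a quick sanity check:

```python
# Triple-flash arithmetic behind the 144-per-second figure
film_fps = 24        # standard cinema frame rate
flashes_per_eye = 3  # each frame repeated three times to avoid visible flicker
eyes = 2             # left and right images alternate

flashes_per_second = film_fps * flashes_per_eye * eyes
print(flashes_per_second)       # 144 projector flashes per second
print(flashes_per_second // 2)  # 72 flashes reaching each eye per second
```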


PopSci Interview: James Cameron

Behind the 3-D magic is a director who won’t let even the laws of physics get in the way of an epic story

Science Advisers are Annoying:
I have just enough of a science background to get me in trouble. When I’m writing, I’m thinking: What can cause a mountain to float? Well, if it was made out of an almost-pure room-temperature superconductor material, and it was in a powerful magnetic field, it would self-levitate. This has actually been demonstrated on a very small scale with very strong magnetic fields. Then my scientists said, “You’ll need magnetic fields that are so powerful that they would rip the hemoglobin out of your blood.” So I said, “Well, we’re not showing that, so we may just have to diverge a little bit from what’s possible in the physical universe to tell our story.”

But Sometimes Scientists are Useful:
I wanted to put Pandora in the Alpha Centauri star system, but we haven’t found any large planets there. One of my astrophysicists said, “Well, if a planet’s ecliptic was inclined at 60 degrees to our line of sight, then the Doppler method would not work because the planet would perturb [the star] Alpha Centauri A or B on a different axis, and so we wouldn’t be able to see it. You wouldn’t be able to see it using the transit method, either.” So there might be planets there. But you can only have stable orbits out to about 230 million miles from Alpha Centauri A, so your planets have to be close in, blah blah blah. So we went through the steps of creating two possible solar systems there, because it’s a binary star, and gussied it up with technical research.

Audiences Will Like it Anyway:
My goal was to tell an epic story with visual power and to impress the crap out of the audience, like my goal is every time I make a movie. When it comes to the science behind the camera, what it took to produce the images—I think the viewer likes the idea that they’re being shown something new, but I don’t think they really care how you did it. I mean, I’m happy to talk about it, but I don’t think it sells the damn ticket.