Leia in Your Living Room: Projecting a Star-Wars-Style Hologram With a Microsoft Kinect

How MIT's Object-Based Media Lab re-created Princess Leia's holographic speech with a Microsoft Kinect


Michael Bove, the director of MIT’s Object-Based Media Group, got his grad students a Kinect for Christmas. The range-finding, motion-sensing camera add-on to Microsoft’s Xbox 360 game system turns the human body into a controller, but Bove’s students did something far more amazing with it. “A week later,” he says, “they were presenting holograms with it.” The students had hacked the Kinect, and found that it was a perfect tool for capturing images to project in three dimensions–in other words, for holograms.

(Oh, and a quick note about “holographs”: the word “holography” refers to the technique, and “holograms” are its results. “Holograph” is often used as a synonym for “hologram,” but since Bove said “hologram” throughout our conversation, that’s the word I’ll use here.)

Home holographic video chat may sound like the stuff of Star Wars, but it’s closer than you might think. Holography, like traditional 3-D filmmaking, aims at a more immersive video experience, but the technology is completely different. 3-D cameras are conventional, fixed cameras that simply capture two slightly offset streams, one directed to each eye–the difference between the two images creates the illusion of depth. If you change your position in front of a 3-D movie, the image you see remains the same–it has depth, but only one perspective. (Curious about glasses-free 3-D? Check out our interactive primer.) A hologram, on the other hand, is made by capturing the scatter of light bouncing off a scene as data, and then reconstructing that data as a 3-D environment. That allows for much greater immersion: if you change your viewing angle, you actually see a different image, just as you can see the front, sides, and back of a real-life object by walking around it. “If holography is done right,” says Bove, “it’s really quite stunning.”
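To make that fixed-perspective point concrete, here’s a minimal sketch of the geometry behind stereo 3-D (the numbers are illustrative, not from the article): a point’s on-screen disparity is baseline × focal length ÷ depth, so a stereo pair bakes in exactly one viewpoint.

```python
# Stereo 3-D encodes depth as a fixed horizontal disparity between two views.
# Illustrative sketch with assumed camera numbers (not from the article):
# move your head and the disparity -- hence the image -- never changes.

def stereo_disparity(depth_m, baseline_m=0.065, focal_px=1400):
    """Horizontal pixel offset between left- and right-eye views of a point."""
    return baseline_m * focal_px / depth_m

for depth in (0.5, 2.0, 10.0):
    print(f"point at {depth:4.1f} m -> disparity {stereo_disparity(depth):6.1f} px")
```

Nearby points get large disparities and distant points nearly none, which is the entire depth cue a stereo movie offers; a hologram instead reconstructs the light itself, so the cue changes as you move.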

Capturing that scatter of light is no easy feat. A standard 3-D movie camera captures light bouncing off of an object at two different angles, one for each eye. But in the real world, light bounces off of objects at an infinite number of angles. Holographic video systems use devices that produce so-called diffraction fringes, basically fine patterns of light and dark that can bend the light passing through them in predictable ways. A dense enough array of fringe patterns, each bending light in a different direction, can simulate the effect of light bouncing off of a three-dimensional object.
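How sharply a fringe pattern bends light follows the standard first-order grating relation, sin θ = λ/Λ: the finer the fringe pitch Λ, the larger the deflection. A hedged sketch (wavelength and pitches are illustrative, not figures from the article) shows why the patterns must be so dense:

```python
import math

# Standard first-order grating equation: sin(theta) = wavelength / pitch.
# Finer fringes bend light through larger angles, which is why a holographic
# display needs extremely dense fringe patterns. Numbers are illustrative.

WAVELENGTH = 633e-9  # meters; red helium-neon laser light

def deflection_angle_deg(fringe_pitch_m):
    """First-order diffraction angle for a grating of the given pitch."""
    return math.degrees(math.asin(WAVELENGTH / fringe_pitch_m))

for pitch_um in (10.0, 2.0, 1.0):
    angle = deflection_angle_deg(pitch_um * 1e-6)
    print(f"{pitch_um:5.1f} um fringe pitch -> {angle:5.1f} degrees of deflection")
```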

The trick is making it live, fast, and cheap. That is one of the OBMG’s greatest challenges: the equipment is currently extremely expensive, and the amount of data is massive. “[We’re] trying to turn holographic video from a lab curiosity into a consumer product,” Bove says. They’re getting close. Using the Kinect, which costs just $150, and a laptop with off-the-shelf graphics cards, the OBMG crew was able to project holograms at seven frames per second. Previous efforts, both at MIT and at other institutions like Cornell, could manage only about one frame every two seconds–far slower than the 24 frames per second standard for movies or the 30 frames per second standard for television. A week later, the MIT students had gotten the rig up to 15 frames per second, and Bove says that’s far from the limit of the Kinect hardware. The next step is to bring down the cost of the holographic display.
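To put the “massive data” point in rough numbers, here’s a back-of-envelope sketch using the Kinect’s commonly published specs (640×480 color and depth at 30 Hz; these figures are assumptions of mine, not from the article):

```python
# Back-of-envelope data rate for a Kinect-style stream. Assumed specs:
# 640x480 color at 24 bits/pixel plus 640x480 depth at 16 bits/pixel, 30 Hz.
# These are typical published Kinect figures, not numbers from the article.

WIDTH, HEIGHT, FPS = 640, 480, 30
color_bits = WIDTH * HEIGHT * 24
depth_bits = WIDTH * HEIGHT * 16
raw_mbit_per_s = (color_bits + depth_bits) * FPS / 1e6
print(f"raw stream: {raw_mbit_per_s:.0f} Mbit/s, before any fringe computation")
```

That works out to several hundred megabits per second of raw input, and the fringe patterns computed from it are far larger still.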

The current holographic display is a sophisticated acousto-optic modulator, a device that uses sound waves to diffract light and shift its frequency. But the OBMG hopes to replace the modulator, a one-of-a-kind, highly expensive piece of equipment pioneered by Bove’s predecessor Stephen Benton, with a consumer model that could be manufactured in the near future for a few hundred dollars. Within a matter of years, truly live holographic video chat could be a reality. Princess Leia’s holographic plea for help? Child’s play. After all, it was pre-recorded.
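For a sense of how sound steers light in such a device: an RF drive at frequency f sets up a moving acoustic grating of pitch v/f inside the crystal, and the same first-order grating relation as above gives the deflection. A rough sketch with assumed, typical material values (not from the article):

```python
import math

# An acousto-optic modulator turns a sound wave into a moving diffraction
# grating: an RF drive at frequency f in a crystal with sound speed v makes
# fringes of pitch v / f. Material values below are assumed and illustrative.

WAVELENGTH = 633e-9   # meters; red laser light
SOUND_SPEED = 4200.0  # m/s; typical longitudinal speed in an AOM crystal

def aom_deflection_deg(drive_hz):
    """Approximate first-order deflection angle for a given RF drive."""
    pitch = SOUND_SPEED / drive_hz          # acoustic wavelength = grating pitch
    return math.degrees(math.asin(WAVELENGTH / pitch))

for mhz in (50, 100, 200):
    print(f"{mhz:3d} MHz drive -> {aom_deflection_deg(mhz * 1e6):.2f} degrees")
```

Sweeping the drive frequency sweeps the deflection, which is how a single modulator can paint out a whole fringe pattern over time.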

The challenge with real-time holographic video is taking video data—in the case of the Kinect, the light intensity of image pixels and, for each of them, a measure of distance from the camera—and, on the fly, converting that data into a set of fringe patterns. Bove and his grad students—James Barabas, David Cranor, Sundeep Jolly and Dan Smalley—have made that challenge even tougher by limiting themselves to off-the-shelf hardware.
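The article doesn’t spell out the OBMG’s algorithm, but a minimal sketch of the textbook approach gives the flavor: each scene point contributes a depth-dependent interference “chirp” across the hologram, and summing the chirps for every point yields a fringe pattern that reconstructs the points at their depths. (This is the classic point-source method, not necessarily the group’s exact technique.)

```python
import numpy as np

# Textbook fringe computation, one hologram line at a time: each scene point
# at (x0, z0) contributes a chirp cos(k * distance) across the line, and the
# sum over all points is the fringe pattern. Values are illustrative.

WAVELENGTH = 633e-9                  # meters
K = 2 * np.pi / WAVELENGTH           # wavenumber
x = np.linspace(-0.01, 0.01, 4096)   # a 2 cm hologram line, in meters

def fringe_line(points):
    """Sum the chirp contributions of (x0, z0, intensity) scene points."""
    line = np.zeros_like(x)
    for x0, z0, intensity in points:
        distance = np.sqrt((x - x0) ** 2 + z0 ** 2)
        line += intensity * np.cos(K * distance)
    return line

# Two points from a hypothetical Kinect frame: lateral position, depth, brightness.
pattern = fringe_line([(0.002, 0.30, 1.0), (-0.004, 0.45, 0.7)])
print(pattern.shape)
```

Doing this for hundreds of thousands of points, for every line, many times per second, is exactly the computational load the group has to tame.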

In the group’s lab setup, the Kinect feeds data to an ordinary laptop, which relays it over the Internet. At the receiving end, a PC with three ordinary, off-the-shelf graphics processing units — GPUs — computes the diffraction patterns.
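The article doesn’t describe the wire protocol, but the sending side of such a pipeline might look like the sketch below; get_kinect_frame is a hypothetical stand-in for a real capture call (for instance, through libfreenect’s bindings).

```python
import socket
import struct

# Hedged sketch of the sending side of a Kinect -> laptop -> network pipeline.
# Each raw depth and color frame is length-prefixed and written to a TCP
# socket; the receiving PC would read frames off the wire and hand them to
# its GPUs. The framing scheme here is assumed, not taken from the article.

def stream_frames(host, port, get_kinect_frame):
    with socket.create_connection((host, port)) as sock:
        while True:
            depth, color = get_kinect_frame()   # two bytes-like buffers
            for payload in (depth, color):
                sock.sendall(struct.pack("!I", len(payload)))
                sock.sendall(payload)
```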

GPUs differ from ordinary computer chips — CPUs — in that their circuitry is tailored to a cluster of computationally intensive, highly parallel tasks that arise frequently in rendering graphics. Much of the work that went into the new system involved re-describing the problem of computing diffraction patterns in a way that plays to GPUs’ strengths.
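As a stand-in illustration of that re-description, the per-point loop from the earlier sketch can be recast as one big array expression: uniform, branch-free arithmetic of exactly the shape GPUs are built for. NumPy broadcasting here plays the role of the GPU.

```python
import numpy as np

# The same fringe computation as before, restructured as data-parallel array
# arithmetic: one (points x samples) grid evaluated in a single expression,
# with no Python-level loop. This mirrors how the problem maps onto a GPU.

WAVELENGTH = 633e-9
K = 2 * np.pi / WAVELENGTH
x = np.linspace(-0.01, 0.01, 4096)

def fringe_line_vectorized(x0, z0, intensity):
    """x0, z0, intensity are 1-D arrays with one entry per scene point."""
    distance = np.sqrt((x[None, :] - x0[:, None]) ** 2 + z0[:, None] ** 2)
    return (intensity[:, None] * np.cos(K * distance)).sum(axis=0)

pattern = fringe_line_vectorized(np.array([0.002, -0.004]),
                                 np.array([0.30, 0.45]),
                                 np.array([1.0, 0.7]))
print(pattern.shape)
```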

Home holography is a pretty incredible technology, with the potential to fundamentally change the way we use displays, from media to chat. Bove’s approach, built on cheap, easily found equipment, could be the fastest path to a holographic future, and it’s especially thrilling that the Kinect, a $150 video game accessory most often used to teach the Soulja Boy dance, is a major component in making that future possible.