How It Works: Upscaling 2-D Video to 3-D

More than a year after the first consumer 3-D-ready HDTVs were demoed at CES, the next generation of sets goes on sale this week. But aside from the new TVs, glasses, and Blu-ray players, the question of content remains. There are already brand partnerships with networks like Discovery and ESPN, but that’s just the tip of the iceberg. As an alternative, the two companies with 3-D TVs but without major brand-name cable partners (Samsung and Toshiba) showed off sets that can convert 2-D video to 3-D in real time.

The converter chip in Toshiba’s Cell TV is part of the company’s own Cell platform, co-developed with Sony and IBM (yes, it’s the same brain inside the PS3). Other HDTV makers, though, can turn to a third party to upgrade their sets to upconvert 2-D content to 3-D.

One such third party is chipmaker Quartics, which provides the graphics-processing brains behind everything from netbooks to HDTVs and set-top boxes. At this year’s CES, the company demoed its own 2-D-to-3-D upconversion chip technology.

Quartics CTO Mohammed Usman gave us a look at the guts of converting 2-D video to 3-D. In essence, a series of algorithms on Quartics’ chip watches the video along with you and analyzes it on the fly. With virtually no delay, it distinguishes foreground from background and identifies the subject of the shot as the object that needs added depth. The process is very similar to the face-recognition algorithms digital cameras and camcorders use to autofocus on faces and, sometimes, to tell whether they’re smiling.

To upgrade 2-D to 3-D, the software treats what it sees at the bottom of the screen as closest to your eye and what’s at the top as farthest away. This is how it establishes what the background of the scene looks like.
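
To see how that positional cue could turn into an actual depth map, here’s a minimal sketch in Python with NumPy. The straight bottom-to-top gradient is our own simplifying assumption; Quartics hasn’t published the exact weighting its chip uses.

```python
import numpy as np

def depth_from_height(height: int, width: int) -> np.ndarray:
    """Assign depth purely by vertical position: rows at the bottom of the
    frame are treated as closest to the viewer (0.0), rows at the top as
    farthest away (1.0), mirroring the background cue described above."""
    rows = np.linspace(1.0, 0.0, height)            # top row = far, bottom row = near
    return np.tile(rows[:, np.newaxis], (1, width))

# Example: a 1080p frame gets a 1080 x 1920 depth map that fades from far to near.
background_depth = depth_from_height(1080, 1920)
```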

But what about the subject? The chip tracks the color and light intensity of groups of pixels; when it senses a sudden shift in either, it knows it has encountered a new object. It also knows that moving clusters of color or light are likely to be the subject of the shot. Once it has identified the objects, it finds a central point from which to draw lines of perspective–the same way we learned vanishing points and two-point perspective in elementary school art class.
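
Here’s a rough, hypothetical version of those two cues in Python with NumPy: a pixel counts as part of a likely subject if its brightness jumps sharply relative to its neighbors (an object edge) or changes noticeably from one frame to the next (motion). The thresholds are arbitrary placeholders, not values from the actual chip.

```python
import numpy as np

def find_subject(frame: np.ndarray, prev_frame: np.ndarray,
                 edge_thresh: float = 30.0, motion_thresh: float = 20.0) -> np.ndarray:
    """Return a boolean mask of pixels that look like a moving foreground object.

    frame, prev_frame: grayscale images (H x W).
    A pixel is flagged when its intensity shifts sharply relative to its
    right/lower neighbors (an object boundary) or changes a lot since the
    previous frame (motion), echoing the cues described in the article."""
    frame = frame.astype(float)
    prev_frame = prev_frame.astype(float)

    # Sudden horizontal or vertical shifts in intensity suggest a new object.
    edge = np.zeros(frame.shape, dtype=bool)
    edge[:, :-1] |= np.abs(np.diff(frame, axis=1)) > edge_thresh
    edge[:-1, :] |= np.abs(np.diff(frame, axis=0)) > edge_thresh

    # Pixels that changed a lot since the last frame are probably moving.
    motion = np.abs(frame - prev_frame) > motion_thresh

    return edge | motion
```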

With on-the-fly conversions, however, you won’t see many effects that jump off the screen. The algorithms aren’t yet fine-tuned enough to re-create truly immersive depth without becoming distracting or gimmicky, like old-school movie-theater 3-D.

Once the chip knows which objects on the screen to assign depth to, it can start converting the image. It creates two separate images, one for each eye, which it flips back and forth at high frequency to trick your eyes into thinking they’re seeing both angles at once.
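
One common way to build those two images from a single frame and a depth map is to nudge each pixel sideways by an amount tied to its depth, with near pixels shifting more and in opposite directions for each eye. The sketch below (Python with NumPy) shows the idea; the shift scale is an illustrative guess, and a real chip would also have to fill in the gaps the shifting leaves behind.

```python
import numpy as np

def render_stereo_pair(frame: np.ndarray, depth: np.ndarray,
                       max_shift: int = 8) -> tuple[np.ndarray, np.ndarray]:
    """Build left- and right-eye views from one frame plus a depth map.

    frame: H x W x 3 image; depth: H x W with 0.0 = near, 1.0 = far.
    Near pixels are displaced more than far ones, in opposite directions for
    the two eyes; the TV then alternates the views at high frequency."""
    h, w = depth.shape
    # Horizontal disparity in pixels: larger for near objects, ~0 for the background.
    disparity = ((1.0 - depth) * max_shift).astype(int)

    cols = np.arange(w)
    left = np.empty_like(frame)
    right = np.empty_like(frame)
    for y in range(h):
        # Sample each output pixel from a shifted column of the source row.
        left[y] = frame[y, np.clip(cols - disparity[y], 0, w - 1)]
        right[y] = frame[y, np.clip(cols + disparity[y], 0, w - 1)]
    return left, right
```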

When paired with a set of 3-D glasses that isolate the left and right images from each other at the same frequency, the effect is impressive, but not as realistic as video shot with a two-lens stereoscopic 3-D camera (or animation rendered natively in 3-D).

At CES, Quartics demoed nature videos and the trailer for Appaloosa, but Usman is confident the technique will work with just about any genre. Footage with a lot of movement, though, is trickier and may require more sophisticated algorithms–or you may just have to wait for native 3-D footage before a Super Bowl in 3-D is watchable.

Samsung’s in-booth demo, for one, showed an upscaled football game. And while the holographic effect did come across, a wide receiver dashing across the screen had a bit of a ghost trailing behind him–as predicted. Still, upconverting existing content to 3-D beats waiting around for the first wave of 3-D Blu-ray discs or cable broadcasts. And since the chips are no more costly than those in other 3-D sets, it’s the perfect stop-gap.