Flexible Focus

Digital cameras keep packing in the pixels, but they can’t hide the truth: Photos are flat. Now, engineers at Stanford University have developed a way to bring 3-D clarity and depth to the world of 2-D photography.

A traditional camera uses one main lens to focus a scene on a light sensor, forming a single 2-D image. Stanford’s camera divides the light sensor into 12,616 small clusters of pixels. Each cluster is topped with an 11-micron lens that sees the scene from a slightly different angle than its neighbors. A computer compares the overlapping images and, from those small differences in perspective, builds a 3-D map that assigns a depth to every pixel in the scene.
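For readers who want a feel for how tiny shifts between neighboring views become distances, here is a minimal sketch in Python. It assumes two grayscale views from adjacent microlenses and is not the Stanford team's actual algorithm; the function names, the focal length, and the pixel pitch are illustrative assumptions, with only the 11-micron lens spacing borrowed from the article as a rough stand-in for the baseline between views.

```python
import numpy as np

def patch_shift(view_a, view_b, y, x, patch=7, max_shift=10):
    """Find the horizontal shift (in pixels) that best aligns a small patch
    of view_a around (y, x) with view_b, using sum of absolute differences."""
    half = patch // 2
    ref = view_a[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        xs = x + s
        if xs - half < 0 or xs + half + 1 > view_b.shape[1]:
            continue  # candidate patch would fall off the edge of the view
        cand = view_b[y - half:y + half + 1, xs - half:xs + half + 1].astype(float)
        err = np.abs(ref - cand).sum()
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift

def depth_map(view_a, view_b, baseline_um=11.0, focal_um=500.0, pixel_um=1.0):
    """Turn each patch's shift (disparity) into a rough distance via
    triangulation: depth = focal_length * baseline / disparity.
    The 11-micron baseline echoes the lens spacing in the article; the
    focal length and pixel pitch are made-up placeholders."""
    h, w = view_a.shape
    depths = np.full((h, w), np.inf)
    margin = 3 + 10  # half a patch plus the maximum shift, to stay in bounds
    for y in range(margin, h - margin):
        for x in range(margin, w - margin):
            disparity_px = abs(patch_shift(view_a, view_b, y, x))
            if disparity_px > 0:
                depths[y, x] = focal_um * baseline_um / (disparity_px * pixel_um)
    return depths
```

The principle is the same one surveyors use: the farther an object is, the less it appears to move between two nearby viewpoints, so small shifts mean large depths.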

Unlike other multi-lens setups, this array doesn’t require periodic calibration, making it useful for a variety of applications, says Keith Fife, the lead developer on the project. Robots could use the maps to find their way around, or 3-D face recognition could replace your car keys. But the most immediate payoff will be for the Photoshop jockeys. With triangulated coordinates for every pixel in a scene, you can isolate objects at different distances, making it a cinch to bring any one of them into sharp focus or cut out the background entirely.
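To see why per-pixel depth makes that kind of editing trivial, here is a short, hypothetical sketch: once every pixel carries a distance, selecting an object reduces to a simple range test. The array names and depth units are assumptions for illustration, not details of the team's software.

```python
import numpy as np

def isolate_by_depth(image, depths, near, far, background=255):
    """Keep only the pixels whose estimated depth falls between near and far;
    paint everything else a flat background color. `image` is an H x W x 3
    array, `depths` an H x W array in the same (assumed) units as near/far."""
    mask = (depths >= near) & (depths <= far)
    cutout = np.full_like(image, background)
    cutout[mask] = image[mask]
    return cutout, mask

# Example: pull out whatever sits between 1 and 2 meters and drop the rest.
# subject, subject_mask = isolate_by_depth(photo, photo_depths, 1000, 2000)
```

Refocusing works the same way in spirit: blur each pixel by an amount tied to how far its depth sits from the chosen focal plane.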

Fife’s team has built a prototype for specialized testing but is still perfecting the design of the microlenses. He predicts that a 3-D camera could hit stores by 2010.