This year, for the first time, a camera has been awarded PopSci's Innovation of the Year. The Lytro Light-Field Camera, a $400 gadget that allows photographers to re-focus pictures after they're taken, is the product of a decade of work from Ren Ng. As a computer-science grad student at Stanford University, Ng saw potential for a consumer camera in a light-field setup that at the time required a room-filling array of lenses--itself the product of a century of research into the physics of light. About 10 years later, his company has introduced a personal shooter that could be the biggest change in photography since the digital-image sensor.
Corinne Iozzio (PopSci): Your first exposure to a multi-lens setup wasn't until you were a graduate student at Stanford. Before then, you seemed set on a straight academic path. What made you change gears into creating a consumer product?
Ren Ng: The first light-field camera array I saw at Stanford had a bunch of applications, like special effects of the kind you see in The Matrix, where you spin the camera around a subject frozen in motion. It took up an entire room. Looking at that, I realized I was more passionate about a camera for a person than about research. I said, "I think this should be done in the body of a single camera rather than a room full of cameras."
PS: That wasn't as simple as just taking that setup and making it smaller, right?
RN: We had to spend a lot of time in the library and dedicate a lot of pencil-and-paper time. I worked with a lot of professors, including one who taught a course called The Physics of Photography. We asked ourselves, "What modifications can you make to a camera sensor to capture the light-field data?" Eventually the path we hit on was to use micro-optics layered in front of the sensor, and not touch the sensor itself. I started working with the professors in the computer-graphics lab to build different prototypes of a camera and to make software that would work with the hardware.
PS: That's it, you just dove in and started building your first prototype?
RN: Well, first we made a computer simulator that would track the pixel values that would come off the sensor. In CG we'd have a 3-D model of a dragon statue; we'd replicate the path of a single ray of light extending away from the simulated camera, and that would replicate the data you'd get from a real-life capture. Once that software modeling was done, the theoretical side was put together.
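The kind of simulator Ng describes can be approximated with a toy forward renderer: for each sensor pixel, trace rays through a thin-lens model of the camera and average the scene radiance they carry. The 1-D scene pattern and the geometry below are illustrative assumptions, not Lytro's actual code:

```python
import numpy as np

def scene_radiance(x):
    """Radiance of a toy 1-D scene plane at lateral position x
    (a striped pattern standing in for the 3-D dragon model)."""
    return 0.5 + 0.5 * np.sign(np.sin(8 * x))

def render_pixel(sensor_x, focal_dist, scene_dist, aperture=1.0, n_rays=64):
    """Average radiance over rays from one sensor point through the lens.

    Thin-lens toy geometry: every ray leaving this sensor position
    converges on one point of the in-focus plane at focal_dist; where
    each ray crosses the scene plane at scene_dist determines the
    radiance it contributes. When scene_dist == focal_dist all rays
    agree (sharp); otherwise they spread and average (blur).
    """
    a = np.linspace(-aperture / 2, aperture / 2, n_rays)  # lens samples
    focus_pt = -sensor_x * focal_dist                     # imaged point
    hit = a + (focus_pt - a) * (scene_dist / focal_dist)  # scene crossings
    return scene_radiance(hit).mean()
```

Running `render_pixel` with the scene plane at the focal distance reproduces the scene value exactly; moving the plane away blends neighboring stripes, which is exactly the behavior a refocusing algorithm later has to undo.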
PS: Tell me about the first prototype you built.
RN: Mark Horowitz and I built it onto an optical bench in the lab. We spent eight hours putting this optical light path together. We clicked the shutter for the first time late on a Friday night. One of the first pictures we took through the optical path we'd created (primary lens, micro-lens array, image sensor, processor) was of me on the other side of the bench with a regular camera taking a picture of Mark taking a picture of me. Within the modeling software, we could focus from the lens, then back to my eye. That was a really big step.
Still, my goal was to build a light-field camera we could carry around. So I wrote up a grant and got money to buy the parts and start a supply chain. It went smoothly for about nine months, and in 2004 in Berkeley, I had gotten all the parts together. I glued them together in an apartment, breathing all those epoxy fumes. And finally I screwed the whole thing together.
PS: That couldn't have been easy--breathing in all that epoxy.
RN: Yeah. The separation of the components is precision stuff--a half-millimeter is critical. Bring the pieces too close and you might cause the glass to collide with the sensor. Remember, I was doing all this on my kitchen table. Finally I finished it and went to bed. I got up and took a picture of a few mugs and some Spanish corn from the market and dropped it into the simulation software with the light-field picture data. I could refocus from one mug to the next on the kitchen table. I took the camera down to Stanford one evening soon after and took a picture of all of my friends in a row at this fondue party, and that's the picture that went into the technical report we wrote about the camera.
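The refocusing Ng describes--pulling focus from one mug to the next after the shot--is conventionally done by shift-and-add over the sub-aperture views recorded under the micro-lenses. A minimal sketch, with the 4-D array layout as an assumption:

```python
import numpy as np

def refocus(subapertures, alpha):
    """Synthetic refocusing by shift-and-add over sub-aperture views.

    subapertures: 4-D array indexed [u, v, y, x] -- one image per
    micro-lens viewpoint (a toy stand-in for real light-field data).
    alpha: relative focal depth; shifting each view in proportion to
    its (u, v) offset before averaging brings a chosen depth plane
    into alignment (sharp) while other depths smear (blur).
    """
    U, V, H, W = subapertures.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(subapertures[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

With `alpha = 0` this reduces to a plain average of the views (focus at the plane the micro-lenses were aligned to); sweeping `alpha` sweeps the focal plane through the scene, which is all the "click to refocus" interaction does.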
PS: So you were off and running at that point?
RN: I really needed a big push to get the company started. Minu Kumar, who had been my roommate at Stanford, was in the office right next to mine in the Computer Science building. He kept cajoling me to get the company started. He walked me through the process of getting a company started and helped write a business plan. Last year, we raised enough to commit ourselves to the consumer-electronics plan. In May, Mark Horowitz also joined our investment group and gave us a big boost--to $15 million. The company has grown from 4 to 60 people. We're focused specifically on the consumer camera market.
PS: The camera space is so crowded, how can a small startup hope to compete?
RN: Light-field photography is a transformational technology that needs a transformational product to introduce it. For the first time, we have a light-field camera that's going to be for everyone--not something in a huge room in a research facility.
Yes, we are a producer of cameras, but we understand that at the end of the day, you have to make photos in software. A lot of companies focus on the camera side, and a lot are on the software side. There's a chasm between the two. That damages the ideal customer experience. People don't realize how much is lost between software and device. We make both.
PS: Do you think the consumers really know what they're losing in the device/software divide?
RN: I think it's really a big transition. Camera 1.0 was about film. Then there was the transition to digital (Camera 2.0), which took a long time, maybe 15 years from market introduction. We're working on Camera 3.0, which is about an entirely new kind of data. The data with 1.0 and 2.0 was the same--a single flat image. Camera 3.0 is about a new type of data that's more powerful, one that can capture all these missing dimensions of information. Eventually, the entire market will look like the light-field camera. Light-field photography will be a large ecosystem that touches the consumer day-to-day in lots of different ways.
PS: So you'd consider selling the light-field technology to other camera makers for them to adopt into their own lines?
RN: Never say never. For right now our strategy is to bring the first consumer light-field camera to market. We really think there's a lot of broad-based appeal for new technology in cameras. We're not designing for professional photographers, but I think they're going to love this. They're going to add it to their bag of tricks to create an entirely new type of picture that they can't get any other way.
PS: Funny you bring up professional photographers, because I'm sure you know that they may be inclined to dislike a product that could make some of their expertise obsolete. How would you respond to that?
RN: I think that a professional photographer's value is not about being able to focus the camera. Being able to take a really compelling picture is really a multi-faceted thing--from composition to lighting, to working with subjects.
The light-field camera provides new artistic possibilities and creative avenues. You can compose in 3-D. For example, there's a picture on our Web site, a Richard Koci Hernandez picture of pink flowers; the person in the back is really out of focus and you can't see what she's doing, until you click to pull the background into focus. You, the viewer, engage with the picture and can discover something new. This is a new challenge compositionally.
PS: It's the still-camera equivalent of focus-shift in a movie.
PS: You've reached your stated goal and made a light-field camera you can hold in your hand. But its final images don't rival other still cameras in size and resolution. Is that a concern for Lytro at all?
RN: Today, we've reached a point that is so centered on megapixels that the numbers don't make sense anymore. In light-field, we're focused on megarays of light. We can take a piece of silicon from a manufacturer and turn it into hundreds of megarays that make up the final images. For consumers, their pictures will become more immersive, with greater depth, high dynamic range, and better color. The megapixel marketing and mythology is so dysfunctional; it clouds the most important issues to photographers. At Lytro we focus on one question: what is the quality of the experience as a whole?
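The megapixel-vs-megaray distinction can be made concrete with toy arithmetic (all numbers below are hypothetical, not Lytro's actual specs): every raw sensor pixel records one ray, but the rays under each micro-lens are directional samples of the same scene point, so a single refocused view has fewer spatial pixels than the raw sensor count suggests.

```python
# Toy arithmetic with hypothetical numbers: a light-field sensor
# trades spatial resolution for directional (depth) information.
raw_sensor_pixels = 10_000_000    # a 10 MP sensor captures ~10 "megarays"
pixels_per_microlens = 10 * 10    # assumed 10x10 pixel patch per micro-lens
spatial_pixels = raw_sensor_pixels // pixels_per_microlens
print(spatial_pixels)             # spatial samples in one refocused view
```

The point of the "megarays" framing is that the traded-away pixels are not lost; they become the directional data that makes after-the-fact refocusing possible.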
I am very excited to get my hands on this. I met Ren Ng at a photo trade show a few years ago, and he is one of the smartest and most personable people you will ever meet.
The Lytro truly is revolutionary. I know that the scientists I work with at the New York Botanical Garden (digitalphotorepro.blogspot.com) can't wait to try out this camera for macro photography of plant structures.
I was never very interested in photography. I never liked having to carry a separate, bulky device on the off chance I saw something that I wanted to remember. That was before the smartphone revolution. Now I have a nice 8-megapixel point-and-shoot that goes with me everywhere, and I think the quality from my phone is OK--easily enough for my occasional needs. This new technology really sparks my interest. Pictures have always been boring and stagnant to me. Being able to shift focus to anything captured in the frame brings a lot more life and interactivity to pictures, making them all the more memorable. I hope one day it is refined enough to be affordable to everyone and possibly included in a smartphone. I still hate carrying additional devices.
I might be interested if the software lets me adjust the apparent depth of field, then save a static image file for printing. I don't want to view my photos on screen only.
I agree with fungus33. I am hopeful that there is something like the Photo Merge function in Adobe Photoshop that allows me to shoot at f2.8 but have infinite depth of field. The Lytro is obviously geared toward consumers at first. I'm sure that as time goes on there will be all kind of new creative applications built around the technology.
I took a picture with a long depth of field--it was clear both close and far. Then I split the picture in half, blurred the left half, and rejoined it. I split it again, blurred the right half, and rejoined it. I did this on my computer, and it looked good. Now just put this in the firmware and you have the above camera.
You know where I see this being big? If it's combined with eye-tracking software. Auto-refocusing to wherever you're looking on a screen will seriously enhance the 3D illusion. Imagine this with actual 3D screens...the illusion would be perfect. You'd get depth info from stereoscopy as well as focus considerations. I can't imagine any non-volumetric display seeming more 3D than that.
Even in 2D, an image that auto-refocuses based on eye tracking would be amazing and extremely natural and intuitive to work with. I want it.
We're seeing some truly incredible breakthroughs in camera technology.
We're in another golden age of photography. Just look at what camera technology has become:
From the iPhone 4S's big push advancing optics and HD video on mobile devices...
...mirrorless cameras hitting the market...
...to last week's announcement of the new Nikon D4 (a great DSLR for HD video), which can be controlled remotely via an iPad using the WT-5 wireless dongle.
Could two of the Lytro cameras be paired at interocular distance to produce 3-D?
Wouldn't that solve many of the issues with current 3D cameras and movies, in terms of both capturing and displaying the imagery?