Last year, we reported on the Adobe light-field camera, a prototype with 19 lenses that captures 19 versions of the same image at different focal lengths. The accompanying software lets the user choose which parts of the resulting photograph should be in focus, producing a virtually three-dimensional image. We also briefly mentioned a Stanford University project experimenting with a multi-lensed device of its own.
The Stanford camera goes a few steps further: instead of mounting the many lenses on the main lens assembly, as the Adobe model does, it miniaturizes them and attaches them directly to the image sensor. With this arrangement the traditional main lens no longer needs to be of high quality, since it now serves only as a gateway for the lenses on the sensor. Together, the 12,616 lenses on the chip make a powerful tool for three-dimensional imaging and modeling. The researchers see robotics as the ideal application, since the system could give machines better depth perception than humans, letting them perform delicate tasks. Other potential uses include facial recognition for security applications and three-dimensional biological imaging.
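The depth perception mentioned above comes from the fact that each micro-lens sees the scene from a slightly different viewpoint, so nearby objects shift more between views than distant ones. The toy sketch below illustrates that principle with simple block matching between two synthetic views; it is not the Stanford pipeline, and the patch size, baseline, and focal length are invented values for illustration only.

```python
import numpy as np

def disparity(left, right, patch, x, max_shift):
    """Find the horizontal shift that best matches a patch of `left`
    (centred at column x) inside `right`, by sum of squared differences."""
    template = left[:, x - patch:x + patch + 1]
    best, best_err = 0, np.inf
    for d in range(max_shift + 1):
        candidate = right[:, x - patch + d:x + patch + 1 + d]
        err = np.sum((template - candidate) ** 2)
        if err < best_err:
            best, best_err = d, err
    return best

# Synthetic pair: the "right" view is the "left" view shifted 3 pixels,
# standing in for two neighbouring micro-lens images.
rng = np.random.default_rng(0)
left = rng.random((8, 64))
right = np.roll(left, 3, axis=1)

d = disparity(left, right, patch=4, x=32, max_shift=10)

# Depth is inversely proportional to disparity:
# depth = focal_length * baseline / disparity (constants invented here;
# a disparity of 0 would mean the feature is effectively at infinity).
focal_length, baseline = 50.0, 0.5
depth = focal_length * baseline / d
print(d, depth)
```

A real system repeats this search for every pixel and every pair of neighbouring views, averaging thousands of such estimates into a dense depth map, which is what makes the 12,616-lens chip so much more robust than a two-camera stereo rig.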