It might seem like the problem with cell phone cameras stems from packing an imaging apparatus into such a small space. But California startup Pelican Imaging wants to up your cell phone's image quality by packing 25 cameras into that same space. The company claims that its camera-array technology not only produces better image and video quality from a slimmer overall device, but also does some things that a conventional single-lens camera cannot.
Pelican's tech works by essentially taking 25 different photos and stitching those small photos into one larger image. Without getting to deep into the details, this improves the image quality by capturing more light on more sensors rather than spreading the limited light that those tiny cell cam lenses collect over a single high-megapixel sensor (this is one reason why your cell camera images are so dark). Those 25 images all overlap, so the software has a lot of pixel data to work with when it stitches your image together.
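Pelican hasn't published its merging algorithm, but the noise benefit of combining overlapping captures is easy to illustrate. Here's a minimal sketch with a flat toy scene and made-up noise numbers (not Pelican's pipeline): averaging 25 aligned noisy captures cuts random sensor noise by roughly the square root of 25.

```python
import random

def capture(scene, noise_sigma, rng):
    """Simulate one noisy exposure of a 1-D toy 'scene' (a list of pixel values)."""
    return [p + rng.gauss(0, noise_sigma) for p in scene]

def merge(captures):
    """Average N aligned captures pixel-by-pixel, as the array software might."""
    n = len(captures)
    return [sum(px) / n for px in zip(*captures)]

def rms_error(image, scene):
    """Root-mean-square difference between an image and the true scene."""
    return (sum((a - b) ** 2 for a, b in zip(image, scene)) / len(scene)) ** 0.5

rng = random.Random(42)
scene = [100.0] * 1000                 # flat gray test scene
single = capture(scene, 10.0, rng)     # one lens, one exposure
merged = merge([capture(scene, 10.0, rng) for _ in range(25)])

# rms_error(single, scene) comes out near 10; rms_error(merged, scene)
# near 2 -- about a sqrt(25) = 5x noise reduction.
```

The same square-root-of-N math is why the overlap between the 25 sub-images matters: every extra aligned sample of a pixel chips away at the random noise.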
But that's just the beginning of the potential for this innovative little concept; its so-called "light-field photography" also lets you manipulate an image before and after you take it. And we're not talking about adding cheap Hipstamatic filters to an image; you can adjust the focus after the fact to sharpen your subject or to blur the background.
The multiple lenses also add a degree of 3-D depth, allowing for whole new applications. That doesn't mean it simply lends itself to deeper images and video, but also to features like gesture control. So that front-facing camera in your tablet or phone could at some point in the future allow for a touchless interface.
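The depth part comes from parallax: a feature shifts slightly between neighboring lenses in the array, and nearer objects shift more. A sketch of the standard stereo relation (the focal length, baseline, and disparity values here are invented for illustration):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo relation: depth = focal length * baseline / disparity.
    baseline_mm is the spacing between two lenses in the array; disparity_px
    is how far a feature shifts between their two sub-images."""
    return focal_px * baseline_mm / disparity_px

# Invented numbers: a fingertip shifting 10 px between two lenses 5 mm
# apart, with a 700 px focal length, sits 700 * 5 / 10 = 350 mm away.
fingertip_mm = depth_from_disparity(700, 5.0, 10)
# Nearer objects shift more: doubling the disparity halves the depth.
closer_mm = depth_from_disparity(700, 5.0, 20)
```

Tracking how that disparity changes frame to frame is, in essence, how a camera array could watch a hand move toward or across the screen for gesture control.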
On top of all that, the Pelican assembly is actually smaller than the conventional single-lens camera assemblies in many cell phones. That means Pelican's technology could actually trim some bulk off your next pocketable device. Even if it doesn't, at the very least it could help you fix the focus on all your gritty concert pics.
Now THIS is something worth being excited about. I wonder what the feasibility is of including such an imager in new cameras, alongside their BSI CMOS sensors... Unless of course such a sensor exceeds our current sensors' abilities for shooting excellent pictures. Not having to worry about focus until post-production would be fantastic!
"Without getting to deep into the details"...
Should be, "Without getting *too* deep into the details".
Bushmaster6.8 beat me to it.
Can someone explain how this is better than a larger sensor in general?
How do many smaller sensors resolve more light than a larger sensor with equal pixel density?
Would taking a large mirror, cutting it into smaller pieces, and rearranging them into an array make the smaller mirrors reflect more light than the uncut larger mirror?
All of this is assuming that the ISO sensitivity remains constant.
Seems to work for radio telescopes, at least. = ) Worth a trip to Wikipedia, I think....
But I think it has to do with sampling over a larger surface area than a single lens or focal point would allow.
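One back-of-the-envelope way to look at the light-gathering question (plain geometry, not Pelican's actual figures): total collected light scales with total aperture area, and many small lenses can add up to the same area as one big one while each sits much closer to its sensor.

```python
import math

def aperture_area(radius_mm):
    """Light-collecting area of a circular lens aperture."""
    return math.pi * radius_mm ** 2

# 25 small lenses of 1 mm radius collect as much total light as one
# lens of 5 mm radius (25 * pi * 1^2 == pi * 5^2) -- but each small
# lens needs far less distance to its sensor, keeping the module thin.
array_total = 25 * aperture_area(1.0)
single_big = aperture_area(5.0)
```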
"I wonder what the feasibility is of including such an imager in new cameras, alongside their BSI CMOS sensors... Unless of course such a sensor exceeds our current sensors' abilities for shooting excellent pictures."
I think that's the intention (that these would have a higher resolution).
Why, this is something!
Can't wait till I'm able to get my hands on it.
Do you think they'll release it so we can tweak our phones?
As far as I know, an array of radio telescopes works because the dishes are spread over a very large area, as opposed to a single telescope occupying a smaller space.
This camera promises a smaller overall area: "the Pelican assembly is actually smaller than the conventional single-lens camera assemblies in many cell phones"
Everyone comparing radio telescopes to digital cameras is comparing apples to oranges. Radio-wave imagery uses completely different technology than light-wave imagery, so applying a technique from one to the other is an exercise in futility.
That being said, this idea makes perfect sense. If you capture an image using one big lens, you've limited yourself to the exposure elements of that one picture. If you capture an image using 25 smaller lenses, you have effectively (with the right software) combined 25 different sets of elements into one picture. Genius idea, really. Cheers.
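The article doesn't say the sub-cameras actually use different exposure settings, but a toy sketch of the "different sets of exposure elements" idea is classic HDR-style merging: scale each capture back by its exposure time, drop clipped pixels, and average the rest.

```python
def hdr_merge(exposures, times, clip=255):
    """Merge captures taken at different exposure times into one radiance
    estimate: scale each pixel back by its exposure time, skip clipped
    pixels, and average whatever remains."""
    out = []
    for px in zip(*exposures):
        vals = [v / t for v, t in zip(px, times) if v < clip]
        out.append(sum(vals) / len(vals) if vals else clip / min(times))
    return out

# Toy scene: two true radiances, the second too bright for the longer
# exposure (it clips at 255); the shorter exposure still recovers it.
scene = [100, 300]
times = [1.0, 0.5]
exposures = [[min(round(r * t), 255) for r in scene] for t in times]
merged = hdr_merge(exposures, times)   # recovers [100.0, 300.0]
```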
Radio IS light, the type of 'antenna' is just different between the two. A simple comparison would be that many radio telescopes use parabolas, as do reflecting telescopes (and refracting scopes for that matter).
We know that spectrums of light that we can't see are directly visible to other creatures; a radio antenna is just an artificial eye tuned to the wavelength of light it is designed to detect. Algorithms make the light or radio visible, a translation, an approximation.
There are no doubt a great variety of techniques that are similar between the two, and the two learn from each other all the time.
I imagine an insect's compound eye as a mass of fixed-lens light sensors, with little to no muscular control to effect any kind of focusing (the way the muscles in our iris change the shape of our lens), so images for compound eyes are resolved and reinforced by objects overlapping in the fields of view of successive eyes. Focus is a measure of overlap.
At least that's how I imagine it, dunno if it's true. I think I understand how the crisp focus of an image can be shifted after the fact, but I can't describe it.
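For what it's worth, the after-the-fact refocus can be described as "shift and add": each sub-camera sees the scene displaced in proportion to its offset and the object's depth, so shifting every view back by the disparity of a chosen depth and averaging makes objects at that depth line up sharply while everything else smears into blur. A 1-D toy sketch (all numbers invented, not Pelican's algorithm):

```python
def shift(signal, k):
    """Shift a 1-D signal right by k samples (left if k is negative), zero-padded."""
    out = [0.0] * len(signal)
    for i, v in enumerate(signal):
        if 0 <= i + k < len(signal):
            out[i + k] = v
    return out

def refocus(views, offsets, disparity):
    """Shift each sub-camera view back by (its offset * chosen disparity)
    and average.  Features whose true disparity matches the chosen one
    line up sharply; everything else is smeared -- i.e. blurred."""
    shifted = [shift(v, -off * disparity) for v, off in zip(views, offsets)]
    return [sum(px) / len(views) for px in zip(*shifted)]

# Toy setup: a single bright point seen by 5 cameras at offsets -2..2,
# with a true disparity of 3 px per unit of offset.
point = [0.0] * 19 + [1.0] + [0.0] * 20
offsets = [-2, -1, 0, 1, 2]
views = [shift(point, off * 3) for off in offsets]

in_focus = refocus(views, offsets, 3)   # peak realigned: max stays 1.0
out_focus = refocus(views, offsets, 0)  # peak smeared over 5 positions
```

Choosing a different disparity when merging is exactly "choosing a different focal plane" after the shot, which is why the focus can be moved in post.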
One thing it would mean is an end to waiting for your camera to auto-focus, or risking a blurry image. Instant imaging: focus, shutter speed, and ISO settings would all be handled relatively instantaneously when you trip the shutter button... Sounds to me like a great addition to photography, for everyone.