Google Pixel 2 Camera
It’s one of the best smartphone cameras around, but it has a lot of stiff competition. Stan Horaczek
The camera is one of the most important pieces of any modern smartphone. We use them to shoot photos and videos at important (and sometimes stupid-but-entertaining) moments, and they’re also integral to the burgeoning wave of augmented reality apps that mix the digital and real worlds. Because these imaging devices are so important to users, manufacturers have latched onto them as a point of differentiation to try and make one phone stand out in a sea of otherwise similar devices.
The truth, however, is that pretty much any current-generation smartphone packs a high-quality camera that meets, or probably even exceeds, your overall needs as a typical user. Flagship phones like the iPhone X obviously have an advantage over older and cheaper phones, but the floor on overall camera quality is remarkably high. Here, I’ll break down a few of the technical terms you’ll hear thrown around in advertisements to help sort the bells and whistles from the meaningful features.
Megapixels
Take a look at the current lineup of flagship smartphones and you’ll find that most of them have stuck at 12-megapixel resolution for a few generations now. That means there are 12 million little light sensors collecting the image data that’s translated into a finished photo. The iPhone X, iPhone 8 Plus, Google Pixel, Samsung Note 8, and Samsung Galaxy S9 all boast 12 million pixels of effective resolution. There are some outliers, like Sony’s flagship XZ2, which has 19 megapixels, but the benefits of the extra data are limited.
Smartphone cameras have small sensors, at least when you compare them to the massive imaging chips inside pro-grade cameras like DSLRs. Cramming a ton of pixels into a smaller space requires that manufacturers use smaller pixels, which typically results in more digital noise: ugly, colorful speckles in the image.
While it may sound enticing to have 50 megapixels with which to work, even 12 is technically more than you probably need. Filling a 4K display pixel-for-pixel requires roughly 8 megapixels worth of image data. A full-resolution Facebook image needs just 4 megapixels. And you only need about 2 megapixels to get a full-resolution Instagram photo. Even if you wanted to make a magazine-quality print, you can get roughly 9” x 14” from a 12-megapixel file.
In fact, more megapixels typically means larger file sizes, which take the camera longer to process, slow down your burst rate, raise upload times, and tax your storage.
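A back-of-the-envelope sketch of that resolution math in Python (the 4032 x 3024 sensor dimensions and the 300-dpi print standard are typical assumed values, not any particular phone’s spec):

```python
# Rough resolution math: how many megapixels various outputs actually need,
# assuming a 4:3 sensor and a 300-dpi "magazine quality" print.

def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count, in millions."""
    return width_px * height_px / 1e6

def print_size_inches(width_px: int, height_px: int, dpi: int = 300):
    """Largest print dimensions (inches) at the given dots per inch."""
    return width_px / dpi, height_px / dpi

print(f"4K display:   {megapixels(3840, 2160):.1f} MP")  # ~8.3 MP
print(f"1080p screen: {megapixels(1920, 1080):.1f} MP")  # ~2.1 MP

# A typical 12 MP phone sensor is around 4032 x 3024 pixels:
w, h = print_size_inches(4032, 3024)
print(f"12 MP print at 300 dpi: {w:.1f} x {h:.1f} inches")
```

Exact print sizes depend on the dpi and aspect ratio you assume, which is why published figures vary.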
Aperture
The opening through which a camera’s lens lets light in is called the aperture, and the bigger it is, the more light can get through at once. It’s a simple concept, but it has a few complex implications, especially when you start getting into the math of it.
We use something called an f-number to represent the relative size of a lens’s aperture. We can’t simply use the diameter of the actual aperture because the same size opening will let in less light through a telephoto lens than it will through a wide-angle lens. The f-number represents the focal length of the lens (which remains constant) divided by the physical diameter of the aperture. It’s complicated, but the takeaway is that the lower the f-number value, the more light the lens lets in.
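That relationship is easy to sketch in a few lines of code (the 4 mm focal length below is a hypothetical value in the range typical of smartphone lenses, not a quoted spec):

```python
# f-number = focal length / aperture diameter, so for a fixed focal
# length, a lower f-number means a physically wider opening.

def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    return focal_length_mm / f_number

# Hypothetical 4 mm smartphone lens:
print(f"f/1.8 opening: {aperture_diameter_mm(4.0, 1.8):.2f} mm")  # 2.22 mm
print(f"f/1.5 opening: {aperture_diameter_mm(4.0, 1.5):.2f} mm")  # 2.67 mm

# Light gathered scales with the aperture's area, i.e. with 1/f_number^2:
def light_ratio(f_slow: float, f_fast: float) -> float:
    return (f_slow / f_fast) ** 2

print(f"f/1.5 vs f/1.8: {light_ratio(1.8, 1.5):.2f}x the light")  # 1.44x
```

That 44 percent difference sounds big, but as the side-by-side photos below suggest, it’s subtle in practice.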
So, when LG released the V30, it had the widest aperture on the market, with an f/1.7 rating. Before that, lenses hovered around f/1.8. Now, the Samsung Galaxy S9 has set a new mark with an f/1.5 lens. Here are two photos, one taken at f/1.4 and one taken at f/1.8 (on a DSLR). The difference is barely noticeable.
The Galaxy S9 does introduce an interesting variable in that you can choose between two apertures, f/2.4 and f/1.5. The only practical reason you’d want to opt for f/2.4 is if you’re shooting in a very bright setting and you want to use a longer shutter speed to add some motion blur. But again, if you’re regularly doing that kind of thing, maybe it’s time to get yourself a dedicated camera.
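To put numbers on that tradeoff: at the same shutter speed, f/2.4 admits roughly two and a half times less light than f/1.5, so matching the exposure requires a correspondingly longer shutter time (the 1/2000 s starting point below is an arbitrary example, not a metered value):

```python
# Exposure stays constant if shutter time scales with the square of the
# f-number ratio. Stopping the Galaxy S9 down from f/1.5 to f/2.4:
ratio = (2.4 / 1.5) ** 2           # ~2.56x less light per unit of time
shutter_at_f15 = 1 / 2000          # example shutter speed, in seconds
shutter_at_f24 = shutter_at_f15 * ratio
print(f"{shutter_at_f24:.5f} s")   # 0.00128 s, i.e. roughly 1/780 s
```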
Focusing
Smartphone cameras don’t have room for the dedicated focusing sensors DSLRs use, so they put pixels meant specifically for focusing right on the image sensor. Samsung calls this Dual Pixel, while other manufacturers call it a hybrid sensor.
Compared to the older sensors without these pixels, the new models do focus a lot faster and more accurately, especially in dark situations. However, almost every smartphone camera—especially at the top end—has this tech and the differences in real-world focusing speed are small, if any.
These hybrid sensors do play other roles, though. Google, for instance, uses the dual pixels to help figure out the distance to objects in its field of view to help apply fake blur effects.
Portrait mode
Fake blur and lighting effects are the biggest things in smartphone photography at the moment. They’re meant to emulate the look of a “professional” camera with a bigger sensor and, in some cases, they actually succeed. I’ve used portrait mode on all the major flagship phones, and they’re all pretty much fine.
Right now, manufacturers are trying to sort out the best way to add this fake blur. The Samsung Galaxy S9+ uses both of its rear-facing cameras to help figure out the distance to an object, while the Google Pixel uses its dual-pixel focusing tech (described above) to do it with a single camera.
In the end, however, portrait mode isn’t quite there yet. The edges of your sharp objects often look a little scraggly, and the portrait mode on the iPhones is decidedly more difficult to use, especially in the dark.
It’s also worth noting that all those wonderful portraits that show up in smartphone commercials start off with beautiful, controlled light, which is responsible for most of the impressive image quality. Here’s an analysis of an earlier Apple commercial for Portrait Mode.
Overall image quality
Take a look at this example diagram from Engadget’s most recent test of high-end smartphone cameras. Can you tell the difference between them? Sure, there are little discrepancies, but these are unedited images. As smartphone cameras get more serious, it’s best to start thinking about the photos that come straight out of the camera as raw material. The phone is doing a lot of work to make the pictures look consistent and technically correct, which gives them a specific “look” you’ll start to recognize if you keep an eye out for it. Editing your photos, however, lets you control the aesthetic and helps the photos feel more finished.
I recommend trying an app like VSCO or Filmborn by Mastin Labs to do a quick edit before sharing your photos. Lightroom Mobile from Adobe is another powerful editing tool that lets you adjust things like exposure, contrast, and color balance to make the photo look finished.
Plus, if you’re primarily using things like Snapchat or Instagram Stories, raw image quality starts to matter less once you slap a bunch of effects over it before sharing.
Flash
Smartphone camera flashes are universally bad, due in large part to the fact that they don’t actually flash. A dedicated camera strobe uses a flash tube that emits a bright burst of clean, white light for a very small fraction of a second. The “flash” on almost every modern smartphone is little more than a tiny LED flashlight, which has a very short range and plays havoc with people’s skin tones.
If you’re comparing smartphone cameras based on the quality of their flash photography, then you’re probably better off getting a dedicated camera. However, Apple has done a great job balancing the light from the flash with the ambient light from the scene. This helps negate the effect of a bright person in a black void and gives Apple a slight upper hand in that arena.
Video
4K video is standard at the moment, but shooting everything in 4K may not even be your best bet when it comes to balancing quality, performance, and storage. 1080p video still fills most screens with ease, and some smartphone cameras let you capture slow-motion footage in HD if you don’t need to go all the way up to 4K. The Samsung Galaxy S9, for example, shoots at a super-slow 960 fps.
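The raw throughput numbers make that tradeoff concrete (30 fps is a common default here, and treating “HD” slow motion as 720p is an assumption for illustration, not a quoted spec):

```python
# Raw pixels per second the camera pipeline has to move at each setting.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    return width * height * fps

uhd = pixels_per_second(3840, 2160, 30)   # 4K at 30 fps
fhd = pixels_per_second(1920, 1080, 30)   # 1080p at 30 fps
print(uhd / fhd)   # 4.0 -- four times the raw data of 1080p

# 960 fps slow motion, even at 720p, outpaces 4K/30 in raw throughput:
slomo = pixels_per_second(1280, 720, 960)
print(f"{slomo / uhd:.2f}x")   # ~3.56x the data rate of 4K/30
```

Compressed file sizes depend on the codec and bitrate, but the raw pixel counts explain why 4K taxes processing and storage the way it does.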
Work on your skills
In the end, what we have is a relatively even playing field, which means that your skill when taking photos is what will make the difference. Learning to recognize good light (windows good, overhead fluorescents bad) and to compose a shot that emphasizes the most important part of your scene will do more for your pictures than any spec. Adding megapixels and other bells and whistles may help you a little, but it’s otherwise like using a wind tunnel to tune your bicycle for the big race, then eating a bag of Cheetos as you ride. The tech can only help so much.