This year has been one of refinement for flagship smartphones. In August, Samsung announced its Galaxy Note 9 and, while it’s powerful and has a few interesting new hardware tweaks—including a liquid-cooled processor—it didn’t exactly revolutionize the Galaxy universe. Then Apple announced the new iPhone XS models, which provided a similar refinement to the iPhone X that came before them.
Now, we’re nearing the last stop on the 2018 flagship smartphone train with Google’s Pixel 3 and Pixel 3 XL. And like its competitors’ flagships, the Pixel 3 doesn’t disrupt that trend of refinement. There are changes and new features, of course, but if you’re expecting a profound smartphone revolution, better luck in 2019. What we’re left with, however, is an excellent offering from Google and one of the best Android phones around—mostly thanks to its impressive camera.
What is it?
The Pixel 3 and the Pixel 3 XL are Google’s own hardware babies. They follow the Pixel 2 and Pixel 2 XL, which sprang forth almost exactly a year ago.
The form factors haven’t changed much, but the screen sizes have shifted. The Pixel 3 has a 5.5-inch OLED screen, while the Pixel 3 XL extends its display all the way to the top corners of the device, pushing it to 6.3 inches while cutting out room for the front-facing cameras in a notch.
Set the Pixel 3 XL down next to an iPhone XS Max and it’s easy to get them confused unless you notice the microphone slot at the bottom of the Pixel. In other words: 2018 phones have a “look” and the Pixel adheres to it rigorously.
Is it really “the best camera ever?”
Let’s get this out of the way first: I really do think the Pixel 3’s camera is the best I’ve used on a smartphone. It won’t replace a DSLR for anyone who knows how to use one, but the Pixel 3 is an excellent all-around imaging device that genuinely impressed me at times. Sure, it frustrated me at others, but the AI and computational processing that cranks away every time you take a photo feels like the future of cameras, at least outside of the hardcore enthusiast market—even if all that AI sometimes gets in your way by “fixing” something you screwed up on purpose.
Google has continued to push the concept of computational photography. Instead of trying to squeeze every last possible bit of quality out of tiny camera modules using traditional methods, Google is using that single rear-facing camera to capture as much data as it can and then crunching it all together to make an image that looks good, even under bad circumstances.
Photo features
Last year, Google introduced its Pixel Visual Core, a dedicated chip for processing image data. In the Pixel 2, every time you pushed the button to take a picture, the camera would snap 10 individual photos and then mash the information from all of them together into a single image. It underexposed some of them to keep the highlights from blowing out, while it overexposed others to bring out details in the shadows. It compared the photos to look for digital noise that shouldn’t be in the photo. Not only was it looking for mistakes you made, but it was also trying to compensate for the physical limits of digital camera hardware in general. Google calls it HDR+.
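To make that merging idea concrete, here’s a minimal sketch of a multi-frame exposure merge in Python. It’s my own simplification, not Google’s actual HDR+ pipeline: the weighting function and the assumption that the burst frames are already aligned are both mine.

```python
import numpy as np

def merge_burst(frames, exposures):
    """Toy HDR-style merge of a burst into one image.

    frames:    list of HxWx3 float arrays in [0, 1], already aligned
    exposures: relative exposure of each frame (1.0 = base exposure)
    """
    num = np.zeros_like(frames[0])
    den = np.zeros(frames[0].shape[:2] + (1,))
    for frame, exposure in zip(frames, exposures):
        # Trust pixels near the middle of the tonal range; distrust
        # blown highlights and noisy, crushed shadows.
        luma = frame.mean(axis=2, keepdims=True)
        weight = np.exp(-((luma - 0.5) ** 2) / 0.08)
        # Normalize each frame to a common exposure before merging, so
        # the dark and bright captures describe the same scene values.
        num += weight * (frame / exposure)
        den += weight
    return np.clip(num / np.maximum(den, 1e-6), 0.0, 1.0)
```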
High dynamic range images sometimes look unnatural and cartoony (I find the Samsung Galaxy cameras the worst offenders in the smartphone world, but that’s also due in part to their AMOLED screens). Apple has started doing something similar with photos in the iPhone XS line, which also now has a dedicated image processing chip. On the whole, however, I tend to like the look of the Pixel 3’s images better because they seem slightly more natural right out of the camera.
The Pixel 3 also adds a new low-light shooting feature called Night Sight, which goes beyond typical HDR to take even more frames with every shutter press. Night Sight captures up to 15 images, some of which are long exposures up to 1/3 of a second to let light soak into the sensor. It’s nearly impossible to hold a camera steady for that long (human hands start showing visible shake at shutter speeds slower than about 1/30th of a second), so the Pixel uses its internal motion sensors to track how your hands shake and correct for it.
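Here’s a rough idea of the align-and-average part in Python, again a toy version of mine rather than Google’s real Night Sight code. It estimates a single global shift per frame with phase correlation, a stand-in for the phone’s motion-sensor data, then averages the aligned frames, which is where most of the noise reduction comes from.

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate a global (dy, dx) translation between two frames via
    phase correlation. A stand-in for gyroscope-based tracking."""
    f1 = np.fft.fft2(ref.mean(axis=2))
    f2 = np.fft.fft2(img.mean(axis=2))
    cross = f1 * np.conj(f2)
    corr = np.abs(np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Shifts past the halfway point wrap around; treat them as negative.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def night_sight_stack(frames):
    """Align every frame to the first one, then average the stack.
    Averaging N frames cuts random sensor noise by roughly sqrt(N)."""
    base = frames[0]
    stacked = np.zeros_like(base, dtype=np.float64)
    for frame in frames:
        shift = estimate_shift(base, frame)
        stacked += np.roll(frame, shift, axis=(0, 1))
    return stacked / len(frames)
```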
Night Sight works, to an extent. One thing that typically suffers in dark environments is color performance, because extra digital noise hampers tone reproduction. Cameras have made big strides in this area in recent years—have you noticed how much better low-light scenes look in movies and TV shows lately?—thanks to improved camera tech, and Google is doing a somewhat impressive job pushing further with computational photography.
Google outright said this is a solution that could mean you never have to use your smartphone “flash” again, which is good because the camera’s included LED light is, well, bad, like every other smartphone flash that has ever existed.
After the shutter fires
Google’s helpful AI doesn’t quit working once you’ve taken the picture, either. The Top Shot feature kicks in when it detects a face in the scene. If you have the “motion” feature enabled (it’s similar to the iPhone’s Live Photos, which capture short videos along with your still images), the camera will analyze the other photos it took and try to find one where the person is smiling and not blinking. It will then suggest that you replace the shot you took with the better one.
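The selection logic presumably boils down to something like the toy sketch below. To be clear, this is guesswork on my part: the smile_prob and blink_prob attributes are hypothetical stand-ins for whatever Google’s face models actually output.

```python
def score_frame(faces):
    """Score one burst frame by how 'keepable' its faces look.
    faces: list of dicts with hypothetical smile_prob/blink_prob keys."""
    if not faces:
        return 0.0
    return sum(f["smile_prob"] * (1.0 - f["blink_prob"]) for f in faces) / len(faces)

def top_shot(burst):
    """burst: list of (image, faces) pairs from a motion capture.
    Returns the frame the 'AI' would suggest keeping."""
    return max(burst, key=lambda pair: score_frame(pair[1]))[0]
```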
I got mixed results from this, as well as some mixed feelings. One of the fundamental parts of photography is deciding which shots you show to the world as your finished work. We used to do it by making a print of a whole sheet of negatives (called a contact sheet) and then selecting the frames we thought looked best. Now, the AI robots are helping in that process, and it can be hard to argue with them. After all, the AI is comparing your photo against nearly a hundred million reference photos and telling you which one is best, so who are you to disagree?
Top Shot ultimately seems like an extension of what Google has been doing in its Google Photos app for years: it tries to find your “best” photos and bring them to the surface. The idea even shows up in Google’s other products, like the Home Hub, which acts as a digital photo frame but tries to display only the highest-quality images from your library, as determined by an algorithm.
While these are both examples of AI in action, there’s an important distinction: one happens while you take the picture, and the other happens afterward. Both, though, are guiding our perceptions of what it means to take a “good” picture.
Portrait mode
Blurry backgrounds in pictures of people are hot right now thanks to the proliferation of portrait mode, and Google has equipped the Pixel 3 with the next generation of that tech.
Unlike the iPhone, the Pixel doesn’t have a telephoto lens to use for its portrait mode. Typically, with a DSLR, you’d pick a telephoto lens for a portrait because it won’t distort a person’s face the way a wide-angle lens would. The iPhone and other smartphone cameras have a dedicated telephoto module for this purpose, but those cameras come with drawbacks. The sensors are typically smaller, which means noisier images, especially in low light. And the zoomed-in field of view makes it harder to take a photo without motion blur.
Google, however, stuck with a single main camera for the Pixel 3. It uses what the company calls dual-pixel technology (each pixel on the sensor is split into two photodiodes, and the slight difference between their two views yields depth information) to estimate distance with just one camera module. On the whole, I found Portrait Mode on the Pixel more subtle than it is on other phones like the iPhone—and I prefer that. Right now, I see way too many overdone Portrait Mode images that look like a blurry mess and, while you can still abuse the privilege on the Pixel 3, it’s harder to do and the results look more natural.
Even when you adjust the amount of blur on the Pixel 3—a new feature in this model—the difference between the maximum and minimum effect is still more subtle than the smeary backgrounds other phones produce.
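If you’re curious what an adjustable, depth-driven blur amounts to, here’s a bare-bones sketch illustrating the general technique, not Google’s implementation. It takes a depth map (the kind dual-pixel disparity can produce) and blends each pixel toward a blurred copy based on its distance from the subject, with a strength slider like the Pixel 3’s.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, depth, focus_depth, strength=1.0):
    """Toy depth-based background blur.

    image:       HxWx3 float array in [0, 1]
    depth:       HxW depth map, e.g. from dual-pixel disparity
    focus_depth: depth value of the subject to keep sharp
    strength:    0..1 slider, like the Pixel 3's blur adjustment
    """
    # Blur the whole frame once; sigma of 0 on the channel axis keeps
    # colors from bleeding into each other.
    blurred = gaussian_filter(image, sigma=(8, 8, 0))
    # Pixels farther (in depth) from the subject blend more toward the
    # blurred copy; the slider scales the whole effect.
    mix = np.clip(np.abs(depth - focus_depth)[..., None], 0.0, 1.0) * strength
    return (1.0 - mix) * image + mix * blurred
```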
Zooming
The last bit of AI magic Google’s algorithm gnomes perform inside the Pixel 3 is Super Res Zoom, which lets you make it look like you were closer to your subject than your wide-angle lens would normally allow.
Interestingly enough, this feature actually relies on your shaky hands to work. When you zoom in and take a picture, the camera takes several photos, each of which has a slightly different view because of the small shakes in your hands. The camera then compares that data and uses an algorithm to fill in more details about the scene than you’d get from a single shot.
The fact that it needs that camera shake to work is fascinating. If you shoot zoomed-in shots from a tripod, the Pixel 3 actually moves the lens’s optical image stabilization elements to capture slightly different perspectives so it can do its comparison.
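Under the hood, this family of techniques is usually called multi-frame super-resolution. Here’s a crude shift-and-add sketch in Python that shows the principle; Google’s published approach is far more sophisticated, and the details here (nearest-neighbor placement, wrap-around indexing) are simplifications of mine.

```python
import numpy as np

def super_res_zoom(frames, shifts, factor=2):
    """Toy shift-and-add super-resolution.

    frames: list of low-res HxW grayscale arrays of the same scene
    shifts: per-frame (dy, dx) offsets in fractions of a pixel,
            e.g. measured hand shake relative to the first frame
    factor: how much to upscale
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each low-res sample onto the high-res grid at the spot
        # its sub-pixel shift says it was actually sampled.
        ys = (np.arange(h)[:, None] * factor + round(dy * factor)) % (h * factor)
        xs = (np.arange(w)[None, :] * factor + round(dx * factor)) % (w * factor)
        acc[ys, xs] += frame
        hits[ys, xs] += 1
    # Cells no frame landed on stay black; a real pipeline interpolates them.
    return np.where(hits > 0, acc / np.maximum(hits, 1), 0.0)
```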
Digital zoom has a bad name in the camera world, and for good reason. It typically results in a degraded image with more noise and visible artifacts that make the photo look jaggy. Those things are true here, but Google has done an above-average job of smoothing that over.
I know some camera enthusiasts who are still bummed about the lack of a true telephoto lens option, but considering the downsides that come with one (smaller sensor, noisier images), I’m OK with digital zoom in this case. It still isn’t perfect, but if you’re posting photos on Instagram, you’d have to zoom really far before anyone would even start to notice.
Front-facing cameras
While the rear-facing camera is the most interesting part of the device for taking pictures, the wide-angle front-facing camera is a feature I found myself appreciating more than I expected. In addition to the typical front-facing camera, the wide lens offers a much bigger field of view. So, if you want to make a video of yourself talking while capturing other things happening in the background, this is a great way to do it.
What about the rest of the phone?
In my experience, the assessment that this phone is mostly a camera rang fairly true. The experience of using the Pixel 3 is a lot like using the Pixel 2. It now charges wirelessly (even through a case!), and the screen is noticeably different, but it ultimately works like a flagship phone.
Who should buy it?
At this point, I still think it’s kind of crazy to buy a smartphone simply because it has the “best” camera. If you can’t take a good picture with a modern smartphone camera, that’s a matter of skill and understanding how pictures work more than it is about the hardware. But I like the Pixel 3 a lot. In fact, when it’s time to upgrade, I’ll have to think hard about whether this is the device that finally makes me jump ship from Apple for my personal phone. And right now, the Pixel 3 is the best Android phone around. At least until the next one.