Your smartphone is powered by a chip. The new iPhones, including the upcoming iPhone X, use one called the A11 Bionic, and other handsets, like the Pixel 2, pack a Snapdragon 835. But the chips in modern phones are not homogeneous pieces of silicon; they contain specialized components, or hardware blocks. Because of these multiple elements, processors like these are referred to as a “system on a chip.” One of those blocks is the image signal processor, which takes the data from your camera and turns it into a photograph. Another is the graphics processing unit, or GPU, and it’s responsible for a growing number of your phone’s fanciest features.
The rise of augmented reality and machine learning is bringing the GPU into the spotlight. Don’t confuse it with the CPU, or central processing unit. If the CPU is the boss of your office, the GPU is the accounting department that crunches numbers—and does it quickly.
“The CPU is high-level management control of the phone,” says Kunle Olukotun, a professor of electrical engineering and computer science at Stanford University. “And the GPU is the piece that does the heavy lifting.” And that heavy lifting suits one job especially well: the GPU is “very efficient at doing computation on large arrays of numbers,” he says. Those calculations are crucial for displaying complex graphics on your phone’s screen.
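To make “computation on large arrays of numbers” concrete, here is a toy sketch in Python (not anything Apple or Qualcomm actually ships): brightening an image means applying the same arithmetic to every pixel independently, which is exactly the kind of work a GPU can spread across thousands of pixels at once.

```python
def adjust_brightness(pixels, factor):
    """Multiply every pixel value by the same factor, capping at 255.
    Each output depends only on one input, so a GPU can compute all
    of them simultaneously; a CPU works through the list a few at a time."""
    return [min(255, int(p * factor)) for p in pixels]

# A short grayscale scanline; a real photo would have millions of pixels.
scanline = [10, 120, 200, 250]
print(adjust_brightness(scanline, 1.5))  # → [15, 180, 255, 255]
```

The point is not the arithmetic, which is trivial, but its shape: no pixel’s result depends on any other pixel’s, so the whole array can be processed in parallel.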
Take Apple’s new A11 Bionic chip, for example. Apple designed the GPU on that chip itself. If you swipe up from the bottom of the screen on an iPhone 8 to open the “control center” view, you’ll see the background blur; it’s the GPU that calculates and produces that effect. Similarly, on an iPhone 8 Plus, the GPU accelerates the live preview when you shoot in “portrait mode.” Or consider the image of a cup produced by an augmented reality app, overlaid on a real view of a desk as seen through your phone’s screen: that, too, is the GPU’s work.
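That background blur is, at bottom, more per-pixel arithmetic. As a toy illustration (a simple box blur over one row of grayscale values, not Apple’s actual algorithm), each output pixel is just the average of its neighbors:

```python
def box_blur(pixels, radius=1):
    """Average each pixel with its neighbors -- the simplest possible blur.
    A GPU would compute every output pixel in parallel; this loop does
    the same arithmetic one pixel at a time."""
    blurred = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius):i + radius + 1]
        blurred.append(sum(window) / len(window))
    return blurred

row = [0, 0, 255, 255, 255, 0, 0]  # a hard edge in a grayscale scanline
print(box_blur(row))  # → [0.0, 85.0, 170.0, 255.0, 170.0, 85.0, 0.0]
```

The hard edges soften into gradients. Real blurs use fancier weighting and run over two dimensions, but the structure is the same: many small, independent averages, ideal for a GPU.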
The CPU, on the other hand, helps accomplish tasks like launching apps, or loading websites.
The same abilities that make the GPU good at rendering images also make it efficient at tough AI and machine-learning calculations. “If you look at machine-learning algorithms,” Olukotun says, “essentially what they are, are arrays of numbers that are used to make predictions about things.” That means the GPU helps the phone with tasks like understanding the real scene the camera sees through its lens (a key part of how augmented reality works) or letting an image-recognition app take a guess at what it is looking at.
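Olukotun’s “arrays of numbers that are used to make predictions” can be sketched in miniature. In this hypothetical example (the weights and features are made up for illustration), a prediction is just an array of learned weights multiplied against an array of inputs and summed, a multiply-and-add pattern that real models repeat millions of times:

```python
def predict(weights, features):
    """A minimal machine-learning 'prediction': multiply an array of
    learned weights against an array of input features and sum the
    results. Real models chain huge numbers of these multiply-adds,
    which is exactly the array arithmetic GPUs excel at."""
    score = sum(w * f for w, f in zip(weights, features))
    return score > 0  # positive score means the model answers "yes"

weights = [0.8, -0.5, 0.3]   # hypothetical values learned from training data
features = [1.0, 2.0, 0.5]   # hypothetical measurements from one input
print(predict(weights, features))  # → False
```

A phone classifying a photo runs this same kind of arithmetic at vastly larger scale, which is why handing it to the GPU pays off.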
The GPU’s knack for machine-learning problems means it comes in handy in surprising ways. For example, it helps power an app called Nude that scans your camera roll for scandalous images and squirrels them safely away, doing the work on the device itself rather than in the cloud. The app makes use of the phone’s GPU and Core ML, a free Apple-created framework that lets developers run machine-learning algorithms within their apps.
Ultimately, the GPU exemplifies how specialized engines help chips run more efficiently. “You’re always looking for specialization that targets a big enough class of problems that it makes it worthwhile,” Olukotun says. “So you combine graphics, image processing, and AI-machine learning, that’s a set of applications that you want to do on your phone that is compelling to have a GPU.”