The Fight For The Fastest Supercomputer

Who will get to exascale first—China or the U.S.?


In 2015, China claimed the record for the world’s most powerful supercomputer—for the third year in a row. The Tianhe-2 crushed the U.S.’s Titan in the esteemed TOP500 supercomputer ranking. And that from a country with only 7 percent of the list’s 500 systems (compared with the U.S.’s 46 percent). Computing power translates into economic power and national security, so falling behind should come as a wake-up call. This should be our “Sputnik moment.”1

As with the rush to orbit, “this is an international race,” says Horst Simon, deputy director at Lawrence Berkeley National Laboratory and co-editor of the TOP500. The next mile marker will be achieving “exascale” computation, which means performing 1 quintillion calculations per second (about 30 times faster than Tianhe-2). Whoever gets there first could revolutionize weather forecasting, design hyperefficient airliners, and fight disease with precision medicine—not to mention corner the market for computing power.
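For scale, a back-of-envelope check: Tianhe-2 sustains roughly 34 petaflops (34 quadrillion calculations per second) on the TOP500’s Linpack benchmark, so the arithmetic behind that “about 30 times” figure works out to:

\[
\frac{1\ \text{exaflop}}{34\ \text{petaflops}} \;=\; \frac{10^{18}\ \text{flops}}{3.4 \times 10^{16}\ \text{flops}} \;\approx\; 30
\]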

The U.S. was once the undisputed front-runner, but since the Great Recession, our pace (read: investment) has slumped, Simon says. That’s not unique to the U.S.; investment has slowed worldwide. But “computing transforms itself on an exponential timescale. If you sleep for three years, you’re two generations behind,” he says. “We slept for five years.” Now China might be on track to beat us to exascale by the early 2020s.


Reaching exascale will be far harder than past leaps, though. We’re hitting a power wall, says Steve Scott, chief technology officer at supercomputing firm Cray. “We can no longer run all our transistors full out; it would generate too much heat and literally burn up the chips.” In other words, improvements in power efficiency aren’t keeping up with Moore’s Law.2

Instead, Scott says, we may need to rethink computing. If we can replace today’s brawny chips with lots of simpler, more energy-efficient processors, they could handle enormous numbers of computations in parallel, like yoking an army of ants rather than a few hungry elephants. But that means figuring out how to break down a computational problem into tiny parts for separate and nonsequential processing—a nightmare for software developers. (Which explains why such a leap has not yet been made.)
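To make the ant-army idea concrete, here is a minimal sketch in ordinary Python on one machine (not any lab’s actual code; real supercomputing software spreads work across thousands of nodes with frameworks like MPI). A big sum is split into independent chunks, computed in whatever order the worker processes finish, and recombined at the end:

    # A minimal sketch of parallel decomposition: split one big computation
    # into many independent pieces, run them in any order, then recombine.
    # (Illustrative only; real supercomputing codes use MPI across nodes.)
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum one chunk; each chunk needs no data from any other chunk."""
        start, stop = bounds
        return sum(range(start, stop))

    def parallel_sum(n, chunks=8):
        # Break [0, n) into independent sub-ranges: the "army of ants."
        step = n // chunks
        bounds = [(i * step, (i + 1) * step if i < chunks - 1 else n)
                  for i in range(chunks)]
        with Pool() as pool:
            # Chunks may finish in any order; only the final sum is sequential.
            return sum(pool.map(partial_sum, bounds))

    if __name__ == "__main__":
        n = 10_000_000
        assert parallel_sum(n) == n * (n - 1) // 2  # closed-form check
        print("parallel sum matches the closed form")

The catch Scott describes is that most scientific problems don’t decompose this cleanly: their pieces need data from one another, and coordinating that traffic is where the nightmare lives.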

The U.S. government is engineering a comeback, if slowly. In July, the Obama administration established a National Strategic Computing Initiative to get back into the international running. “It’s an all-out approach,” says Scott. The aim is to coordinate government agencies, academic institutions, and private companies; no longer will agencies pursue isolated and arguably underfunded supercomputing goals—so long as Congress ponies up the extra cash.

“We need to begin strategically ramping up our investment now,” says Tom Kalil, deputy director for technology and innovation in the White House’s Office of Science and Technology Policy. If we can combine the U.S.’s collective intellectual forces, as in the space race, we might be the first to reach the proverbial exa-moon.


1. In 1957, the Soviet Union launched the first satellite, Sputnik I, into orbit, spurring America to create NASA and DARPA, as well as train a generation of scientists and engineers.

2. In 1975, Intel co-founder Gordon E. Moore observed that the number of transistors on an integrated circuit (a rough proxy for its processing power) doubles every two years.

This article was originally published in the December 2015 issue of Popular Science.

