
The latest TOP500 rankings of the world’s fastest supercomputers are out today, and as expected Oak Ridge National Laboratory’s Titan has unseated Lawrence Livermore National Laboratory’s Sequoia from the number one spot. That means a couple of things. For one, it represents something of a proving-out for co-processor technology (that is, technology that uses graphics processors alongside conventional processors to accelerate a machine’s performance), which drove Titan’s performance over the top. For another, it means it’s been a really good year for American supercomputing.

Titan is the successor to ORNL’s Jaguar supercomputer (in fact, it’s an upgrade of Jaguar rather than an entirely new machine), which itself sat atop the TOP500 rankings a few years ago. Since then, computers in China, Japan, and elsewhere have given the U.S. a serious run for its money, unseating Jaguar from the number one position and pushing other American systems down the rankings at times. For the U.S. Department of Energy, which operates both ORNL and Lawrence Livermore, it has to be nice to have two of its systems sitting in the top two spots on the TOP500.

But more significantly, today’s rankings demonstrate that hybrid processing, using graphics processing units (GPUs) to augment the central processing units (CPUs) traditionally deployed in supercomputers, could be the real way forward for supercomputing, at least in the near term. Titan officially recorded a speed of 17.59 petaflops (that’s quadrillions of calculations per second), yet the entire machine fits into the same 200 server cabinets that housed the 2.3-petaflop Jaguar. That’s because GPUs excel at certain kinds of operations that CPUs handle less efficiently, particularly highly parallel workloads in which the same calculation is applied to many pieces of data at once. By coupling NVIDIA Tesla GPUs with 16-core AMD CPUs, the engineers at Cray who built Titan for ORNL were able to boost computing performance by a factor several times greater than the accompanying increase in energy consumption (or physical footprint).
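To get a feel for why GPUs thrive on that kind of data-parallel work, here is a minimal illustrative CUDA sketch (not code from Titan; the array size and kernel are purely hypothetical). It launches one GPU thread per array element, so a million-element update happens concurrently rather than as a serial loop.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread updates a single array element, so the whole
// array is processed concurrently instead of one element at a time.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}

int main() {
    const int n = 1 << 20;              // about a million elements (illustrative size)
    const size_t bytes = n * sizeof(float);

    // Prepare input data on the host (CPU) side.
    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) {
        h_x[i] = 1.0f;
        h_y[i] = 2.0f;
    }

    // Copy the data to GPU memory.
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element: the parallel part the GPU excels at.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    saxpy<<<blocks, threadsPerBlock>>>(n, 3.0f, d_x, d_y);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it.
    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expected 5.0)\n", h_y[0]);

    cudaFree(d_x);
    cudaFree(d_y);
    free(h_x);
    free(h_y);
    return 0;
}
```

On a CPU, that update would typically run as a loop over all million elements; the GPU dispatches thousands of lightweight threads to do it at once, which is exactly the kind of throughput-oriented work that let Titan multiply Jaguar’s performance without multiplying its footprint.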

That’s very important going forward. The goal across the globe right now is to develop an exascale machine (an exaflop is equivalent to 1,000 petaflops, so Titan would need to be roughly 57 times faster; we have a ways to go), and the U.S. is trying to do so on a budget. Proving that GPU-augmented machines can outperform the very best supercomputers in the world could mark the beginning of a sea change in the science and engineering of supercomputing.

Right now, 62 systems on the TOP500 list are accelerated in this way, including China’s Tianhe-1A, which sat atop the list back in 2010 but now ranks number eight. That’s up from 58 such systems six months ago, and the number is likely to keep growing. The time may soon come when it’s simply infeasible to compete for a spot in the top 10, or even the top 25, without leveraging some kind of acceleration technology, and that’s probably a good thing. More efficient methods of crunching more data are what will lead us to the exascale age, a scale at which many computer scientists think things will get really interesting.

Rounding out the top five: Japan’s K Computer at number three, Argonne National Laboratory’s Mira (which will soon begin running the world’s largest cosmological simulations) at number four, and Germany’s JUQUEEN, now Europe’s most powerful system after a recent upgrade, at number five. On top of the three American systems in the top five (Titan, Sequoia, and Mira), the U.S. has two more machines in the top 10, placing half of the ten fastest computers in the world on American soil. We’ll take it.

The next rankings will be released in June of 2013.