U.S. hits snag in exascale supercomputer race

Budget woes could blunt U.S. efforts to build an exascale supercomputer and give China an opening to beat the DOE's new 2022 target.

Budget woes are forcing the U.S. Department of Energy to push back its target for completing an exascale system by two to four years, increasing the chance that the Chinese will get there first.

At last month's SC12 supercomputer conference, William Harrod, director of the advanced scientific computing research division in the Department of Energy's Office of Science, outlined a new timeline projecting delivery of a DOE exascale system between 2020 and 2022.

The new projection assumes that Congress will fund the project in the federal government's fiscal 2014 budget, he added.

To date, the DOE's exascale development effort has received limited funding -- nowhere near the billions of dollars likely needed -- despite a consensus among scientists that the next generation of supercomputers could help deliver research breakthroughs, improve U.S. competitiveness and deepen understanding of huge problems like climate change.

Based in part on predictable increases in computing power, experts had previously expected that a working DOE exascale system would be ready by 2018.

The U.S. today remains far and away the world leader in high-performance computing (HPC). On the latest Top500 list of the most powerful supercomputers, 250 of the systems were built by U.S.-based tech firms.

But there's no guarantee that the U.S. will be the first to deliver an exascale system; China, Europe and Japan are all working hard on exascale initiatives.

China, in particular, has been investing heavily in HPC systems and related microprocessor and interconnect technologies.

Depei Qian, a professor at Beihang University and director of the Sino-German Joint Software Institute, told an audience at the SC12 conference that he expects China to remain three to five years behind the U.S. in the HPC race. But analysts are skeptical of that assessment. "The Chinese are being very polite -- their goal is to build [an exascale system] first," said IDC analyst Earl Joseph.

"The biggest problem [for U.S. exascale development] is the budget," said Harrod. "Until I have a budget, I really don't know what I'm doing."

Harrod previewed the DOE's "Exascale Computing Initiative" report, which calls for building prototype systems by 2018. The report is set to be presented to Congress in February.

The challenge to HPC developers around the world is clear.

Today's fastest computer, according to the Top500 list released last month, is a Cray XK7 supercomputer capable of running at up to 17.59 petaflops, meaning it can process 17.59 quadrillion calculations per second. That system, known as Titan, is installed at the DOE's Oak Ridge National Laboratory in Oak Ridge, Tenn.

The petaflop milestone was passed in 2008, when IBM's Roadrunner began operating at the Los Alamos National Laboratory.

An exascale supercomputer would be 1,000 times more powerful than the petaflop systems being deployed today. Developing such a system requires new programming models and new methods of managing data and memory, along with improved system resiliency.
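To put that factor of 1,000 in perspective, here is a back-of-the-envelope calculation (illustrative only; Titan's 17.59-petaflop figure comes from the Top500 data cited above):

```python
# Relating petaflops to exaflops.
# 1 petaflop = 10**15 floating-point operations per second;
# 1 exaflop  = 10**18, i.e. 1,000 petaflops.
PETAFLOP = 10**15
EXAFLOP = 10**18

titan_flops = 17.59 * PETAFLOP  # Titan's Top500 benchmark result

# How many Titan-class machines would equal one exaflop?
titans_per_exaflop = EXAFLOP / titan_flops

print(f"1 exaflop = {EXAFLOP // PETAFLOP} petaflops")
print(f"~{round(titans_per_exaflop)} Titan-class systems per exaflop")
```

In other words, an exascale machine would deliver the combined throughput of roughly 57 Titans, today's fastest system.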

It may be the biggest HPC challenge yet, and will likely require international collaboration. "There is no way to achieve these goals by any one government, one country," said Harrod. "It far exceeds what people are going to invest and also exceeds the technical talent."

He noted that HPC researchers are cooperating internationally in various ways, including working together to develop exascale systems software.

This version of this story was originally published in Computerworld's print edition. It was adapted from an article that appeared earlier on Computerworld.com.

Copyright © 2012 IDG Communications, Inc.
