Europe focuses on high-performance computing, but is it too late?

European researchers and politicians are counting on the development of a few key high-performance computing (HPC) centers to boost science and industry across the European Union. However, their bid to build a supercomputer capable of a million billion calculations per second is already falling behind the competition, with Japan and the U.S. well on their way to that goal.

"Supercomputers are the 'cathedrals' of modern science, essential tools to push forward the frontiers of research at the service of Europe's prosperity and growth," said Viviane Reding, European commissioner for the Information Society, at the Tuesday opening of the Teratec conference on HPC on the outskirts of Paris.

France, hitherto a secular state, is starting to see the value of such cathedrals of computing: "HPC is a competitive factor not just for research, but for the whole economy," said French Minister for Research Valérie Pécresse. "We have to catch up in HPC."

Supercomputing performance is typically measured in teraflops, or trillion floating-point operations per second. With new supercomputers coming online, France has increased its HPC capacity by a factor of 25 in six months, to around 470 TFLOPS, she said.

One thousand TFLOPS make one petaflop (PFLOPS), Pécresse's next target: "Our goal should be the creation of petaflop computing centers."
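To put those units on a single scale (standard SI arithmetic, not a figure quoted at the conference), France's new capacity sits just under the halfway mark:

\[
1\ \text{PFLOPS} = 1000\ \text{TFLOPS} = 10^{15}\ \text{FLOPS},
\qquad
470\ \text{TFLOPS} \approx 0.47\ \text{PFLOPS}.
\]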

But that target won't be reached by France -- or any other European country -- working alone.

Through the Partnership for Advanced Computing in Europe (PRACE), 14 European countries, including France, Germany, the Netherlands, Spain and the U.K., plan to direct their existing national HPC efforts toward the creation of three to five "petascale" European supercomputer centers, each with computing power in excess of 1 PFLOPS.

The cost over 20 years of building and maintaining a petascale supercomputing facility could top €2 billion ($3.1 billion U.S.), according to Achim Bachem of Jülich Research Centre, a German HPC laboratory. He estimates annual running costs at €100 million to €200 million, with initial construction costing around twice the annual figure. Yet such systems are soon overtaken in performance by newer models and must be replaced every two or three years to keep up with advances in technology.
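Those ranges compound quickly. The sketch below is illustrative arithmetic only, reading Bachem's construction estimate as twice the annual running costs; it is not a published cost model:

```python
# Back-of-the-envelope lifetime cost of a petascale facility,
# using only the ranges Bachem cites (illustrative, not official).
YEARS = 20
annual_low, annual_high = 100e6, 200e6                    # running costs, EUR/year
build_low, build_high = 2 * annual_low, 2 * annual_high   # construction ~ twice annual

low = YEARS * annual_low + build_low
high = YEARS * annual_high + build_high
print(f"EUR {low / 1e9:.1f}-{high / 1e9:.1f} billion over {YEARS} years")
# -> EUR 2.2-4.4 billion, consistent with "could top EUR 2 billion" --
#    before counting the hardware refreshes needed every two or three years.
```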

In comparison with those costs, the €40 million that the European Commission has pledged to set up PRACE is barely seed capital.

While Europeans are still debating what legal structure to give PRACE (a question that should be resolved by the end of next year), other countries are busy building computers.

Japan began work on its petascale processing project, the Next Generation Supercomputing Center, in 2006. By the time it is complete in 2012, it will have swallowed ¥115 billion ($1.1 billion U.S.).

Work on the computer, which will be built on an island next to Kobe airport, is already well advanced: "We are in the middle of the detailed design of the computer chips and computer system," said Toshikazu Takada of the computational science research program at Japan's Riken research center. Japan's goals in building a petascale system are much like Europe's: to retain control of cutting-edge technologies, to promote application development and to create new IT businesses.

The U.S. may already have its first petascale computer, according to Rick Stevens, an associate laboratory director at Argonne National Laboratory.

"It looks like Los Alamos, and IBM may have reached a petaflop with Roadrunner," he said.

Roadrunner was built for Los Alamos National Laboratory by IBM, using 6,912 dual-core Opteron processors from Advanced Micro Devices Inc. and 12,960 of IBM's Cell eDP accelerators. Early indications are that the machine's Cell processors have reached a score of 1.33 PFLOPS, as measured by the Linpack benchmark, while the Opterons reached 49.8 TFLOPS, Stevens said.
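Dividing the reported scores by the chip counts -- simple arithmetic on the figures above, not numbers from Stevens -- shows why the Cell accelerators carry almost all of the load:

```python
# Implied per-chip Linpack throughput, derived by dividing the
# quoted scores by the quoted chip counts (illustrative only).
cells, cell_pflops = 12_960, 1.33
opterons, opteron_tflops = 6_912, 49.8

per_cell_gflops = cell_pflops * 1e6 / cells           # PFLOPS -> GFLOPS
per_opteron_gflops = opteron_tflops * 1e3 / opterons  # TFLOPS -> GFLOPS
print(f"~{per_cell_gflops:.0f} GFLOPS per Cell accelerator")  # ~103
print(f"~{per_opteron_gflops:.1f} GFLOPS per Opteron")        # ~7.2
```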

With the petascale barrier seemingly broken, researchers are starting to ask what's needed to build an exascale system, one capable of a billion billion (10^18) calculations per second, or a thousand times faster than even Roadrunner.

Extrapolating from recent years' growth in cores per node, node counts and the power consumption of supercomputers, Stevens predicts that by 2019 we could have supercomputers capable of 1.2 exaflops, containing 400,000 nodes with 128 processor cores each -- and consuming a staggering 80 MW.
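Unpacking that projection with the figures Stevens gives (the division below is our own back-of-the-envelope, not his):

```python
# What the 2019 projection implies per core and per watt,
# derived purely from the figures quoted above (illustrative).
nodes, cores_per_node = 400_000, 128
total_eflops, power_mw = 1.2, 80

total_cores = nodes * cores_per_node                     # 51.2 million cores
gflops_per_core = total_eflops * 1e9 / total_cores       # EFLOPS -> GFLOPS
gflops_per_watt = total_eflops * 1e9 / (power_mw * 1e6)
print(f"{total_cores / 1e6:.1f} million cores in total")
print(f"~{gflops_per_core:.0f} GFLOPS per core")         # ~23
print(f"~{gflops_per_watt:.0f} GFLOPS per watt")         # ~15
```

The 80 MW figure is the striking one: it implies squeezing roughly 15 GFLOPS out of every watt, an efficiency far beyond anything fielded in 2008.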

By that stage, though, competition between Europe, Japan and the U.S. for the supercomputing crown may no longer be an option.

"We may have to cooperate at a global level to push past 1 petaflop," said Stevens.

Copyright © 2008 IDG Communications, Inc.
