Why the U.S. may lose the race to exascale

Unlike the U.S., Japan and Europe have set firm goals for systems by 2020, and China could beat everyone

Exascale computing isn't seen as just a performance goal. A nation's exascale system must be designed to run a wide range of scientific applications, and there are persistent concerns that if such a system draws too much power, it won't be financially viable to operate.

There is, nonetheless, a clear sense that HPC is at an exciting juncture, because new technologies are needed to achieve exascale. DRAM, for instance, is too slow and too expensive to support exascale computing, which means one million trillion calculations per second, or 1,000 times faster than a one-petaflop system. Among the possibilities is phase-change memory, which offers 100 times the performance of flash memory products.
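As a rough back-of-the-envelope illustration (a sketch, not from the article), the gap between petascale and exascale is simple arithmetic; the constants below are just the standard definitions of those terms:

    # Sketch: the performance scales discussed above.
    PETAFLOP = 10**15  # one thousand trillion floating-point operations per second
    EXAFLOP = 10**18   # one million trillion operations per second

    print(f"exascale vs. a 1-petaflop system: {EXAFLOP // PETAFLOP:,}x faster")
    # -> exascale vs. a 1-petaflop system: 1,000x faster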

Developing those new technologies will require major research investments by governments. Gridlock in Congress is partly to blame for the absence of major exascale funding on a scale at least matching Europe's. But political gridlock isn't wholly to blame. The White House's recent emphasis on big data is seen by some as sending mixed messages about U.S. priorities. The Department of Energy (DOE) has yet to offer a clear exascale delivery date, describing the goal only as "in the 2020 timeframe."

A major constraint is the cost of power: roughly speaking, each megawatt costs $1 million a year. While the DOE has set a goal of building an exascale system that uses 20 megawatts or less, Joseph said that may be too stringent a goal. Instead, he envisioned 50-to-100-megawatt data centers built to support large-scale systems.
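To make those figures concrete, here is a minimal sketch in Python; the $1-million-per-megawatt-year rate is the rough figure quoted above, and the wattages are the DOE target and Joseph's 50-to-100-megawatt scenario:

    COST_PER_MW_YEAR = 1_000_000  # dollars per megawatt per year, rough figure above

    def annual_power_cost(megawatts: float) -> float:
        """Annual electricity bill, in dollars, for a given sustained draw."""
        return megawatts * COST_PER_MW_YEAR

    for mw in (20, 50, 100):  # DOE goal vs. Joseph's envisioned data centers
        print(f"{mw} MW -> ${annual_power_cost(mw):,.0f} per year")
    # 20 MW -> $20,000,000 per year
    # 50 MW -> $50,000,000 per year
    # 100 MW -> $100,000,000 per year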

Dongarra and others remain optimistic that Congress will deliver on funding. There is clear bipartisan support. In the U.S. House, Rep. Randy Hultgren (R-Ill.) has been working to get funding passed and has 18 co-sponsors from both parties. Similar efforts are under way in the Senate.

Global exascale competition isn't necessarily about the basic science or the programming.

The Department of Energy's Argonne National Laboratory, for instance, just announced a cooperation agreement on petascale computing with Japan. Peter Beckman, a top computer scientist at the laboratory and head of an international exascale software effort, said the pact calls for information sharing with Japanese HPC scientists. The two groups are expected to discuss how they manage their machines, their power and other operational topics. The effort is analogous to Facebook's Open Compute project, where some aspects of data center designs and operations are openly shared.

"We're not competing at this level," said Beckman. "We're just trying to run stuff."

On a broader scale, there is considerable international effort on programming large-scale parallel machines, but no agreement on approach.

"That is one area where people really want to work together," said Beckman. "You want to be able to write portable code, and there does not seem to be competition in that. We want the railroad gauge to be the same in every country, because it just makes our lives are lot easier."

Patrick Thibodeau covers cloud computing and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. His e-mail address is pthibodeau@computerworld.com.

Copyright © 2013 IDG Communications, Inc.
