China's 'big hole' marks scale of supercomputing race
1,000 U.S. scientists are involved in exascale development, but China and Europe have stepped up their investment, IBM warns
Computerworld - WASHINGTON -- To make a point about China's interest in supercomputing, David Turek, IBM's vice president of deep computing, displayed a slide with a picture depicting a large construction site for a building that will house a massive computer.
Speaking at an IEEE-USA forum here on Thursday, Turek pointed to a photo of a supercomputing center being built in Shenzhen, China, and said, "That's a truck -- that's a big truck, that's a big hole, and that's going to be a big building. And that's only the first building they are going to build there."
It's not just China that's rushing to build supercomputing centers, but Europe and Japan as well, said Turek and other forum participants.
"You have sovereign nations making material investments of a tremendous magnitude to basically eat our lunch, eat our collective lunch," Turek said.
The intent of the forum was to update attendees on efforts that are underway to build the next generation of supercomputers -- exaflop systems that will be 1,000 times more powerful than today's petaflop systems. A petaflop is a quadrillion, or 1,000 trillion, sustained floating-point operations per second. An exaflop is 1 quintillion, or 1 million trillion, floating-point operations per second.
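The scale gap between those two units can be made concrete with a quick back-of-envelope calculation (a sketch; the figures are just the powers of ten defined above):

```python
# Petaflop vs. exaflop, per the definitions above.
petaflop = 10**15   # 1 quadrillion floating-point operations per second
exaflop = 10**18    # 1 quintillion floating-point operations per second

# An exaflop system is 1,000 times a petaflop system.
speedup = exaflop // petaflop
print(speedup)  # 1000

# A year of petaflop computing, redone on an exaflop machine:
seconds_per_year = 365 * 24 * 3600
hours_on_exascale = seconds_per_year / speedup / 3600
print(round(hours_on_exascale, 2))  # about 8.76 hours
```

In other words, a simulation that ties up a petaflop machine for a year would finish on an exascale machine in well under a day.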
The main message was that other nations are beginning to challenge U.S. supercomputing leadership, something that has implications for every aspect of U.S. leadership in science and product development, said experts at the forum.
"Within a year, there will be more Top500 systems in China than there are in Europe collectively," said Turek, referring to the list of the world's 500 most powerful supercomputers, which is regularly updated by academic researchers in the U.S. and Europe. China has 24 systems on the most recent Top500 list. The United States leads with 282; the U.K. has 38, France has 27, and Germany has 24.
"The No. 1 goal is to maintain U.S. leadership in high-performance computing," said Rick Stevens, associate director for computing, environment and life sciences at the Argonne National Lab.
That means having an exascale system by 2020.
An exascale system could deliver enormous benefits and take science into areas "not possible today without more computing power," said Stevens, noting that such a system would be powerful enough to simulate everything that's going on inside a human cell.
There are about 1,000 scientists involved in the U.S. Exascale Initiative, said Stevens. But Europe and China, in particular, "have accelerated their investment" in developing high-performance systems, he said.
Europe has "launched their own exascale program to compete with the U.S.," said Stevens, who added that Europeans are now on a faster development pace and might bypass the U.S. "if we don't sustain the investment to stay ahead."
But the approach to building an exascale system will be different from the approach to building supercomputing systems in the past, Stevens said.
A petascale system today might have about 200,000 processing cores; an exascale system might use anywhere from 100 million to as many as 1 billion cores, he said.
The power demands for such a system could be enormous. A 2-petaflop system needs about 2 megawatts; scaled linearly, an exaflop machine would draw on the order of a gigawatt, so keeping power needs from running out of control requires major improvements in power efficiency.
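The power problem follows directly from the figures quoted above. A rough sketch, assuming efficiency stays flat (the very assumption exascale designers are working to break):

```python
# Naive linear extrapolation of the power figures quoted in the article.
# Assumes flops-per-watt stays constant -- real exascale designs must do
# far better, which is why power efficiency is a central research target.
petaflops_today = 2.0        # system size quoted above, in petaflops
megawatts_today = 2.0        # power draw quoted for that system, in MW
mw_per_petaflop = megawatts_today / petaflops_today  # 1 MW per petaflop

exaflop_in_petaflops = 1000  # 1 exaflop = 1,000 petaflops
naive_exascale_mw = exaflop_in_petaflops * mw_per_petaflop
print(naive_exascale_mw)  # 1000.0 MW -- a full gigawatt at today's efficiency
```

A gigawatt is roughly the output of a large power plant, which is why efficiency gains, not just more hardware, are treated as a prerequisite for exascale.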
To build an exascale system, Stevens said scientists are working toward a "co-design" approach in which all of the elements needed to make such a system possible -- application software, programming models, and improvements in optics and memory, among other things -- are developed in tandem with the hardware.
But for now, the race is on. China has the world's second-most-powerful supercomputer, the Nebulae, a 1.27-petaflop system, according to the most recent Top500 ranking. The top system is Cray's 1.76-petaflop Jaguar supercomputer at the U.S. Department of Energy's Oak Ridge National Laboratory.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His e-mail address is email@example.com.