China surpassing U.S. with 54.9 petaflop supercomputer
Intel-based system has China poised to take the global lead in Top500 supercomputing list this month
Computerworld - China has produced a supercomputer capable of 54.9 petaflops, more than twice the speed of any system in the U.S., according to a U.S. researcher who was in China last week and learned the details.
China's latest system was built with Intel chips, but includes indigenously produced Chinese technologies as well. The Chinese government spent about $290 million on it.
Today, the world's fastest supercomputer is a Cray system at Oak Ridge National Laboratory in Tennessee, which ran at nearly 18 petaflops on last November's twice-yearly Top500 list. That list will be updated in mid-June.
With its new supercomputer, China is raising the stakes in supercomputing for the U.S., as well as for Japan and Europe. It is showing a willingness to push for leadership in HPC and the race to develop the next generation of systems, exascale.
Jack Dongarra, a professor of computer science at the University of Tennessee and one of the academic leaders of the Top500 supercomputing list, posted a detailed description late Sunday of China's latest system (report PDF) from his trip to China. His findings are based on a briefing at an HPC conference May 28-29 in Changsha by a Chinese official from the National University of Defense Technology (NUDT).
HPC Wire reported on the new system this weekend.
China's latest large system is the successor to its Tianhe-1A supercomputer, which won the global title as the world's fastest in November 2010. President Obama noted China's supercomputing accomplishment in his State of the Union speech in January 2011, when he said the U.S. was facing another "Sputnik moment" in a wide range of technologies.
China's latest supercomputer, called Tianhe-2 or Milkyway-2, has 32,000 multicore Intel Xeon Ivy Bridge chips and 48,000 Xeon Phi chips, co-processors based on Intel's MIC (Many Integrated Core) architecture.
Each Phi processor is capable of more than a teraflop of speed, or one trillion floating-point operations per second. A petaflop is 1,000 teraflops, or one quadrillion floating-point operations per second. An exascale system is 1,000 petaflops.
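Those chip counts and per-chip figures roughly add up to the system's reported 54.9-petaflop peak. A back-of-envelope check, using approximate per-chip peak figures along the lines of Dongarra's report (~0.21 teraflops per 12-core Ivy Bridge Xeon, ~1 teraflop per Xeon Phi; these per-chip numbers are assumptions for illustration):

```python
# Rough peak-performance sanity check for Tianhe-2.
# Per-chip peak figures below are assumptions for illustration:
#   Xeon Ivy Bridge: 12 cores x 2.2 GHz x 8 flops/cycle ~= 0.2112 teraflops
#   Xeon Phi co-processor: ~1.003 teraflops
xeon_count, phi_count = 32_000, 48_000
xeon_tflops = 12 * 2.2e9 * 8 / 1e12  # ~0.2112 teraflops per Xeon
phi_tflops = 1.003                   # teraflops per Phi

total_pflops = (xeon_count * xeon_tflops + phi_count * phi_tflops) / 1000
print(f"~{total_pflops:.1f} petaflops peak")  # ~54.9
```

Note that most of the machine's peak performance comes from the Phi co-processors rather than the conventional Xeons.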
Dongarra's report suggests that China may have the leading system for some time. "The next large acquisition of a supercomputer for the U.S. Department of Energy will not be until 2015," he wrote.
China has been developing its own chip technology and has been mixing and matching homegrown tech with imported components. U.S. researchers believe China is heading in the direction of building a supercomputer made entirely of indigenously produced components, including chips.
The approach of combining China-built technology with American products is evident in Tianhe-2.
"There are number of features of the Tianhe-2 that are Chinese in origin, unique and interesting," said Dongarra, in his report. These include a proprietary interconnects, and the Galaxy FT-15, a 16-core processor. He cited the "apparent reliability and scalability" of the system as well.
The system's power usage, when cooling is included, is 24 megawatts (MW). Power is a major issue in achieving exascale. Researchers could, theoretically, assemble an exascale computing system with current technology. But at a billion or so cores, it would need its own power plant to operate.
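The power-plant claim follows from simple scaling arithmetic. A rough sketch, assuming exascale efficiency no better than Tianhe-2's (54.9 petaflops at ~24 MW, per the figures above):

```python
# Rough scaling sketch: power an exaflop machine would draw at
# Tianhe-2's energy efficiency (figures from the article).
tianhe2_pflops = 54.9   # peak petaflops
tianhe2_mw = 24.0       # megawatts, including cooling
exaflop_pflops = 1000.0

# Efficiency in gigaflops per watt (~2.3 GF/W)
gflops_per_watt = (tianhe2_pflops * 1e6) / (tianhe2_mw * 1e6)

# Power needed for one exaflop at the same efficiency (~437 MW)
exascale_mw = exaflop_pflops / tianhe2_pflops * tianhe2_mw
print(f"~{gflops_per_watt:.1f} GF/W -> ~{exascale_mw:.0f} MW at exascale")
```

Roughly 440 MW is on the order of a dedicated power station, which is why researchers target large efficiency gains before exascale is practical.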
To reach exascale, HPC researchers say they need to develop processors, memory and network components that substantially reduce power use. New programming models are also being developed. The problems in achieving exascale are such that Europe, which is investing heavily in its own HPC effort, believes there is a potential to leapfrog the U.S. if breakthrough approaches are discovered to some of these problems.
U.S. researchers, as recently as last month, warned Congress that the U.S., while the undisputed leader in HPC today, is at risk of falling behind in HPC development unless it commits hundreds of millions of dollars to exascale research. But the ongoing budget dispute and sequestration are leading to a reduction in R&D spending.
China wants to produce an exascale system before 2020. The U.S., at its present effort, won't produce an exascale system until around 2025, lawmakers were told last month.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His email address is email@example.com.