Japan answers China's supercomputing surge
Japan's K Computer passes 8 petaflop barrier, but also sets new power consumption record
Computerworld - A new supercomputer from Japan whose performance passed the 8 petaflop milestone ended China's brief stay atop the Top500 list of the world's fastest supercomputers.
The new Top500 leader, the K Computer housed at the Riken Advanced Institute for Computational Science in Kobe, Japan, runs 68,544 eight-core Sparc chips made by Japan-based Fujitsu. The system is expected to eventually run some 80,000 of the Sparc processors.
The Japanese system also set a new Top500 power-consumption record, drawing about 10 megawatts while running the Linpack test used to determine system performance. Despite the significant power draw, the K Computer achieved "extraordinarily high computing efficiency," Riken and Fujitsu said in a statement.
The Chinese Tianhe-1A supercomputer took the No. 1 position last November with a performance of 2.57 petaflops.
President Barack Obama referenced China's accomplishment in speeches after last fall's Top500 rankings were announced.
Supercomputer developers are keenly aware that they will need huge gains in power efficiency to reach supercomputing's next big goal -- building an exascale class system (1,000 times more powerful than a petascale system) by 2018.
"The problem is that power consumption is increasing," said Erich Strohmaier, who heads the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory and is a founder of the Top500 list.
"Even if it is not desirable, we can adapt to 10 megawatts for the very largest systems, but we cannot allow power consumption to grow much more," said Strohmaier. "This has been realized in the U.S. research community for a while, and the Exascale initiative of the [U.S. Department of Energy] is addressing this issue directly. Power consumption is already influencing computer design decisions and will have a big influence on the details of future HPC [high-performance computing] architectures."
The average power consumption of top 10 systems in the latest Top500 list is 4.3MW, up from 3.2MW just six months ago.
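The growth rate implied by those two averages can be checked with a quick back-of-envelope calculation; this sketch uses only the figures quoted above:

```python
# Average power draw of the top 10 systems, per the Top500 figures above.
avg_now_mw = 4.3    # latest list
avg_prior_mw = 3.2  # six months earlier

pct_increase = (avg_now_mw - avg_prior_mw) / avg_prior_mw * 100
print(f"{pct_increase:.0f}% increase in six months")  # about a one-third jump
```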
There is a major focus globally on developing the architecture, software and hardware needed to produce a system capable of reaching supercomputing's next milestone.
Dave Turek, vice president of exascale computing at IBM, puts 20MW as the ideal range for power consumption by an exascale system.
Turek's view was underscored by the DOE. In a report earlier this year, the agency put the "practical power limit" of an exascale system at 20MW.
Achieving that goal would require cutting power consumption per operation to roughly one-300th of what petascale systems now require, the DOE reported.
If an exascale system can be built at 20MW, said Turek, it would mean that a petaflop can be delivered at 20 kilowatts. Such a system would put the kind of computation power represented by Japan's K Computer within reach of the typical corporate data center.
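Turek's arithmetic is straightforward to verify; the sketch below uses only the figures from the article (a 20 MW budget, and 1 exaflop = 1,000 petaflops):

```python
# Power budget implied by a 20 MW exascale machine, scaled down to one petaflop.
exascale_power_watts = 20e6      # 20 megawatts for the full exascale system
petaflops_per_exaflop = 1000     # 1 exaflop = 1,000 petaflops

watts_per_petaflop = exascale_power_watts / petaflops_per_exaflop
print(f"{watts_per_petaflop / 1e3:.0f} kW per petaflop")  # 20 kW per petaflop
```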
Building an exascale system at 20MW "will be truly transformational to the IT industry, like nothing that has ever been done in HPC before," said Turek.
Turek said the path to creating an exascale system is unlike that which was followed to achieve petascale performance. "We've been maybe a little too cavalier as an industry in terms of just extrapolating from the past and thinking that's a pathway to the future," he said.
Major innovations will be required in memory architectures, interconnects, optical technologies and other elements of the systems, Turek added. Making the task more difficult is the fact that many of the pieces are built by a wide range of vendors.
The research effort will also require government funding, Turek said.
"From our perspective as a company, we're going to spend a ton of money in this area -- an absolute ton of money to drive [HPC] technology," said Turek.
IBM has built 213, or nearly 43%, of the systems on the latest Top500 list. It is followed by Hewlett-Packard, which built 153, or about 31%, of the systems on the list.
The increasing global competition may be helping to change the benchmarks used to measure supercomputing power.
The Linpack test, which measures floating point computing power, has long dominated the market and isn't going away, but the industry may have to start giving more attention to new benchmarks for measuring system capabilities. These could include the Graph 500, which ranks processing of data-intensive applications, and the Green500, which ranks systems in terms of energy efficiency.
The U.S. has five of the top 10 systems in the latest Top500 list, while Japan and China each have two and France has one.
The U.S. also dominates the full Top500 list; 256, or 51%, of the systems on the list are in the U.S. China is No. 2, with 62 of the systems on the list, or about 12% of the total. Germany has the third most systems on the list with 30, followed by the U.K. with 27, Japan with 26 and France with 25.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His email address is firstname.lastname@example.org.