The march toward exascale computers

High-performance computing is reaching new heights, especially in China.

As for the embargo's likely effectiveness, #1 on the Top500 list happens to be China's Sunway TaihuLight at the National Supercomputing Center in Wuxi. It sustains a performance of 93 petaflops using 10,649,600 cores, all of them 1.45 GHz Sunway (also rendered ShenWei) SW26010 devices, which fit Dongarra's description of "lightweight" processors. And all were made in China.
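
For a rough sense of what "lightweight" means here, divide the sustained performance by the core count: each SW26010 core contributes less than 9 gigaflops. A back-of-the-envelope sketch in Python (purely illustrative, not taken from Dongarra's report):

    # Rough per-core throughput implied by the figures above.
    sustained_pflops = 93             # sustained Linpack performance, in petaflops
    cores = 10_649_600                # SW26010 cores in TaihuLight

    gflops_per_core = sustained_pflops * 1e6 / cores    # 1 petaflop = 1e6 gigaflops
    print(f"~{gflops_per_core:.1f} gigaflops per core")  # roughly 8.7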

[Image: China's Sunway TaihuLight supercomputer. Credit: Jack Dongarra, Report on the Sunway TaihuLight System, June 2016]

"They are trying to satisfy their internal customer demands without being held hostage to any other country or technology paradigm," says Dekate. "They had demonstrated an ability to do their own processor components and chip sets, although they are struggling in the memory space."

In response, the U.S. plans to field a supercomputer with roughly double the performance of China's newest and most powerful system: the U.S. Department of Energy's Oak Ridge National Laboratory expects an IBM system named Summit, capable of 200 petaflops, in early 2018.

[Image: The Summit supercomputer at Oak Ridge National Laboratory will have IBM Power9 chips and deliver 200 petaflops of performance when deployed in early 2018. Credit: Oak Ridge National Laboratory]

Eyes on the prize

As for the arrival of an exaflop machine, the question seems to be more where than when.

"The Top500 list shows we had the first teraflop machine in 1997, and in 2007 we achieved the first petaflop machine, or three orders of magnitude in a decade," says Dongarra. "If you extrapolate the data it shows we will have an exaflop machine in 2020, but that prediction is dangerous since the scale is logarithmic. A lot has to happen to make that achievement in 2020."

"We could get to exascale tomorrow but you wouldn't want to pay the power bill," says HP's Mannel. "It would be expensive in terms of space and power and hardware costs, and due to component failure rates would only run for a few hours at a time. For exaflop computing with the power and space and reliability that today we enjoy with petaflop computing, we must wait for the first third of the 2020s," Mannel says.

As for where it will happen, "China will likely get there first as they are extremely motivated to continue their leadership," Gartner's Dekate says. "They have a far more comprehensive strategy than we give them credit for. They are looking to lead the exascale race, and their threshold for pain [when paying for HPCs] is much higher. The U.S. and European owners are far more pragmatic and they want to design the hardware to ensure the applications scale up, and provide the best power efficiency and widest range of application support possible."

But beyond raw power, sources also believe users will find more sophisticated ways to use HPCs.

"At Cray, we believe that in five to seven years supercomputers will not just be for solving physics problems," says Bolding. "We see a combination of analytics, simulation and modeling, with a convergence of big data. People will want fast insights into their data to predict the risks they face and decide how best to use resources."

That is what they are doing at the University of Michigan, whose IBM supercomputer (called Conflux) is too new to be on the list. (At 1,200 cores, it probably never will make the list; the smallest system currently on it has more than 5,000 cores.)

"There are problems even the fastest machine in the world 50 years from now will not be able to solve," notes aerospace engineering Prof. Karthik Duraisamy. "For a problem like engine combustion, for the correct answer you have to simulate every atom, and that is not possible. The next best thing is to use some approximation, and your model becomes inaccurate. So we use machine learning to improve approximation."

A large machine can also have two workflows -- modeling and machine learning -- going at the same time and interacting, while the users make adjustments until the desired results are obtained, he explains.

Biomedical engineering Prof. Alberto Figueroa says he hopes to perform optimal surgical planning in 24 hours rather than weeks. Mechanical engineering Prof. Krishna Garikipati hopes to combine physical models from several domains (quantum through Newtonian) to finally be able to say precisely when, and under what load, a piece of steel will break.

Finally, keep in mind that the most powerful machine on the original list 23 years ago delivered 60 gigaflops. Today, a quad-core desktop running at 3 GHz delivers 48 gigaflops, assuming an average of four floating-point operations per clock cycle per core. That would have put it at #2 on the 1993 list. Does this mean we can expect petaflop desktops in two decades?

"Of course," Dongarra says.

Copyright © 2016 IDG Communications, Inc.
