SC2012: Top500 expects exascale computing by 2020
But supercomputing software will have to be designed differently to handle exascale workloads, researchers at SC2012 warn
IDG News Service - If the increase in supercomputer speeds continues at its current pace, we will see the first exascale machine by 2020, according to the maintainers of the Top500 compilation of the world's fastest systems.
System architects of such large computers, however, will face a number of critical issues, a keeper of the list warns.
"The challenges will be substantial for delivering the machine," said Jack Dongarra, a University of Tennessee, Knoxville, researcher who is one of the principals behind the Top500. Dongarra spoke at the SC2012 conference, being held this week in Salt Lake City, during a presentation about the latest edition of the list, released Monday.
We still have a way to go before exascale performance is possible. An exascale machine would be capable of one quintillion FLOPS (floating point operations per second), or 10 to the 18th FLOPS. Even today's fastest supercomputers offer less than 2 percent of the capability of an exascale machine.
In the most recent edition of the Top500 list of supercomputers, released Monday, the fastest computer on the list was the Oak Ridge National Laboratory Titan system, a machine capable of executing 17.59 petaflops. A petaflop is a quadrillion floating point calculations per second, or 10 to the 15th FLOPS.
But each new Top500 -- the list is compiled twice a year -- shows how quickly the speeds of supercomputers grow. Judging from the list, supercomputers seem to gain a thousandfold in power every 12 years or so. In 1996, the first teraflop computer appeared on the Top500, and in 2008, the first petaflop computer appeared on the list. Extrapolating from this rate of progress, Dongarra estimates that exascale computing should arrive around 2020.
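That extrapolation can be sketched in a few lines of arithmetic (an illustration of the trend described above, not a calculation from the article itself):

```python
# Sketch of the Top500 extrapolation: the first teraflop system appeared
# in 1996 and the first petaflop system in 2008 -- a thousandfold jump
# in 12 years. Projecting one more thousandfold jump forward:
first_teraflop_year = 1996
first_petaflop_year = 2008

years_per_1000x = first_petaflop_year - first_teraflop_year  # 12 years
projected_exascale_year = first_petaflop_year + years_per_1000x

print(projected_exascale_year)  # 2020
```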
The High Performance Computing (HPC) community has taken on exascale computing as a major milestone. Intel has created a line of massively multicore processors, called Xeon Phi, that the company hopes could serve as the basis of exascale computers that could be running by 2018.
In his talk, Dongarra sketched out the characteristics of an exascale machine. Such a machine will likely have somewhere between 100,000 and 1,000,000 nodes and will be able to execute up to a billion threads at any given time. Individual node performance should be between 1.5 and 15 teraflops and interconnects will need to have throughputs of 200 to 400 gigabytes per second.
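A quick sanity check shows how those node counts and per-node speeds multiply out to exascale territory (illustrative arithmetic based on the figures above; the pairings of node count and node speed are assumptions):

```python
# Checking that the cited specs reach roughly an exaflop (10**18 FLOPS).
EXAFLOP = 1e18

# Many modest nodes: 1,000,000 nodes at 1.5 teraflops each
many_nodes = 1_000_000 * 1.5e12

# Fewer, faster nodes: 100,000 nodes at 15 teraflops each
fast_nodes = 100_000 * 15e12

print(many_nodes / EXAFLOP)  # 1.5 (exaflops)
print(fast_nodes / EXAFLOP)  # 1.5
```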
Supercomputer makers will have to construct their machines so that their cost and power consumption do not increase in a linear fashion along with performance, lest they grow too expensive to purchase and run, Dongarra said. An exascale machine should cost about $200 million, and use only about 20 megawatts, or about 50 gigaflops per watt.
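The efficiency figure follows directly from the power budget: delivering an exaflop within 20 megawatts works out to 50 gigaflops per watt (a worked version of the article's own numbers):

```python
# One exaflop (10**18 FLOPS) within a 20-megawatt power budget.
EXAFLOP = 1e18   # floating point operations per second
POWER_W = 20e6   # 20 megawatts, in watts

flops_per_watt = EXAFLOP / POWER_W
print(flops_per_watt / 1e9)  # 50.0 gigaflops per watt
```

For comparison, that target is far beyond the efficiency of the systems on the 2012 list, which is why Dongarra flags power as a central design constraint.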