Exascale computing seen in this decade
Government support, as well as new computing techniques, will be needed
Computerworld - SEATTLE -- At the supercomputing conference here, there's an almost obsessive focus on developing an exascale computing system -- one that would be roughly 1,000 times more powerful than any existing system -- before the end of the decade.
In most people's lives, something expected to happen eight or nine years in the future might seem a long way off, but here at SC11, it feels as if the end of the decade and the arrival of exascale computing are just around the corner. Part of the push is coming from the U.S. Department of Energy, which will fund the development of these massive systems. The DOE told the industry this summer that it wants an exascale system delivered in the 2019-2020 time frame that won't use more than 20 megawatts of power, and the government has been seeking proposals on how to achieve that goal.
To put 20MW in perspective, consider the supercomputer that IBM is building for the DOE's Lawrence Livermore National Laboratory. Expected to be capable of operating at speeds of up to 20 petaflops, it will be one of the largest supercomputers in the world -- and one of the most energy efficient. Even so, when it's fully operational next year, it will draw somewhere in the range of 7 to 8 megawatts of power, according to IBM. An exascale system would deliver 1,000 petaflops of computing power. (A petaflop is a quadrillion floating-point operations per second.)
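The gap between those figures can be sketched with back-of-envelope arithmetic. The sketch below uses only the numbers quoted in the article, and assumes the high end of IBM's 7-to-8-megawatt estimate; treating peak flops and power draw as the only inputs is of course a simplification:

```python
# Back-of-envelope power-efficiency comparison using the article's figures.

PETAFLOP = 1e15  # floating-point operations per second

llnl_flops = 20 * PETAFLOP        # IBM's system at Lawrence Livermore
llnl_power_watts = 8e6            # assumed: high end of IBM's 7-8 MW estimate

exascale_flops = 1000 * PETAFLOP  # 1,000 petaflops = 1 exaflop
exascale_power_watts = 20e6       # the DOE's 20 MW ceiling

# Efficiency expressed in gigaflops per watt
llnl_gflops_per_watt = llnl_flops / 1e9 / llnl_power_watts
exa_gflops_per_watt = exascale_flops / 1e9 / exascale_power_watts

print(f"LLNL system:   {llnl_gflops_per_watt:.1f} GF/W")  # 2.5 GF/W
print(f"Exascale goal: {exa_gflops_per_watt:.1f} GF/W")   # 50.0 GF/W
print(f"Gap: {exa_gflops_per_watt / llnl_gflops_per_watt:.0f}x")  # 20x
```

Even measured against one of the most energy-efficient machines of its day, the DOE target demands a roughly 20-fold efficiency gain; the factor of 50 Scott cites is presumably measured from a less efficient contemporary baseline.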
"We're in a power-constrained world now," said Steve Scott, CTO of Nvidia's Tesla business. "The performance we can get on a chip is constrained not by the number of transistors we can put on a chip, but rather by the power."
Scott said x86 CPU technology is limited by its processing overhead. Graphics processing units (GPUs), in contrast, deliver throughput with very little overhead and use less energy per operation.
Nvidia has been building high-performance computing (HPC) systems with its own GPUs and third-party CPUs. In its hybrid approach, the company has often used CPUs from Advanced Micro Devices, but it's also moving toward ARM processors, which are widely used in cellphones. The efforts may lead to the development of a hybrid product featuring a GPU integrated with an ARM processor.
Scott believes the DOE's 20MW goal can be achieved by 2022. But if the government's exascale program comes through with funding, Nvidia could pursue more aggressive circuit and architectural research, making the 20MW goal achievable by 2019.
Scott said reaching that level of efficiency will require a roughly 50-fold improvement in power efficiency.
While 20MW might seem like a lot of power, Scott points out that there are cloud computing facilities that require as much as 100MW.
Rajeeb Hazra, general manager of technical computing at Intel, said his company plans to meet the 20MW exascale goal by 2018 -- one year ahead of the U.S. government's expectation. He offered that prediction while unveiling the company's Knights Corner product, a new 50-core processor capable of one teraflop of sustained performance.
While hardware makers deal with power and performance issues, HPC users are facing challenges in scaling codes to make full use of petaflop computing systems and the expected exascale systems.
Before reaching exascale, vendors will produce systems that can scale into the hundreds of petaflops. IBM, for instance, says its new Blue Gene/Q system will be capable of 100 petaflops.
Kimberly Cupps, the computing division leader and Sequoia project manager at Lawrence Livermore, said she would be happy with 20 petaflops.
"We're thrilled to have this machine so close to our grasp," she said of the 20 petaflop system. "We are going to solve many problems of national importance, ranging from materials modeling and weapons science to climate change and energy modeling."
Of IBM's claim that its system can scale to 100 petaflops, Cupps said, "That's IBM saying that; I'll vouch for 20."
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His email address is firstname.lastname@example.org.