Exascale computing seen in this decade
Government support, as well as new computing techniques, will be needed
Computerworld - SEATTLE -- At the supercomputing conference here, there's an almost obsessive focus on developing an exascale computing system -- one that would be roughly 1,000 times more powerful than any existing system -- before the end of the decade.
In most people's lives, something expected to happen eight or nine years from now might seem a long way off, but here at SC11, it feels as if the end of the decade and the arrival of exascale computing are just around the corner. Part of the push is coming from the U.S. Department of Energy, which will fund the development of these massive systems. The DOE told the industry this summer that it wants an exascale system delivered in the 2019-2020 time frame that uses no more than 20 megawatts of power, and it has been seeking proposals on how to achieve that goal.
To put 20MW in perspective, consider Sequoia, the supercomputer IBM is building for the DOE's Lawrence Livermore National Laboratory. Expected to operate at speeds of up to 20 petaflops, it will be one of the largest supercomputers in the world -- and one of the most energy-efficient. Even so, when it's fully powered on next year, it will use somewhere in the range of 7 to 8 megawatts, according to IBM. An exascale system would deliver 1,000 petaflops of computing power. (A petaflop is a quadrillion floating-point operations per second.)
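The gap those figures imply can be sketched with back-of-the-envelope arithmetic (a rough check based on the numbers above, assuming Sequoia draws about 7.5MW at its 20-petaflop peak):

```python
# Energy-efficiency comparison implied by the article's figures.
PETAFLOP = 1e15  # floating-point operations per second

# IBM's Sequoia: up to 20 petaflops at roughly 7-8MW (7.5MW assumed here).
sequoia_gflops_per_watt = (20 * PETAFLOP) / 7.5e6 / 1e9

# DOE target: 1,000 petaflops (one exaflop) within a 20MW power budget.
exascale_gflops_per_watt = (1000 * PETAFLOP) / 20e6 / 1e9

print(round(sequoia_gflops_per_watt, 1))   # 2.7 gigaflops per watt
print(exascale_gflops_per_watt)            # 50.0 gigaflops per watt
```

Even starting from one of the most efficient machines of the day, the 20MW exascale target demands well over an order of magnitude more computation per watt.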
"We're in a power-constrained world now," said Steve Scott, CTO of Nvidia's Tesla business. "The performance we can get on a chip is constrained not by the number of transistors we can put on a chip, but rather by the power."
Scott says x86 CPU technology is limited by its overhead processes. Graphics processing units (GPUs), in contrast, deliver throughput with very little overhead and use less energy per operation.
Nvidia has been building high-performance computing (HPC) systems with its own GPUs and third-party CPUs. In its hybrid approach, the company has often used CPUs from Advanced Micro Devices, but it's also moving toward ARM processors, which are widely used in cellphones. The efforts may lead to the development of a hybrid product featuring a GPU integrated with an ARM processor.
Scott believes the DOE's 20MW goal can be achieved by 2022. But if the government's exascale program comes through with funding, Nvidia could pursue more aggressive circuit and architectural research, making it possible to reach the 20MW goal by 2019.
Scott said reaching that level of efficiency will require improving power usage by a factor of 50.
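Scott's factor of 50 can be reverse-engineered from the DOE target (a hedged sketch: the ~1 gigaflop-per-watt baseline below is an assumption consistent with his figure, not a number stated in the article):

```python
# Rough arithmetic behind the 20MW exascale goal and the "factor of 50."
# Assumption (not from the article): a baseline of ~1 gigaflop per watt,
# which is what Scott's factor of 50 implies for today's systems.
EXAFLOP = 10**18                # floating-point operations per second
target_power_watts = 20e6       # DOE's 20MW cap

required_gflops_per_watt = EXAFLOP / target_power_watts / 1e9
baseline_gflops_per_watt = 1.0  # assumed baseline

improvement = required_gflops_per_watt / baseline_gflops_per_watt
print(required_gflops_per_watt)  # 50.0 gigaflops per watt
print(improvement)               # 50.0 -- matching Scott's factor of 50
```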
While 20MW might seem like a lot of power, Scott points out that there are cloud computing facilities that require as much as 100MW.
Rajeeb Hazra, general manager of technical computing at Intel, said his company plans to meet the 20MW exascale goal by 2018 -- one year ahead of the U.S. government's expectation. He offered that prediction during the unveiling of the company's Knights Corner product, a new 50-core processor capable of one teraflop of sustained performance.
While hardware makers deal with power and performance issues, HPC users are facing challenges in scaling codes to make full use of petaflop computing systems and the expected exascale systems.
Before reaching exascale, vendors will produce systems that can scale into the hundreds of petaflops. IBM, for instance, says its new Blue Gene/Q system will be capable of 100 petaflops.
Kimberly Cupps, the computing division leader and Sequoia project manager at Lawrence Livermore, said she would be happy with 20 petaflops.
"We're thrilled to have this machine so close to our grasp," she said of the 20 petaflop system. "We are going to solve many problems of national importance, ranging from materials modeling and weapons science to climate change and energy modeling."
Of IBM's claim that its system can scale to 100 petaflops, Cupps said, "That's IBM saying that; I'll vouch for 20."
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His email address is email@example.com.