Exascale unlikely before 2020 due to budget woes
Prototype systems still eyed for 2018, but only if Congress approves billions in funding, say U.S. DOE officials
Computerworld - SALT LAKE CITY, Utah -- The U.S. Dept. of Energy, which builds the world's largest supercomputers, is now targeting 2020 to 2022 for an exascale system, two to four years later than earlier expectations.
The new timeframe assumes that Congress will fund the project in the fiscal 2014 budget. The White House will deliver its budget request to Congress early next year for fiscal 2014, which begins next Oct. 1.
Despite a belief among scientists that exascale systems can deliver scientific breakthroughs, improve U.S. competitiveness and deepen the understanding of problems like climate change, the development effort has so far received limited funding -- nowhere near the billions of dollars likely needed.
Experts had previously expected an exascale system to arrive in 2018. Those expectations were based, in part, on predictable increases in compute power.
In 1997, the ASCI Red supercomputer built by Intel and installed at Sandia National Laboratories broke the teraflop barrier, or one trillion calculations per second. ASCI Red cost $55 million to build.
By comparison, Intel's just-released 60-core Xeon Phi co-processor, which is also capable of one teraflop, is priced at $2,649.
In 2008, a decade after ASCI Red debuted, IBM's Roadrunner began operating at Los Alamos National Laboratory. Roadrunner ran at petaflop speeds, or 1,000 trillion (one quadrillion) sustained floating point operations per second.
The next leap, an exaflop, is 1,000 petaflops.
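The three tiers mentioned above each scale by a factor of 1,000. A quick Python sketch makes the jumps explicit (the flops values are the standard definitions, not measured benchmarks):

```python
# Standard definitions of the performance tiers discussed in the article.
teraflop = 10**12   # one trillion floating point operations per second (ASCI Red, 1997)
petaflop = 10**15   # one quadrillion flops (Roadrunner, 2008)
exaflop  = 10**18   # 1,000 petaflops -- the next target

print(petaflop // teraflop)  # 1000
print(exaflop // petaflop)   # 1000
print(exaflop // teraflop)   # 1000000 -- a million times ASCI Red's speed
```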
The DOE is working on a report for Congress that will detail its "Exascale Computing Initiative" (ECI). The report, initially due in February, is expected to spell out a plan and cost for building an exascale system.
William Harrod, research division director for advanced scientific computing in the DOE Office of Science, previewed the ECI report at the SC12 supercomputing conference held here last week.
"When we started this, [the timetable was] 2018; now it's become 2020 but really it is 2022," said Harrod.
"I have no doubt that somebody out there could put together an exaflop system in the 2018-2020 timeframe, but I don't think it's going to be one that's going to be destined for solving real world applications," said Harrod.
China, Europe and Japan are all working on exascale initiatives, so it's not assured that the U.S. will deliver the first exascale system.
China, in particular, has been investing heavily in large HPC systems and in its own microprocessor and interconnect technologies.
The U.S. set up some strict criteria for its exascale effort.
The system needs to be relatively low power and serve as a platform for a wide range of applications. The government also wants exascale research spending to lead to marketable technologies that can help the IT industry.
The U.S. plan, when delivered to Congress, will call for building two or three prototype systems by 2018. Once a technology approach is proven, the U.S. will order anywhere from one to three exascale systems, said Harrod.
Exascale system development poses a unique set of power, memory, concurrency and resiliency challenges.
Resiliency refers to the ability to keep a massive system, with millions of cores, continuously running despite component failures. "I think resiliency is going to be a great challenge and it really would be nice if the computer would stay up for more than a couple of hours," said Harrod.
The scale of the challenge is evident in the power goals.
The U.S. wants an exascale system that needs no more than 20 megawatts (MW) of power. In contrast, the leading petascale systems in operation today use as much as 8 MW or more.
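Those two figures imply a steep jump in energy efficiency. A rough back-of-the-envelope sketch, taking the article's numbers at face value (roughly one petaflop at 8 MW today versus 1,000 petaflops at 20 MW for the target):

```python
# Back-of-the-envelope efficiency comparison implied by the article's figures.
petaflops_per_mw_today  = 1 / 8       # ~1 petaflop drawing ~8 MW
petaflops_per_mw_target = 1000 / 20   # 1 exaflop (1,000 petaflops) within 20 MW

improvement = petaflops_per_mw_target / petaflops_per_mw_today
print(f"Required efficiency gain: {improvement:.0f}x")  # 400x
```

In other words, hitting the 20 MW goal means delivering 1,000 times the performance on only 2.5 times the power budget, a roughly 400-fold improvement in flops per watt.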