Exascale unlikely before 2020 due to budget woes
Prototype systems still eyed for 2018, but only if Congress approves billions in funding, say U.S. DOE officials
Computerworld - SALT LAKE CITY, Utah -- The U.S. Dept. of Energy, which builds the world's largest supercomputers, is now targeting 2020 to 2022 for an exascale system, two to four years later than earlier expectations.
The new timeframe assumes that Congress will fund the project in the fiscal 2014 budget. The White House will deliver its budget request to Congress early next year for fiscal 2014, which begins next Oct. 1.
Despite a belief among scientists that exascale systems can deliver major scientific breakthroughs, improve U.S. competitiveness and deepen the understanding of problems like climate change, the development effort has so far received limited funding -- nowhere near the billions of dollars likely needed.
Experts had previously expected an exascale system to arrive in 2018. Those expectations were based, in part, on predictable increases in compute power.
In 1997, the ASCI Red supercomputer built by Intel and installed at Sandia National Laboratories broke the teraflop barrier, or one trillion calculations per second. ASCI Red cost $55 million to build.
By comparison, Intel's just-released 60-core Xeon Phi co-processor, which is also capable of one teraflop, is priced at $2,649.
In 2008, a little over a decade after ASCI Red debuted, IBM's Roadrunner began operating at Los Alamos National Laboratory. Roadrunner operated at petaflop speeds, or 1,000 trillion (one quadrillion) sustained floating point operations per second.
The next leap, an exaflop, is 1,000 petaflops.
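The scale of each leap can be put in plain arithmetic. A minimal sketch (the constants come from the milestones above; the time comparison is my own illustration):

```python
# The three milestones described above, in floating-point operations per second.
TERAFLOP = 10**12   # ASCI Red, 1997
PETAFLOP = 10**15   # Roadrunner, 2008
EXAFLOP = 10**18    # the DOE's 2020-2022 target

# Each milestone is a 1,000x leap over the previous one.
print(PETAFLOP // TERAFLOP)   # 1000
print(EXAFLOP // PETAFLOP)    # 1000

# A workload that would occupy an exascale machine for one second
# would have kept ASCI Red busy for a million seconds -- about 11.6 days.
seconds_on_asci_red = EXAFLOP / TERAFLOP
print(seconds_on_asci_red / 86_400)   # ~11.57 days
```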
The DOE is working on a report for Congress that will detail its "Exascale Computing Initiative" (ECI). The report, initially due in February, is expected to spell out a plan and cost for building an exascale system.
William Harrod, a research division director for advanced scientific computing in the DOE Office of Science, previewed the ECI report at the SC12 supercomputing conference held here last week.
"When we started this, [the timetable was] 2018; now it's become 2020 but really it is 2022," said Harrod.
"I have no doubt that somebody out there could put together an exaflop system in the 2018-2020 timeframe, but I don't think it's going to be one that's going to be destined for solving real world applications," said Harrod.
China, Europe and Japan are all working on exascale initiatives, so it's not assured that the U.S. will deliver the first exascale system.
China, in particular, has been investing heavily in large HPC systems and in its own microprocessor and interconnects technologies.
The U.S. set up some strict criteria for its exascale effort.
The system must be relatively low power and serve as a platform for a wide range of applications. The government also wants exascale research spending to lead to marketable technologies that can help the IT industry.
The U.S. plan, when delivered to Congress, will call for building two or three prototype systems by 2018. Once a technology approach is proven, the U.S. will order anywhere from one to three exascale systems, said Harrod.
Exascale system development poses a unique set of power, memory, concurrency and resiliency challenges.
Resiliency refers to the ability to keep a massive system, with millions of cores, continuously running despite component failures. "I think resiliency is going to be a great challenge and it really would be nice if the computer would stay up for more than a couple of hours," said Harrod.
The scale of the challenge is evident in the power goals.
The U.S. wants an exascale system that draws no more than 20 megawatts (MW) of power. In contrast, the leading petascale systems in operation today use 8 MW or more.
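The power goal implies a steep jump in energy efficiency. A rough back-of-the-envelope calculation, using the figures above (the flops-per-watt framing is my own, not the DOE's):

```python
# Efficiency implied by the DOE's 20 MW exascale goal, versus a
# leading petascale system drawing roughly 8 MW (figures from the article).
exa_flops = 1e18        # 1 exaflop
exa_power_w = 20e6      # 20 MW cap
peta_flops = 1e15       # 1 petaflop
peta_power_w = 8e6      # ~8 MW

exa_eff = exa_flops / exa_power_w     # 5e10 flops/W, i.e. 50 gigaflops per watt
peta_eff = peta_flops / peta_power_w  # 1.25e8 flops/W, i.e. 125 megaflops per watt

# The goal demands roughly a 400x improvement in flops per watt.
print(exa_eff / peta_eff)   # 400.0
```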