Obama sets $126M for next-gen supercomputing
'Exascale' arrives for first time in federal budget
Computerworld - WASHINGTON -- President Barack Obama has included funding in his 2012 budget proposal for development of the next generation of supercomputers, an exascale system.
The money is going to the U.S. Department of Energy, which has led in developing the world's fastest computers.
If Congress approves Obama's request, DOE will get $126 million for exascale development, with about $91 million for the DOE's Office of Science and $36 million for the National Nuclear Security Administration.
In seeking this funding, the Obama administration made a little history. A DOE spokesman said it marked the first time that the budget explicitly references "exascale." The DOE had budgeted just over $24 million in 2011 in the context of "extreme scale" computing.
Exascale systems would be 1,000 times faster than a one-petaflop machine, and hundreds of times more powerful than the Tianhe-1A, the Chinese supercomputer that was recently ranked as the world's fastest.
The exascale funding is part of an overall DOE advanced-computing request of $465 million for next year, a 21% increase over the 2010 budget (a two-year comparison).
The White House isn't comparing spending to the 2011 budget because Congress, for now, is funding the government through Continuing Resolutions, which could change the budget amount for this year. The current funding resolution expires March 4.
In setting aside money for exascale computing, the White House is planning for a predictable future in high-performance computing. Every 10 or 11 years, high-performance computing crosses a barrier, thanks largely to improvements in chip performance.
In 1997, ASCI Red, a computer at DOE's Sandia National Labs, achieved 1.3 teraflops; a teraflop is one trillion sustained floating-point operations per second. In 2008, IBM's Roadrunner, at DOE's Los Alamos National Laboratory, became the first system to reach one petaflop, or more than one thousand trillion (one quadrillion) operations per second.
An exaflop is a million trillion calculations per second, or a quintillion, and is a thousand times faster than a petaflop.
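The performance ladder described above can be sanity-checked with some quick arithmetic; the constants below are just the standard SI prefixes, not figures from any particular system:

```python
# Back-of-the-envelope check of the tera -> peta -> exa ladder.
TERAFLOP = 10**12  # one trillion floating-point operations per second
PETAFLOP = 10**15  # one quadrillion ops/sec (Roadrunner, 2008)
EXAFLOP = 10**18   # one quintillion ops/sec (the exascale target)

# Each rung is a 1,000x step over the previous one.
assert PETAFLOP // TERAFLOP == 1000
assert EXAFLOP // PETAFLOP == 1000
```

Each generational barrier in the article corresponds to one factor-of-1,000 step on this ladder.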
An exascale system is expected to arrive in the 2018-2020 time frame, but delivery is also contingent on the development of software that can utilize what may be 100 million cores.
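A rough calculation, using the article's 100-million-core estimate, shows why the software challenge is so large: even at that core count, each core must sustain billions of operations per second, all coordinated across the machine.

```python
# Rough arithmetic on the per-core burden of an exascale system.
EXAFLOP = 10**18      # one quintillion ops/sec, the exascale target
CORES = 100_000_000   # the article's estimate of the core count

# Sustained ops/sec each core must deliver if the work divides evenly.
per_core = EXAFLOP // CORES
print(per_core)  # 10_000_000_000, i.e. 10 gigaflops per core
```

Keeping 100 million cores that busy simultaneously, rather than the raw speed of any single core, is the scaling problem the software research aims to solve.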
Supercomputers are used for modeling and simulation, and the larger the systems, the higher the resolution. An exascale system, for instance, may be able to simulate the workings of an entire human cell as well as improve forecasting and understanding of climate change.
Also, the advances needed to build these systems, such as faster networking, may ultimately find their way into business-class servers.
The DOE has not yet said how the exascale funding will be used, but the supercomputing research community has active efforts under way. In the interim, DOE is building 10-petaflop systems, such as the recently announced IBM system planned for Argonne National Laboratory.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. His e-mail address is firstname.lastname@example.org.