U.S. to spend millions on massive, ultrafast supercomputers
Forget TFLOPS; PFLOPS of computing power are on the way
Computerworld - The U.S. government is planning to spend hundreds of millions of dollars over the next several years to develop huge supercomputers with power beyond anything available today. The aim is to address the most challenging problems facing science, as well as national security and industry.
Once completed, these systems will be capable of sustained petascale computing speeds, equal to quadrillions of calculations per second. For a sense of the scale of these planned systems, the leading machines on the current Top500 supercomputer list reach only multiple-TFLOPS speeds (trillions of floating-point operations per second). The latest Top500 list, updated twice a year, is due out tomorrow.
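To put those units side by side: a petaflop is a thousand teraflops. A quick back-of-the-envelope sketch (the workload size below is illustrative, not a figure from the article) shows how the jump shortens time-to-solution:

```python
# Rough scale comparison between terascale and petascale machines.
# The 10**18-operation workload is an illustrative assumption,
# not a benchmark cited in the article.

TFLOPS = 10**12   # one trillion floating-point operations per second
PFLOPS = 10**15   # one quadrillion floating-point operations per second

# One petaflop equals a thousand teraflops.
ratio = PFLOPS // TFLOPS
print(ratio)  # 1000

work = 10**18  # total floating-point operations in a hypothetical job

# Hours to finish at a sustained 207 TFLOPS (the Qbox rate reported
# below) versus at a sustained 1 PFLOPS.
hours_terascale = work / (207 * TFLOPS) / 3600
hours_petascale = work / (1 * PFLOPS) / 3600
print(f"{hours_terascale:.2f} h at 207 TFLOPS vs {hours_petascale:.2f} h at 1 PFLOPS")
```

The point of the sketch is only the ratio: sustaining a petaflop turns a job of this size from an overnight run into minutes.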
But PFLOPS (or "petaflop") systems are coming. Earlier this month, Seattle-based Cray Inc. said it had signed a contract worth $200 million to deliver a PFLOPS-capable system to the U.S. Department of Energy's (DOE) Oak Ridge National Laboratory. That system, based on Advanced Micro Devices Inc. processors, will be built in phases of ever-increasing speeds, and is due to be completed in 2008.
The National Science Foundation (NSF) this month began seeking proposals for a supercomputer that could cost as much as $200 million. And in July, the Defense Advanced Research Projects Agency (DARPA), which was responsible for creating the Internet, will award two supercomputer development projects expected to cost several hundred million dollars.
The scale of the computing power on its way will be so enormous that "we have to change the way we do computational science to really take advantage of these machines," said Dimitri Kusnezov, head of the DOE's advanced simulation and computing program, which operates the world's most powerful supercomputer, the IBM BlueGene/L. That supercomputer, with more than 131,000 IBM Power processors, was the No. 1 system on the Top500 list when those rankings were last updated in November.
This DOE BlueGene system broke a record this month when it ran a scientific code, called Qbox, at a sustained level of 207 TFLOPS. While the system benchmarks higher on test codes, achieving high levels of performance with a real-world application is a more difficult task because of the complexity and size of the code, according to those involved with the project.
But Kusnezov said that when he considers the performance of future systems, including an IBM system built from 250,000 processors, their capabilities will challenge scientists.
"The question is what they would do with an infinite amount of computing speed," said Kusnezov, referring to scientists. "What would they calculate? And I'll wager that they don't have an answer for you. Because people think about their problems within the constraints of what they think they can calculate, and once you remove that constraint, people are lost."