Army makes major Linux HPC cluster move
Purchase will more than double MSRC's computing capability
Computerworld - A U.S. Army supercomputing center with a legacy that dates to the first large computer, the Electronic Numerical Integrator and Computer (ENIAC), launched in 1946, is moving to Linux-based clusters in a major hardware purchase that will more than double its computing capability.
The Army Research Laboratory Major Shared Resource Center (MSRC) is buying four Linux Networx Inc. Advanced Technology Clusters, including a system with 4,488 processing cores across 1,122 nodes, with each node made up of two dual-core Intel Xeon chips. A second system has 842 nodes.
In total, the purchase will increase the center's computing capability from 36 trillion floating-point operations per second (TFLOPS) to more than 80 TFLOPS, Army officials said.
The MSRC, which is based at the Aberdeen Proving Ground in Harford County, Md., has been involved in every aspect of computing technology since its beginning, and this decision to move into commodity clusters was not made quickly, said Charles J. Nietubicz, director of the MSRC.
The lab held a symposium in 2003 to explore the issue and began running a small, 256-processor cluster system. "We saw that cluster computing was this new kid on the block and was interesting," said Nietubicz, but the center wasn't about to start scrapping its other systems made by Silicon Graphics Inc., Sun Microsystems Inc. and IBM, he said.
The MSRC isn't disclosing the purchase price, but Earl Joseph, an analyst at IDC in Framingham, Mass., said the average cost for a cluster works out to about $2,000 per processor compared with $12,000 per processor for a RISC-based system.
Nietubicz said other vendors "are going to have to begin to recognize that either they provide some other kind of performance to try to gain the increased price, or they are going to have to reduce the price to provide equivalent performance."
Bluffdale, Utah-based Linux Networx builds systems using Advanced Micro Devices Inc. and Intel Corp. chips. In addition to the four systems sold to the MSRC, it also sold one to the Dugway Proving Ground. In total, the sale of the five systems is the company's largest supercomputing order ever. The sale was announced today.
Nietubicz said he was convinced that clusters can work based on the MSRC's ability to get certain computational codes used in fluid dynamics, structural mechanics and other processes to scale to multiple processors mostly by using Message Passing Interface (MPI) protocol-based code. MPI is used to create parallel applications.
The major competitor to supercomputing clusters and their distributed-memory systems is symmetric multiprocessing, or SMP, a shared-memory design primarily used in RISC-based systems. Of the total $9.1 billion high-performance computing market last year, clusters accounted for about half of the sales, according to IDC.
A major limitation for moving to clusters is whether the high-performance software can scale to multiple processors. Systems that have been written in MPI can do so, but Joseph said it's difficult to accomplish for companies since many off-the-shelf software packages don't use MPI. Government labs and universities, which own their own code, can usually invest the time to convert their codes into MPI, he said.
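The scaling the article describes rests on MPI's distributed-memory model: each process (or "rank") works on its own slice of the data and coordinates through explicit messages rather than shared memory. Real MPI code requires an MPI library and launcher such as mpirun; as a loose stand-in using only Python's standard library, the same scatter-compute-gather pattern between separate processes looks like this (the function names here are illustrative, not from any MPI binding):

```python
from multiprocessing import Process, Pipe

def worker(conn, chunk):
    # Each "rank" computes a partial result on its own chunk of data,
    # then sends it back over an explicit message channel -- the core
    # idea behind MPI's distributed-memory model.
    conn.send(sum(x * x for x in chunk))
    conn.close()

def parallel_sum_of_squares(data, nprocs=4):
    # Split the data across processes (analogous to scattering work
    # to MPI ranks).
    chunks = [data[i::nprocs] for i in range(nprocs)]
    pipes, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        p = Process(target=worker, args=(child, chunk))
        p.start()
        pipes.append(parent)
        procs.append(p)
    # Collect the partial sums (analogous to a reduce at rank 0).
    total = sum(conn.recv() for conn in pipes)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    # Sum of squares 0..99 computed across 4 processes.
    print(parallel_sum_of_squares(list(range(100))))  # 328350
```

The point Joseph raises is visible even in this sketch: the decomposition and communication are written by hand, which is why codes not already structured around message passing need real conversion effort before they can scale on a cluster.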
Nietubicz doesn't see any major limitations to clusters, and while not all codes can scale on clusters, he said the same problems occurred as the center moved from vector to shared memory. "In each major transition, there were always people saying, 'I can't use that, I need my old stuff.'"