U.S. Army Laboratory Makes Major Linux Computing Cluster Move
Purchase will more than double the MSRC's computing capability
Computerworld - A U.S. Army supercomputing center with a legacy that dates back to the Electronic Numerical Integrator and Computer (ENIAC) launched in 1946 is moving to Linux-based clusters that will more than double its computing capability.
The Army Research Laboratory Major Shared Resource Center (MSRC) in Aberdeen, Md., is buying four Linux Networx Inc. Advanced Technology Clusters, including a system with 4,488 processing cores, or 1,122 nodes, with each node made up of two dual-core Intel Xeon chips. A second system has 842 nodes. In total, the purchase will increase the MSRC's computing capability from 36 trillion floating-point operations per second to more than 80 TFLOPS, Army officials said.
The decision to move into commodity clusters was not made quickly, said Charles J. Nietubicz, director of the MSRC.
The lab held a symposium in 2003 to explore the issue and began running a small, 256-processor cluster system. "We saw that cluster computing was this new kid on the block and was interesting," said Nietubicz. But the center wasn't about to start scrapping its other systems, made by Silicon Graphics Inc., Sun Microsystems Inc. and IBM, he said.
The MSRC isn't disclosing the purchase price, but Earl Joseph, an analyst at IDC in Framingham, Mass., said the average cost for a cluster works out to about $2,000 per processor, compared with $12,000 per processor for a RISC-based system.
Nietubicz said other vendors will need to improve their systems' performance or "reduce the price to provide equivalent performance."
Bluffdale, Utah-based Linux Networx builds systems using Advanced Micro Devices Inc. and Intel Corp. chips. The MSRC sale is the vendor's largest supercomputing order ever.
Nietubicz said he became convinced that clusters could work after the MSRC got certain computational codes used in fluid dynamics, structural mechanics and other disciplines to scale across multiple processors, largely by writing them with the Message Passing Interface (MPI), a standard programming model for building parallel applications. Clusters accounted for about half of the total $9.1 billion in sales in the high-performance computing market last year, according to IDC.
A major consideration in moving to clusters is whether the high-performance software can scale to multiple processors. Applications written in MPI can do so, but Joseph said companies that rely on off-the-shelf software usually find the switch difficult because commercial applications don't use MPI. Government labs and universities, which own their own code, can usually invest the time to convert it to MPI, he said. Nietubicz doesn't see any major limitations to clusters, and while not all code can scale on them, he said similar scaling problems arose when the center moved from vector systems to shared-memory machines.
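The MPI model the article describes boils down to a fixed set of ranks (processes) that each work on a slice of the problem and exchange explicit messages, with one rank combining the partial results. As a rough illustration of that pattern, here is a minimal sketch using Python's standard multiprocessing module as a stand-in for a real MPI runtime (the structure mirrors an MPI rank loop and an MPI_Reduce-style gather; the function names and the sum-of-squares workload are invented for the example):

```python
# Illustrative sketch of the message-passing parallel model (NOT real MPI):
# each "rank" computes a partial result over its slice of the data and
# sends a message back to rank 0, which reduces the partials to a total.
from multiprocessing import Process, Queue


def worker(rank, size, data, queue):
    # Domain decomposition: rank r takes every size-th element starting at r.
    chunk = data[rank::size]
    partial = sum(x * x for x in chunk)
    queue.put(partial)  # explicit message back to the gathering rank


def parallel_sum_of_squares(data, size=4):
    queue = Queue()
    procs = [Process(target=worker, args=(r, size, data, queue))
             for r in range(size)]
    for p in procs:
        p.start()
    # Rank 0 gathers one message per rank and reduces (analogous to MPI_Reduce).
    total = sum(queue.get() for _ in range(size))
    for p in procs:
        p.join()
    return total


if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))
```

In real MPI code the same roles are played by MPI_Init, MPI_Comm_rank/MPI_Comm_size, and collective operations such as MPI_Reduce; the point of the sketch is only that the work must be partitioned and communicated explicitly, which is why off-the-shelf applications not written this way are hard to move onto clusters.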