Intel's MIC processor finds a big customer in Texas
Supercomputer may reach 15 petaflops
Computerworld - Intel's forthcoming MIC processor will be used by the Texas Advanced Computing Center to build a supercomputer with a peak performance of 10 petaflops that will eventually be upgraded to "at least" 15 petaflops.
The system will include a combination of eight-core Intel Xeon chips, which will supply two petaflops of compute capacity, and chips based on the MIC (Many Integrated Core) architecture. The highly parallel MIC processors will provide an additional eight petaflops of performance to the Texas system, code-named "Stampede."
"This is definitely the first serious appearance of MIC in the marketplace," said Steve Conway, an analyst of high-performance computing at research firm IDC.
The Texas Advanced Computing Center, located at the University of Texas at Austin, is "immediately" getting $27.5 million from the National Science Foundation (NSF) to build the system, which is expected to be running by January 2013.
The estimated federal investment over a four-year period will be $50 million. That includes plans to add future generations of MIC chips, bringing the compute capacity to 15 petaflops, or 15,000 trillion calculations per second. NSF-funded computers are available to scientists to do a wide range of research in areas such as climate, energy, processor improvements and even the spread of diseases.
The supercomputer will also include several thousand Dell "Zeus" servers.
The MIC chip that the Texas Advanced Computing Center will be using is code-named "Knights Corner," a co-processor designed for highly parallel workloads. It may have more than 50 cores. Knights Corner competes with Nvidia GPUs; both are used as co-processors to accelerate compute-intensive workloads.
Nathan Brookwood, an analyst at Insight 64, said Nvidia is in the catbird seat for people looking for massively parallel systems, but the Intel chip has been designed to be amenable to x86 programming environments.
"It's easier to move code over because you do have the x86 compatibility and standard Intel compilers," Brookwood said. But on the other hand, a lot of code that runs in supercomputing environments is adapted to OpenCL, which Intel will support as well, he said.
Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov or subscribe to Patrick's RSS feed. His e-mail address is firstname.lastname@example.org.