Japanese supercomputer gets faster but draws no more power
IDG News Service - Tokyo Institute of Technology's newest supercomputer, Tsubame 2.0, proves that high-performance computing can go hand in hand with energy efficiency. The new computer, which was inaugurated last week, is the second most energy-efficient supercomputer in the world, thanks largely to an administrator who was more concerned with the monthly electricity bill than with the cost of the hardware.
The new machine has a peak performance of 2.4 petaflops, 15 times that of its predecessor. It is Japan's first petaflop-level supercomputer and was ranked the fourth most powerful machine in the world in November's Top 500 supercomputer ranking.
Like the previous machine, Tsubame 2.0 runs a mix of CPUs and graphics processors (GPUs). GPUs are good at quickly performing the same computation on large amounts of data, so they are much more efficient than CPUs at tackling problems in molecular dynamics, physics simulation and image processing. That efficiency also helps the machine draw less power for a given amount of work.
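To make the idea concrete, here is a minimal CUDA sketch, not code from Tsubame itself, showing the data-parallel pattern GPUs handle well: the same operation applied to every element of a large array, with one lightweight GPU thread per element.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per array element: every thread runs the same
// instruction stream on different data, which is exactly the
// pattern GPUs execute efficiently.
__global__ void scaleAdd(const float *in, float *out, float a, float b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a * in[i] + b;
}

int main() {
    const int n = 1 << 20;                // one million elements
    const size_t bytes = n * sizeof(float);

    // Prepare input on the host (CPU).
    float *hIn = new float[n], *hOut = new float[n];
    for (int i = 0; i < n; ++i) hIn[i] = 1.0f;

    // Copy the data to the GPU, launch enough threads to cover it,
    // and copy the result back.
    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dIn, hIn, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scaleAdd<<<blocks, threads>>>(dIn, dOut, 2.0f, 1.0f, n);
    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", hOut[0]);     // expect 3.000000
    cudaFree(dIn); cudaFree(dOut);
    delete[] hIn; delete[] hOut;
    return 0;
}
```

A CPU would typically walk through such an array one loop iteration per core; on a GPU, hundreds of cores advance through the data at once, which is where both the speed and the energy efficiency come from.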
"Our CIO said, you guys can build a great machine, but you're not going to get any more electricity," said Satoshi Matsuoka, director of the Global Scientific Information and Computing Center at the university, of a discussion he had during the planning stages for Tsubame 2.0. The university was already spending around US$1.5 million per year powering the existing supercomputer and didn't want to see that figure rise.
"It wasn't the money, it wasn't the space, it wasn't our knowledge or capability, it was the power that basically was the limiter," he said.
Matsuoka took his specification and design to Hewlett-Packard, Nvidia and other companies that would help build the machine.
"We were talking about the requirements for this new system that was to be built," said Edward Turkel, marketing lead at HP's industry standard servers group, of a meeting with Matsuoka during the International Supercomputing Conference in 2009.
"Of course, it was going to be very high performance, multiple petaflops peak, over a petaflop sustained performance but oh by the way, it had to fit in a very small datacenter and use remarkably little power," said Turkel. "We all kind of scratched our heads and said 'This is going to be interesting.'"
HP had already been working with Nvidia on designing a GPU-based high-performance server when Matsuoka presented Tokyo Tech's requirements, said Turkel. As a result of the specification, the design was refined to meet both power and space requirements.
The result is a supercomputer made up of 1,408 computing nodes. At the heart of each node is an HP ProLiant SL390 server with Intel Xeon processors and Nvidia Tesla GPUs. Each node holds three Tesla chips, and each chip has 448 processing cores, for a total of almost 1.9 million graphics processing cores. It's these GPUs that give Tsubame most of its computing power.
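Those figures multiply out directly. A quick sketch of the arithmetic, using only the numbers quoted above:

```cuda
#include <cstdio>

// Core-count arithmetic from the figures in the article.
int main() {
    const long long nodes       = 1408;  // computing nodes
    const long long gpusPerNode = 3;     // Nvidia Tesla chips per node
    const long long coresPerGpu = 448;   // processing cores per Tesla chip

    long long totalGpuCores = nodes * gpusPerNode * coresPerGpu;
    printf("GPU cores: %lld\n", totalGpuCores);  // 1892352, "almost 1.9 million"
    return 0;
}
```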
The machine ranked fourth in the Top 500 with a sustained maximum performance of 1.2 petaflops (a petaflop represents a quadrillion floating-point operations per second) and second in the Green 500 with an energy efficiency of 958 megaflops per watt. It was the only computer to feature in the top five of both rankings.
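Dividing the sustained performance by the efficiency figure gives a rough sense of how much electricity the machine draws while running the benchmark. A small sketch, using the rounded figures from the two rankings, so the result is approximate:

```cuda
#include <cstdio>

// Rough power draw implied by the two rankings:
// sustained flops divided by flops-per-watt gives watts.
int main() {
    const double sustainedFlops = 1.2e15;  // ~1.2 petaflops (Top 500)
    const double flopsPerWatt   = 958e6;   // ~958 megaflops per watt (Green 500)

    double watts = sustainedFlops / flopsPerWatt;
    printf("Implied draw: about %.2f megawatts\n", watts / 1e6);  // ~1.25 MW
    return 0;
}
```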
With Tsubame 2.0 now built and online, the university has opened access to companies and organizations that wish to use some of its capacity. Computing time on the machine can be purchased via the university's website.