Japanese supercomputer gets faster but draws no more power
IDG News Service - Tokyo Institute of Technology's newest supercomputer, Tsubame 2.0, proves that high-performance computing can go hand in hand with energy efficiency. The new computer, inaugurated last week, is the second most energy-efficient supercomputer in the world, and that's thanks to an administrator who was more concerned with the monthly electricity bill than with the cost of the hardware.
The latest machine has a peak performance of 2.4 petaflops, 15 times that of its predecessor. It's Japan's first petaflop-level supercomputer and was ranked the fourth most powerful machine in the world in November's Top 500 supercomputer ranking.
Like the previous machine, Tsubame 2.0 runs a mix of CPUs and graphics processors (GPUs). GPUs are good at quickly performing the same computation on large amounts of data, so they are much more efficient than CPUs at tackling problems in molecular dynamics, physics simulation and image processing. They also help the machine draw less power.
"Our CIO said, you guys can build a great machine, but you're not going to get any more electricity," said Satoshi Matsuoka, director of the Global Scientific Information and Computing Center at the university, of a discussion he had during the planning stages for Tsubame 2.0. The university was already spending around US$1.5 million per year powering the existing supercomputer and didn't want to see that figure rise.
"It wasn't the money, it wasn't the space, it wasn't our knowledge or capability, it was the power that basically was the limiter," he said.
Matsuoka took his specification and design to Hewlett-Packard, Nvidia and other companies that would help build the machine.
"We were talking about the requirements for this new system that was to be built," said Edward Turkel, marketing lead at HP's industry standard servers group, of a meeting with Matsuoka during the International Supercomputing Conference in 2009.
"Of course, it was going to be very high performance, multiple petaflops peak, over a petaflop sustained performance but oh by the way, it had to fit in a very small datacenter and use remarkably little power," said Turkel. "We all kind of scratched our heads and said 'This is going to be interesting.'"
HP had already been working with Nvidia on designing a GPU-based high-performance server when Matsuoka presented Tokyo Tech's requirements, said Turkel. As a result of the specification, the design was refined to meet both power and space requirements.
The result is a supercomputer made up of 1,408 computing nodes. At the heart of each node is an HP ProLiant SL390 server with Intel Xeon processors and Nvidia Tesla GPUs. Each node holds three Tesla chips, and each chip has 448 processing cores, for a total of almost 1.9 million graphics processing cores. It's these GPUs that give Tsubame most of its power.
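The "almost 1.9 million" figure follows directly from the node and chip counts given above; a quick back-of-the-envelope check, using only the numbers in the article:

```python
# Total GPU core count for Tsubame 2.0, from the article's figures.
nodes = 1408          # computing nodes
gpus_per_node = 3     # Nvidia Tesla chips per node
cores_per_gpu = 448   # processing cores per Tesla chip

total_gpu_cores = nodes * gpus_per_node * cores_per_gpu
print(total_gpu_cores)  # 1892352 -- "almost 1.9 million", as reported
```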
The machine ranked fourth in the Top 500 with a sustained maximum performance of 1.2 petaflops (a petaflop represents a quadrillion floating-point operations per second) and second in the Green 500 with an energy efficiency of 958 megaflops per watt. It was the only computer to feature in the top five of both rankings.
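Taken together, the two rankings imply the machine's power draw during the benchmark run. This is only a rough sketch using the article's rounded figures (the official Green 500 measurements differ slightly):

```python
# Implied power draw during the Linpack benchmark, from the article's
# rounded performance and efficiency figures.
sustained_flops = 1.2e15   # 1.2 petaflops sustained (Top 500)
flops_per_watt = 958e6     # 958 megaflops per watt (Green 500)

power_watts = sustained_flops / flops_per_watt
print(round(power_watts / 1e6, 2), "MW")  # 1.25 MW
```

A draw in the low-megawatt range is consistent with the power-constrained design brief described earlier in the story.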
With Tsubame 2.0 now built and online, the university has opened access to companies and organizations that wish to use some of its capacity. Computing time on the machine can be purchased via the university's website.