World's most powerful big data machines charted on Graph 500
Currently, IBM's BlueGene/Q systems dominate this edition of the Graph 500. Nine out of the top 10 systems on the list are BlueGene/Q models -- compared to four BlueGene/Q systems on the November 2011 compilation. For Bader, this is proof that IBM is becoming more sensitive to current data processing needs. IBM's previous BlueGene system, BlueGene/L, was geared more toward floating-point operations and does not score as highly on the list.
Like the Top500, each successive edition of the Graph 500 shows steady performance gains among its participants. The top machine on the new list, Sequoia, traversed 15,363 billion edges per second. In contrast, the top entrant of the first list, compiled in 2010, traversed only 7 billion edges per second. This jump of more than three orders of magnitude is "staggering," Bader said.
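A quick check of that jump, using the figures above (both measured in traversed edges per second, or TEPS):

```python
import math

# Figures from the article, in traversed edges per second (TEPS)
first_list_teps = 7e9        # top entrant of the first (2010) Graph 500 list
sequoia_teps = 15_363e9      # Sequoia, top of the current list

ratio = sequoia_teps / first_list_teps
orders = math.log10(ratio)

print(f"speedup: {ratio:.0f}x, about {orders:.1f} orders of magnitude")
# prints "speedup: 2195x, about 3.3 orders of magnitude"
```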
The Graph 500 list is compiled twice a year, and, like the Top500, the results are announced at the Supercomputing conference, usually held in November, or the International Supercomputing Conference, usually held in June. Participation is voluntary: entrants run either the reference implementation of the benchmark or their own implementation, and submit the results.
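The metric the benchmark reports is traversed edges per second (TEPS), derived from timed breadth-first searches over a large generated graph. The following is a minimal sketch of that measurement, not the actual reference implementation: the real Graph 500 benchmark generates a Kronecker graph at a given scale, runs BFS from many random roots, validates each search tree, and reports a harmonic mean of per-search TEPS.

```python
import time
from collections import deque

def bfs_teps(adj, root):
    """Run one BFS from `root` and report traversed edges per second (TEPS).

    Simplified illustration only; see the lead-in for how the real
    benchmark differs (graph generation, multiple roots, validation).
    """
    visited = {root}
    queue = deque([root])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            # In this sketch, every examined edge counts (undirected edges
            # are examined once from each endpoint).
            edges_traversed += 1
            if w not in visited:
                visited.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed

# Tiny example graph as an adjacency list
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(f"{bfs_teps(adj, 0):.0f} TEPS")
```

Real submissions run this kind of traversal at scales of billions of vertices across thousands of nodes, which is why the metric stresses memory and network performance rather than floating-point throughput.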
Despite its name, the Graph 500 has yet to attract 500 submissions, though the numbers are improving with each edition. The first contest garnered nine participants, and this latest edition has 124 entrants.
Bader is quick to point out that the Graph 500 is not a replacement for the Top500 but rather a complementary benchmark. Still, the data-intensive benchmark could help answer some of the criticisms around the Top500's use of the Linpack benchmark.
Jack Dongarra, who helped create Linpack and now maintains the Top500, admitted during a discussion about the latest results of the Top500 at SC12 that Linpack does not measure all aspects of a computer's performance. He pointed to projects like Graph 500, the Green500 and the HPC Challenge that measure other aspects of supercomputer performance.
At least one system, the National Center for Supercomputing Applications' Blue Waters, was not entered in the Top500, because its keepers did not feel Linpack would adequately convey the true power of the machine.
Supercomputers are built according to the jobs they will execute, not to an arbitrary benchmark, Bader pointed out.
"At the end of the day, you are going to want the machine that does best for your workload," Bader said.