Researchers: Databases still beat Google's MapReduce
Speed, efficiency of parallel SQL databases superior, paper shows
Computerworld - A team of researchers will release on Tuesday a paper showing that parallel SQL databases perform up to 6.5 times faster than Google Inc.'s MapReduce data-crunching technology.
Google bypassed parallel databases and invented MapReduce as a way to index the World Wide Web on its global grid of low-end PC servers. As of January 2008, Google has used MapReduce to process 20 petabytes of data a day.
In results of in-house tests published last November, Google used MapReduce running on 1,000 servers to sort 1TB of data in just 68 seconds.
Such results have won MapReduce and its open-source counterpart Hadoop many fans, who argue that the technology is already superior to 40-year-old relational database technology for large-scale grids such as cloud-computing infrastructures, and will eventually render databases obsolete for other tasks as well.
Microsoft technical fellow David DeWitt and Michael Stonebraker, a database industry legend and chief technology officer at Vertica Systems Inc., who co-authored the paper, have previously argued that MapReduce lacks many key features already standard to databases and was generally a "major step backward."
The paper, titled "A Comparison of Approaches to Large-Scale Data Analysis," is sure to stoke heated discussion among data junkies over the technical merits of each approach. It will be published by the Association for Computing Machinery (ACM), a 92,000-member IT society, in the June 29-July 2 issue of its SIGMOD Record journal of data management.
In addition to DeWitt and Stonebraker, five researchers from Brown University, Yale University, MIT and the University of Wisconsin co-authored the report.
In the paper, DeWitt and Stonebraker put meat on their argument by testing two 100-node parallel, "shared-nothing" database clusters, one running the column-based Vertica and another running a row-based database from "a major relational vendor," against a MapReduce cluster of the same size and configuration. Each server had a 2.4-GHz Intel Core 2 Duo processor, 4GB of RAM and two 250GB SATA-I hard drives, ran 64-bit Red Hat Enterprise Linux, and was connected via Gigabit Ethernet.
Their conclusion? Databases "were significantly faster and required less code to implement each task, but took longer to tune and load the data," the researchers write. Database clusters were between 3.1 and 6.5 times faster on a "variety of analytic tasks."
MapReduce also requires developers to write features or perform tasks manually that can be done automatically by most SQL databases, they wrote.
MapReduce may be "well suited for development environments with a small number of programmers and a limited application domain," they said. "This lack of constraints, however, may not be appropriate for longer-term and larger-sized projects."