Researchers: Databases still beat Google's MapReduce

Speed, efficiency of parallel SQL databases superior, paper shows

A team of researchers will release on Tuesday a paper showing that parallel SQL databases perform up to 6.5 times faster than Google Inc.'s MapReduce data-crunching technology.

Google bypassed parallel databases and invented MapReduce as a way to index the World Wide Web on its global grid of low-end PC servers. As of January 2008, Google was using MapReduce to process 20 petabytes of data a day.

In results of in-house tests published last November, Google used MapReduce running on 1,000 servers to sort 1TB of data in just 68 seconds.

Such results have won MapReduce and its open-source counterpart Hadoop many fans, who argue that the technology is already superior to the 40-year-old relational model for large-scale grids such as cloud-computing infrastructures, and will eventually render databases obsolete for other tasks as well.

Microsoft technical fellow David DeWitt and Michael Stonebraker, a database industry legend and chief technology officer at Vertica Systems Inc., who co-authored the paper, have previously argued that MapReduce lacks many key features that are standard in databases and represents, overall, a "major step backward."

The paper, titled "A Comparison of Approaches to Large-Scale Data Analysis," is sure to stoke heated discussion among data junkies over the technical merits of each approach. It will be published by the Association for Computing Machinery (ACM), a 92,000-member IT society, at its SIGMOD data-management conference, held June 29-July 2.

In addition to DeWitt and Stonebraker, five researchers from Brown University, Yale University, MIT and the University of Wisconsin co-authored the report.

In the paper, DeWitt and Stonebraker put meat on their argument by testing two 100-node parallel, "shared-nothing" database clusters, one running the column-based Vertica and another running a row-based database from "a major relational vendor," against a Hadoop MapReduce cluster of the same size. Each server had a 2.4-GHz Intel Core 2 Duo processor, 4GB of RAM and two 250GB SATA-I hard drives, ran 64-bit Red Hat Enterprise Linux, and was connected to the others by Gigabit Ethernet.

Their conclusion? Databases "were significantly faster and required less code to implement each task, but took longer to tune and load the data," the researchers write. Database clusters were between 3.1 and 6.5 times faster on a "variety of analytic tasks."

MapReduce also requires developers to write features or perform tasks manually that can be done automatically by most SQL databases, they wrote.
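To make that contrast concrete, here is a minimal sketch, not drawn from the paper itself, of the same grouped count written both ways: in the MapReduce style, the programmer hand-codes the map, shuffle and reduce phases, while in SQL the database plans and runs the equivalent GROUP BY on its own. The sample records and names are illustrative assumptions.

```python
import sqlite3
from collections import defaultdict

# Illustrative sample data: (server, status) pairs from a hypothetical log.
records = [("apache", 200), ("nginx", 404), ("apache", 500), ("nginx", 200)]

# --- MapReduce style: each phase written by hand ---
def map_phase(recs):
    # Emit (key, value) pairs, as a mapper would.
    for server, status in recs:
        yield server, 1

def shuffle(pairs):
    # Group values by key -- plumbing the programmer or framework must supply.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each group of values.
    return {key: sum(values) for key, values in groups.items()}

mr_counts = reduce_phase(shuffle(map_phase(records)))

# --- SQL: the database performs the grouping and aggregation itself ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (server TEXT, status INTEGER)")
conn.executemany("INSERT INTO log VALUES (?, ?)", records)
sql_counts = dict(
    conn.execute("SELECT server, COUNT(*) FROM log GROUP BY server")
)

print(mr_counts)   # both approaches yield {'apache': 2, 'nginx': 2}
print(sql_counts)
```

The point of the sketch is proportion, not performance: the hand-rolled version spends most of its lines on mechanics that the single SQL statement leaves to the query planner.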

MapReduce, which imposes no schema or other constraints on its data, may be "well suited for development environments with a small number of programmers and a limited application domain," they said. "This lack of constraints, however, may not be appropriate for longer-term and larger-sized projects."

Database industry analyst Curt Monash agreed with the results. "The results are pretty clear in favor of databases," Monash said. "Databases are more mature products."

The researchers note about a dozen parallel database vendors, including Teradata, Aster Data, Netezza, DATAllegro (now Microsoft), Dataupia, Vertica, ParAccel, Hewlett-Packard, Greenplum, IBM and Oracle.

The results reinforced Monash's belief that MapReduce was superior only for limited kinds of tasks, such as the text indexing and searching Google does, or data mining, he said.

Otherwise, "using MapReduce makes sense for most organizations only when it would otherwise be awkward to use a SQL database," he said.

The researchers did allow that parallel databases, which can be set up in large-scale grids that crunch hundreds of terabytes or even petabytes of data, were "much more challenging" than Hadoop to install and configure properly. Loading data into MapReduce or Hadoop was also three times faster than into Vertica, and 20 times faster than the unnamed database, they wrote.

The researchers defend basing their tests on 100-server clusters, rather than the 1,000-server clusters used by Google. "The superior efficiency of modern [databases] alleviates the need to use such massive hardware on data sets in the range of 1-2 PB," they wrote. "Since few data sets in the world even approach a petabyte in size, it is not at all clear how many MapReduce users really need 1,000 nodes."
