Open source MongoDB gets richer query commands

More sophisticated MongoDB queries could dramatically improve system performance, 10gen claims

In an effort to improve how MongoDB supplies its data to external applications, MongoDB keeper 10gen has extended the open source data store's query language, providing developers with more sophisticated ways to extract and transform data.

"We've focused on making it easier for developers to write code against MongoDB," said Jared Rosoff, 10gen's product marketing liaison, of the new release. "We found that a lot of people were having trouble doing things that are relatively simple in SQL."

The fresh release of MongoDB 2.2 includes a new batch of query operators and expressions, as well as a pipeline-processing framework that will allow MongoDB to process data itself in serial multistep procedures. 10gen calls this collection of technologies a real-time aggregation framework.

MongoDB 2.2 has also been outfitted with a new locking mechanism and can now tag individual database shards as well.

Overall, this release -- the first major one since MongoDB 2.0 a year ago -- includes 600 new tweaks. (Production-ready releases of MongoDB are even numbered, while the odd-numbered versions, such as 2.1, are for internal testing and development.)

MongoDB, like most NoSQL data storage software, has been criticized for offering only simple methods of retrieving data, compared to the rich set of commands SQL databases provide. With prior versions of MongoDB, any processing against a set of queried records had to be done by external application programming, or through the MapReduce processing framework, which was not the best fit for the data store for various reasons, 10gen and outside observers have noted.

Designed as a simple distributed document store, MongoDB holds the complicated application states commonly found in busy online transactional systems. Disney, The New York Times, Eventbrite, Badgeville, Foursquare and other popular Internet services deploy MongoDB to store their user and operational data. Data is stored in the JSON (JavaScript Object Notation) data interchange format.

The new aggregation framework adds additional MongoDB queries, allowing more data processing to be done by MongoDB itself, which could be a time saver.

"If I want to compute an average sales price from all the objects inside a terabyte of data, the alternative would be to extract a terabyte of data into my application and construct the average myself," Rosoff said. "With the aggregation framework, I can construct a pipeline that will run inside the database and I'll get back an average. I don't need to transfer a terabyte of data."
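Rosoff's average-price scenario can be sketched in a few lines. The pipeline below is just a JSON-style document using the $group stage and $avg accumulator that ship with the 2.2 framework; the "sales" collection and its "price" field are hypothetical names for illustration, and the local simulation only shows what the server would compute on its side.

```python
# The pipeline is a plain document; $group with the $avg accumulator
# asks the server to compute the mean, so only one small result
# document -- not the raw data -- crosses the network.
pipeline = [
    {"$group": {"_id": None, "avgPrice": {"$avg": "$price"}}}
]

# Against a real server this would be sent with a driver, e.g.
#   db.sales.aggregate(pipeline)
# Here we simulate the $group/$avg stage on in-memory documents
# to show the shape of the result.
def simulate_avg(docs, field):
    values = [d[field] for d in docs if field in d]
    return {"_id": None, "avgPrice": sum(values) / len(values)}

docs = [{"price": 10.0}, {"price": 20.0}, {"price": 30.0}]
print(simulate_avg(docs, "price"))  # {'_id': None, 'avgPrice': 20.0}
```

The point of the design is exactly what Rosoff describes: the client ships the small pipeline document to the database, and the database ships back only the aggregate.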

The new operators will allow developers to sort and aggregate queried data into different groups and run operations against this data. A new set of mathematical expressions can add, subtract, multiply and perform other simple calculations. A set of logical operators can create user-defined computed fields. Other expressions can work on strings and date and time data. The framework provides a Unix-like pipes capability, which developers and administrators can use to build chains of commands to filter and sort data results.
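The Unix-pipe analogy can be made concrete: each stage consumes the documents the previous stage emits. The sketch below chains $match (filter), $project with a $multiply expression (a computed field), and $sort, all of which are part of the 2.2 framework; the field names are hypothetical, and the simulation merely mimics the three stages locally.

```python
# Each stage feeds the next, like a Unix pipeline.
pipeline = [
    {"$match": {"qty": {"$gt": 5}}},                             # keep docs with qty > 5
    {"$project": {"total": {"$multiply": ["$price", "$qty"]}}},  # computed field
    {"$sort": {"total": -1}},                                    # sort descending
]

# Local simulation of the three stages on in-memory documents:
def run(docs):
    stage1 = [d for d in docs if d["qty"] > 5]                       # $match
    stage2 = [{"total": d["price"] * d["qty"]} for d in stage1]      # $project
    return sorted(stage2, key=lambda d: d["total"], reverse=True)    # $sort

docs = [{"price": 2.0, "qty": 10}, {"price": 5.0, "qty": 3}, {"price": 1.0, "qty": 8}]
print(run(docs))  # [{'total': 20.0}, {'total': 8.0}]
```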

To build a query, "the client constructs a JSON document that represents a pipeline that can be sent to the database," Rosoff said.

Beyond the aggregation framework, MongoDB comes with a number of other significant new features. One is tag-aware sharding, which will allow organizations to distinguish different groups of nodes, such as all the nodes that run in one data center, or all the nodes that are saving replicas. A new locking mechanism could help performance, given that the entire data store is no longer locked for each single read or write operation.

One organization looking forward to the new release is Badgeville, which uses the data store for the online awards service it runs for its customers. Badgeville Chief Technology Officer Wedge Martin anticipates using the aggregation framework. "It is a nicely written framework to pull out data and do aggregation instead of running MapReduce jobs," Martin said.

The tagging feature may also prove useful to Badgeville. For instance, replica nodes can be identified so that computationally intensive analytic queries run only against those nodes, eliminating the load such jobs would put on the front-line production nodes. "You never impact production performance," Martin said. Geographically based load balancing can also be done using a set of tags.

"We've been very happy with MongoDB," Martin said.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson.

Copyright © 2012 IDG Communications, Inc.
