
Cassandra 1.2 geared to 'fat servers'

Updated Apache database offers a new approach to managing data across server nodes

By Joab Jackson
January 3, 2013 03:20 PM ET

IDG News Service - Adjusting to changes in corporate hardware buying habits, the Apache Software Foundation's Cassandra NoSQL distributed database has been updated to make better use of larger servers, through the introduction of virtual nodes and configurable disk failure policies.

The newly released Cassandra 1.2 also features the ability to perform atomic batch operations, and comes with a new version of the Cassandra Query Language, CQL3.
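CQL3, finalized in this release, is best known for compound primary keys, which let developers express Cassandra's wide storage rows as conventional-looking tables. A brief sketch, using a hypothetical table rather than anything from the release itself:

    -- 'user_id' is the partition key; 'posted_at' is a clustering
    -- column that orders rows within each partition.
    CREATE TABLE user_posts (
        user_id   text,
        posted_at timestamp,
        body      text,
        PRIMARY KEY (user_id, posted_at)
    );

    -- Rows within a partition come back in clustering order, so the
    -- latest posts for one user are a single, inexpensive slice.
    SELECT body FROM user_posts
    WHERE user_id = 'jsmith'
    ORDER BY posted_at DESC
    LIMIT 10;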

Traditionally, "Cassandra's sweet spot has been in scaling out across a lot of relatively lightweight machines," said Jonathan Ellis, chair of the Apache Cassandra project and a co-founder of DataStax, which offers commercial support for the software.

Recently, however, more organizations have been buying "denser" servers with more memory and hard drive space, because such servers now offer the best price-performance value, Ellis said. Much of the work on this update has therefore gone into better supporting those machines.

First developed internally at Facebook and released to the public in 2008, Cassandra was designed to store massive data sets across multiple servers. Adobe, Cisco, Disney, eBay, IBM, Netflix, Reddit, Spotify, Twitter and Williams-Sonoma all use the technology.

The new version of the software features the ability to create virtual nodes (vnodes), designed to streamline the recovery process should an individual server in a Cassandra cluster fail. Vnodes should also improve performance in general.

Virtual nodes were one of the chief features of Amazon's Dynamo distributed data store, on which Cassandra was modeled, but Cassandra's developers initially opted for a simpler architecture that assigned a single node to each server.

The new virtual node technology should simplify the process of managing clusters, particularly when adding and rebuilding individual nodes. With vnodes, each server can hold multiple nodes.

Because an individual vnode occupies only a fraction of a server's storage, a failed server's data can be rebuilt from replicas spread across many other machines at once, rather than copied back from a single server, making recovery considerably faster.

Smaller nodes spread across a greater number of servers also balance the workload more evenly among all the machines in a cluster.

"Each virtual node is managed by one Java process per machine, so we're not adding a lot of operating system processes. We're just virtualizing out that storage responsibility," Ellis said.

Another new feature, atomic batching, should help organizations that require transactional integrity across business processes, such as an online merchant that needs to ensure orders are captured even if a component like a hard drive fails in the middle of a transaction. Previously, developers had to build safeguards such as retry mechanisms into their code to guarantee that integrity.
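In CQL3, a batch now writes its statements to a durable log before applying them, so the grouped updates either all take effect or none do. A sketch against hypothetical order tables:

    -- Both inserts succeed or fail together; in 1.2, batches are
    -- atomic (logged) by default.
    BEGIN BATCH
      INSERT INTO orders (order_id, user_id, total)
        VALUES (1001, 'acme', 99.95);
      INSERT INTO orders_by_user (user_id, order_id)
        VALUES ('acme', 1001);
    APPLY BATCH;

Applications that prefer the older, faster non-atomic behavior can still opt out with BEGIN UNLOGGED BATCH.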
