
Opinion: Demystifying de-duplication

De-duplication can be applied anywhere there is a significant amount of data commonality

By Jim Damoulakis
August 6, 2008 12:00 PM ET

Computerworld - Of the assortment of technologies swarming around the storage and data-protection space these days, the one that can be counted on to garner both great interest and plenty of questions among users is de-duplication. The interest is understandable: the potential value proposition, in terms of reduced storage capacity requirements, is at least conceptually on a par with the ROI of server virtualization. The win-win proposition of providing better services (e.g., disk-based recovery) while reducing costs is undeniably attractive.

However, while the benefits are obvious, the road to get there isn't necessarily as clear. How does one decide to adopt a particular technology when that technology manifests itself in so many different forms? De-duplication, like compression before it, can be incorporated into a number of different product types. While by no means a complete list, the major options for our purposes include backup software, NAS storage devices, and virtual tape libraries.

Even within these few categories, there are dramatic differences in how de-duplication is implemented, with each offering having its own benefits. The scorecard of feature trade-offs includes the following (a brief illustrative sketch of the core mechanism follows the list):

  • Source vs. target de-duplication
  • Inline vs. postprocessing
  • Global vs. local span
  • Single- vs. multiple-head processing
  • Indexing methodology
  • Level of granularity
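
To make these trade-offs concrete, here is a minimal sketch, in Python, of fixed-block, inline de-duplication. It is illustrative only, not any particular vendor's implementation: each chunk is fingerprinted and stored only if that fingerprint isn't already in the index. The chunk size corresponds to the level of granularity, the dictionary stands in for the indexing methodology, and whether that index is shared across appliances or kept per node is the global-vs.-local question.

```python
import hashlib

CHUNK_SIZE = 8 * 1024  # granularity: smaller chunks find more duplicates but grow the index


def dedupe_stream(data, index):
    """Inline, fixed-block de-duplication sketch (illustrative only).

    Splits the incoming data into fixed-size chunks, fingerprints each chunk,
    and stores only chunks whose fingerprint is not already in the index.
    Returns the ordered list of fingerprints (the "recipe") needed to
    reconstruct the original data.
    """
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in index:   # new data: store the chunk once
            index[fingerprint] = chunk
        recipe.append(fingerprint)     # duplicates become index references
    return recipe


def restore_stream(recipe, index):
    """Rebuild the original data from its recipe of fingerprints."""
    return b"".join(index[fingerprint] for fingerprint in recipe)
```

Run on the backup client, this logic would amount to source-side de-duplication; run on the device receiving the stream, target-side. Writing the raw data to disk first and fingerprinting it afterward would be the post-processing variant of the same idea.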

As with any set of products, these trade-offs reflect optimization for specific design or market targets: high performance, low cost, enterprise, SMB, etc. For more detail on the range of de-duplication options and their implications, you may want to check out my colleague Curtis Preston's Backup Central blog.

Until recently, one aspect of de-duplication that was generally unquestioned was its focus: secondary data, particularly backup. However, there are growing signs that this too is changing. In theory, de-duplication can be applied anywhere there is a significant amount of data commonality — which is why backup is such a good fit.

However, if we look around for more examples of high data commonality, one area that comes to mind is virtualized server environments. Consider the number of nearly identical virtual C: drives in a VMware server cluster, for example. Recently, NetApp has been leading the way among storage vendors in suggesting de-duplication for primary storage in these environments. In fact, it has been steadily expanding its support of de-duplication, initially offering the technology on its secondary NearStore platforms, then on its primary FAS line, and as of last week on its V-Series NAS gateways, where it can de-duplicate data stored on EMC, HDS, HP and other vendors' arrays.
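
As a back-of-the-envelope illustration (the sizes and the 4KB block size below are made up, and this is not NetApp's actual algorithm), fingerprinting the blocks of a set of guest images cloned from a common base shows why such environments de-duplicate so well: the shared operating-system blocks are stored once, and only each guest's small unique delta adds to the total.

```python
import hashlib
import os

BLOCK = 4 * 1024  # hypothetical 4KB block size


def block_fingerprints(image):
    """Fingerprint every fixed-size block of a simulated virtual disk image."""
    return {hashlib.sha256(image[i:i + BLOCK]).hexdigest()
            for i in range(0, len(image), BLOCK)}


# Hypothetical cluster: 20 guests cloned from one base image, each with a
# small amount of guest-specific data (hostnames, logs, patches) appended.
base = os.urandom(2 * 1024 * 1024)                       # the shared "C: drive" contents
guests = [base + os.urandom(64 * 1024) for _ in range(20)]

unique_blocks = set()
logical_blocks = 0
for image in guests:
    logical_blocks += (len(image) + BLOCK - 1) // BLOCK  # blocks the guests think they wrote
    unique_blocks |= block_fingerprints(image)           # blocks actually worth storing

print(f"blocks written by the guests:  {logical_blocks}")
print(f"unique blocks actually stored: {len(unique_blocks)}")
print(f"approximate reduction: {logical_blocks / len(unique_blocks):.1f}:1")
```

With these particular made-up sizes the script reports roughly a 12.7:1 reduction; real-world ratios depend entirely on how much the guests diverge from their common base over time.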

Of course, for many, this is uncharted territory, and the performance and management impact needs to be better understood. But given the higher cost of primary storage vs. secondary, the potential to achieve a 20:1 reduction in storage consumed, even for just a portion of the environment, is quite tempting.
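
To put hypothetical numbers on that (none of these figures come from the column): even if only the virtualized slice of a primary-storage estate achieves the 20:1 ratio, the absolute capacity saved, priced at primary-storage rates, is significant, even though the blended reduction across the whole environment looks far more modest.

```python
# Hypothetical numbers, for illustration only.
total_tb = 100       # total primary capacity
dedupable_tb = 30    # the virtualized portion assumed to de-duplicate well
ratio = 20           # assumed 20:1 reduction on that portion

physical_tb = (total_tb - dedupable_tb) + dedupable_tb / ratio
saved_tb = total_tb - physical_tb

print(f"physical capacity needed: {physical_tb:.1f} TB")            # 71.5 TB
print(f"capacity saved:           {saved_tb:.1f} TB")                # 28.5 TB
print(f"blended reduction:        {total_tb / physical_tb:.2f}:1")   # about 1.40:1
```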

Jim Damoulakis is chief technology officer of GlassHouse Technologies Inc., a leading provider of independent storage services. He can be reached at jimd@glasshouse.com.



