Backups as a source for data mining

Hey, backup guys. How would you like to finally get some respect and become one of the go-to data management experts for your company?

If you’re a backup person and value your job, you most likely back up all of your company’s most important data. And if you have recently implemented dedupe in your backup environment, there may be an opportunity to leverage those backups as a source of useful information. When you combine disk-based backup with data deduplication, the result is a single instance of all the valuable data in the organization. I can’t think of a better, more complete data source for mining.

With the right tools, the backup management team could surface all kinds of useful information for the benefit of the organization, and the business value would be compelling: the data is already there, and the storage has already been purchased. The recent move away from tape backup to disk-based deduplication solutions is what makes all of this possible.

Think about it. The consolidated backup of all the data includes not only all unstructured file system data, but also the structured database data from every backed-up platform. The good news here is that the deduplication repository includes not only a single instance of all the data, but also an index of what is being stored and how many copies are being backed up. (This information can be gathered simply by counting the links to the objects in the repository.)

IT administrators can use the number of links to an object to find out how many copies of that object are out there in the wild. Then they can use the metadata about the object itself to search for other information that would normally be too difficult to find if they had to search across all the systems, files, databases and other applications spread across the organization, not to mention any remote locations.
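
To make this concrete, here is a minimal sketch in Python of how a link-counting, single-instance index might work under the hood. This is not any vendor’s actual repository format; DedupeIndex, ingest and copy_count are names I made up for illustration.

```python
import hashlib
from collections import defaultdict

class DedupeIndex:
    """Toy single-instance store: one blob per unique content hash,
    plus a list of (host, path) references for every backed-up copy."""

    def __init__(self):
        self.blobs = {}                # hash -> content, stored once
        self.refs = defaultdict(list)  # hash -> [(host, path), ...]

    def ingest(self, host, path, content):
        digest = hashlib.sha256(content).hexdigest()
        if digest not in self.blobs:   # store the first copy only
            self.blobs[digest] = content
        self.refs[digest].append((host, path))  # every copy adds a link
        return digest

    def copy_count(self, digest):
        """Copies in the wild == number of links to the stored blob."""
        return len(self.refs[digest])

# Example: the same file backed up from three machines
idx = DedupeIndex()
h = idx.ingest("laptop-01", "C:/Users/a/report.pdf", b"...bytes...")
idx.ingest("laptop-02", "C:/Users/b/report.pdf", b"...bytes...")
idx.ingest("fileserver", "/shares/docs/report.pdf", b"...bytes...")
print(idx.copy_count(h))   # -> 3, from one stored blob
```

The point is that the copy count falls out of the normal dedupe bookkeeping for free; no extra scanning is required.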

Data mining may soon become a standard feature of backup

I believe the next big thing in backup will be a business use case to mine the data being stored for useful information. It’s a shame that all that data just sits there, wasted, unless a restore is required. It should be leveraged for other, more important things. For example, can you tell me how many instances of any single file are being stored across your organization? Probably not. But if it’s being backed up to a single-instance repository, the repository stores one copy of that file object, and the index has the links and metadata describing where the file came from and how many redundant copies exist.

By simply providing a search function into the repository, you would instantly be able to find out how many duplicate copies exist for every file you are backing up, and where they are coming from. Knowing this would give you a good idea of where to go to delete stale or useless data. After all, the best way to solve the data sprawl issue in the first place is to delete any data that is duplicate or otherwise not needed or valuable. Knowing which data is a good candidate for deletion has always been the problem.
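
Given even a crude export of that index, the duplicate report is almost trivial to produce. The sketch below assumes a hypothetical record layout of hash, host, path and size; a real product would expose this through its own catalog or API, if at all.

```python
from collections import defaultdict

# Hypothetical index records; the field names are illustrative,
# not any product's actual schema.
index = [
    {"hash": "a1", "host": "laptop-01", "path": "C:/Users/a/report.pdf", "size": 2_000_000},
    {"hash": "a1", "host": "laptop-02", "path": "C:/Users/b/report.pdf", "size": 2_000_000},
    {"hash": "b2", "host": "fileserver", "path": "/shares/notes.txt", "size": 4_096},
]

def duplicate_report(records):
    """Group backup references by content hash and report every object
    that exists in more than one place -- candidates for cleanup."""
    by_hash = defaultdict(list)
    for rec in records:
        by_hash[rec["hash"]].append(rec)
    for digest, copies in by_hash.items():
        if len(copies) > 1:
            locations = [f"{c['host']}:{c['path']}" for c in copies]
            print(f"{digest}: {len(copies)} copies -> {', '.join(locations)}")

duplicate_report(index)  # a1: 2 copies -> laptop-01:..., laptop-02:...
```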

Being able to visualize the data from the backups would provide some unique insights. As an example, using the free WinDirStat tool, I noticed I am backing up multiple copies of my archived Outlook file, which in my case is more than 14GB in size. In an organization with hundreds or thousands of people like me, that adds up fast.
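
The back-of-the-envelope math is sobering. Here is a small sketch, again using the same hypothetical record layout, that estimates how much redundant capacity those copies would consume if stored in full.

```python
from collections import defaultdict

def wasted_space(records):
    """Redundant bytes across the estate: size * (copies - 1) per object.
    This is what dedupe saved -- and what duplicates cost on primary storage."""
    copies = defaultdict(int)
    size = {}
    for rec in records:
        copies[rec["hash"]] += 1
        size[rec["hash"]] = rec["size"]
    return sum(size[h] * (n - 1) for h, n in copies.items())

# 300 users each keeping a copy of the same 14GB Outlook archive:
pst = 14 * 1024**3
records = [{"hash": "pst1", "size": pst} for _ in range(300)]
print(f"{wasted_space(records) / 1024**4:.1f} TiB redundant")  # ~4.1 TiB
```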

Are you absolutely sure you are not storing and backing up anyone’s MP3 files? How about system backups? Do any of your backups contain unneeded swap files? How about stale log dumps from the database administrator (DBA) community? What about useless temp tablespace data from the Oracle crowd? Are you spending money on other solutions to find this information? Are you purchasing expensive tools for email compliance or audits? The backup data could become a useful source for data mining, compliance and data archiving, and it could also bring efficiency to data storage and data movement across the entire organization.
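
Flagging that kind of junk could start as simple pattern matching against the paths recorded in the index. The patterns below are illustrative starting points, not a definitive list; you would tune them to your own environment.

```python
import fnmatch

# Illustrative patterns for data that usually has no business being
# in a backup set; adjust for your own environment.
SUSPECT_PATTERNS = ["*.mp3", "pagefile.sys", "*.swp", "*.log", "*.dmp", "*.trc"]

def flag_suspects(records):
    """Yield backup references whose file name matches a junk pattern."""
    for rec in records:
        # Take the file name from either Windows or Unix style paths
        name = rec["path"].rsplit("/", 1)[-1].rsplit("\\", 1)[-1].lower()
        for pattern in SUSPECT_PATTERNS:
            if fnmatch.fnmatch(name, pattern):
                yield rec["host"], rec["path"], pattern
                break

records = [
    {"host": "laptop-07", "path": "C:/Users/j/Music/track01.mp3"},
    {"host": "db-01", "path": "/u01/oradata/temp/sort_area.dmp"},
]
for host, path, why in flag_suspects(records):
    print(f"{host}:{path} matched {why}")
```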

Additional benefits for using backup for big data analysis

Another good use case would be using snapshots of logical unit numbers (LUNs) as the backup source. The space-optimized LUN snapshots could be mounted as recent data sources for testing and development, or perhaps as extract-and-load sources for other big data analytics engines.

If you are already backing up to disk, the good news is you already own the storage. You should leverage that less expensive storage tier for new uses that just might provide a competitive edge, or at minimum reduce your IT costs. Backup professionals might finally get their day in the sun as not only IT protection specialists, but also as a new, useful data management team. IT changes fast, which is why I love it. I believe some new and exciting things will be happening in this space in the next few years, so keep your eyes open.

This article is published as part of the IDG Contributor Network.