Data Centers

Determine your recovery priorities, then work backward to meet your goals. And don't trust your tape.

The Vatican spent 20 years and several million dollars restoring the Sistine Chapel frescoes, one delicate piece at a time. Done incorrectly, data restoration can be just as painstaking.

“The old method — creating full-tape backups, doing a full restoration of the system and then a full restoration of the data — is slow and painful,” says Dorian Cougias, CEO of Network Frontiers LLC, a consultancy in Oakland, Calif., and co-author of The Backup Book (Schaser-Vartan Books, 2003).

Just look at some recent litigation. The costs for restoring 72 e-mail tapes in the 2005 Zubulake v. UBS Warburg employment discrimination case came to $166,955. Some of the key e-mails were missing from the tapes, leading to a $29 million jury verdict. In 2005, Morgan Stanley was hit with a $1.45 billion judgment after it overwrote some archived e-mails and had trouble processing backup tapes containing others.

But even if you aren’t dealing with billion-dollar lawsuits, the quality and speed of backup and restoration determine business continuity, employee productivity and business survival when faced with disasters such as a hurricane, blackout or server crash, or an employee accidentally deleting a file. And then there are regulatory requirements. Companies are taking data protection dictates seriously by adopting disk-based backup technologies at a fast pace. Research firm IDC in Framingham, Mass., says the worldwide market for disk-based data-protection hardware and software will hit $8 billion this year and reach $50 billion by 2010.

But protecting data takes more than buying a product. It remains very much a strategic decision. Here’s how to set up a data center with backup and restoration in mind.

Set Recovery Requirements

There are numerous technologies and thousands of products available for backing up data. But most experts say technology is the wrong place to start looking.

“The problem that most organizations have is they are not asking the right question,” says Cougias. “We back up data to protect the continuity of our systems and our data.”

The place to begin, he says, is with an assessment of which data and systems need to be protected and what the cost would be if they were to go down. The ideal, of course, is fully redundant systems so service is never lost. Next best is to quickly restore data when a problem occurs. However, not all data and systems require the same level of protection.

“You have to have a good alignment between what is the risk to the business vs. the cost,” says Robert Stevenson, managing director of TheInfoPro Inc. in New York. “Business units and the storage teams need to work out the recovery solutions that provide the most value based on what is at risk at the time of failure.”

Once priorities are set, work backward through the steps needed to achieve that level of availability. Generally, you should look at two metrics: recovery time objective (RTO), which is how long a particular machine or service can be down, and recovery point objective (RPO), which describes how current the restored data must be and how often a backup must be done.

[Chart: How often do you back up your data center? Source: Computerworld's exclusive survey of 287 IT professionals, August 2006]

“What people should do, but often don’t, is start with the recovery requirements,” says W. Curtis Preston, vice president of data protection services at GlassHouse Technologies Inc., a storage consultancy in Framingham, Mass. “Determine RPO and RTO, then you can figure out the backup to meet that.”
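As a rough sketch of how those two numbers drive scheduling decisions, consider the check below. The service, interval and hour figures are hypothetical; the point is simply that a backup interval longer than the RPO, or a restore estimate longer than the RTO, fails before any technology is chosen.

```python
# Hypothetical sketch: test a proposed backup schedule against RPO/RTO targets.

def meets_objectives(backup_interval_h, estimated_restore_h, rpo_h, rto_h):
    """Return (rpo_ok, rto_ok) for a service.

    Worst-case data loss is the time since the last backup, so the backup
    interval must not exceed the RPO. Worst-case downtime is the estimated
    restore time, which must not exceed the RTO.
    """
    return backup_interval_h <= rpo_h, estimated_restore_h <= rto_h


# Example: nightly backups (24 h) for a service that tolerates 4 h of data
# loss and 8 h of downtime -- the schedule fails the RPO test.
rpo_ok, rto_ok = meets_objectives(backup_interval_h=24,
                                  estimated_restore_h=6,
                                  rpo_h=4, rto_h=8)
print(f"RPO met: {rpo_ok}, RTO met: {rto_ok}")  # RPO met: False, RTO met: True
```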

Cougias notes, however, that most organizations have trouble defining what their systems consist of. Take the case of SAP technologies. If a company backs up the server running SAP applications and databases and omits backup of the workstations and subapplications tied to it, the SAP systems won’t work properly once restored.

“There are two types of backups — backing up data and backing up documents,” Cougias says. “Backing up files and database records is easy. Backing up data means backing up all those things that make the system work.”

This includes the system state on Windows machines, Active Directory or Open Directory privileges, and configuration files for a RAID array. So you must define which devices belong to which system and the exact configuration for each system (and where it is stored), and only then figure out how to protect it.

“Backing up data means I have to get this box or set of boxes talking to each other before I can ever launch a document or write a report,” says Cougias.
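One way to pin down what a “system” actually consists of is a simple inventory kept alongside the backup plan. The sketch below is a hypothetical manifest, not a feature of any product; every host name, path and dependency is illustrative.

```python
# Hypothetical manifest: everything that must be restored for the "SAP"
# system to actually work, not just the database files.
sap_system = {
    "dependencies": ["Active Directory", "DNS"],   # privileges and name resolution
    "servers": ["sap-app-01", "sap-db-01"],
    "workstations": ["fin-ws-12", "fin-ws-13"],    # client installs tied to the system
    "config_files": [
        "/etc/raid.conf",                          # RAID array layout
        "system-state (Windows hosts)",            # registry, services, directory data
        "/usr/sap/profile/DEFAULT.PFL",
    ],
}

# Restoration order matters: dependencies first, then servers, then clients.
for tier in ("dependencies", "servers", "workstations", "config_files"):
    print(tier, "->", ", ".join(sap_system[tier]))
```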

Further complicating the matter is the fact that restored data often runs on different hardware than what is used at the primary data center. Whether done locally or at a remote facility, restoration must be tested on the equipment that will actually be used.

“You should conduct tests of the restore/recovery process using people who have never seen the plan and don’t have a clue as to its contents,” advises John Weinhoeft, who recently retired from the state of Illinois’ central computer facility, where he was responsible for system design and a 16TB disk storage environment serving 100 agencies. “If they can do it, the plan is good.”

Trials of Tape

Tape storage has been around for half a century, and it’s the backbone of many backup strategies. But it’s far from ideal. Tape backup is slow. Finding the right files to restore is even slower. Frequently, files can’t be recovered at all.

“Tape is increasingly being exposed as a substandard medium for backups,” says Simon Robinson, a storage analyst at The 451 Group, a San Francisco-based consultancy. “Users like it because it’s cheap, but it’s inherently unreliable and performs poorly. Tape still has its place as a longer-term archive but is being superseded by disk.”

According to The 451 Group, tape storage costs about 12 cents per gigabyte compared with 50 cents for secondary disk storage and $2.50 for primary disks. That makes disk economically viable for routine backups, leaving tape for off-site archiving.
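Applied to a hypothetical 10TB backup set, those per-gigabyte figures work out as follows:

```python
# Back-of-the-envelope media cost for a hypothetical 10 TB backup set,
# using The 451 Group's per-gigabyte figures quoted above.
capacity_gb = 10_000
cost_per_gb = {"tape": 0.12, "secondary disk": 0.50, "primary disk": 2.50}

for medium, price in cost_per_gb.items():
    print(f"{medium}: ${capacity_gb * price:,.0f}")
# tape: $1,200   secondary disk: $5,000   primary disk: $25,000
```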

“Many IT organizations use disk as the backup target to shrink their backup window or improve access times for restore,” says Lauren Whitehouse, an analyst at Enterprise Strategy Group Inc. in Milford, Mass. “Some organizations leverage virtual tape library solutions, which emulate tape devices to the backup application, and others are implementing the disk backup options of their backup solution.”

Conventional disk-backup technologies include RAID configurations, traditional backup applications, replication, mirroring and taking snapshots every few hours. Disk-based snapshots provide more current data for restoration than nightly backups and are easier to restore than tape.

“We recommend using disk-to-disk backup using block-level snap vaulting for operational backup,” says Joe Shields, director of systems engineering and operations at LightEdge Solutions Inc., a managed network, voice services and hosting company in Des Moines. “This provides the short recovery time objective and frequent recovery point objectives and is cost-effective.”

Disks, though, are also being used as an intermediate step in tape backups (disk-to-disk-to-tape backups). Preston says the problem with tape is often a mismatch between network and tape speed.

“You are not always able to supply the tape drive with enough data, so it stops and starts when trying to keep up with the slow incoming data rate,” says Preston. “This can even cause the backup to fail.”

Writing to disk first and then to tape allows a better match-up in data rates. Toronto’s York University, for example, uses disks to speed database backups.

“We will be able to do backups much faster from the server standpoint and then cycle it to tape during the day,” says Ramon Kagan, the university’s manager of Unix services. “This will save a lot of time.”
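The staging idea is straightforward: accept slow client writes onto disk, then stream the accumulated data to tape in one continuous pass so the drive never starves. The following sketch is a simplified illustration under assumed paths, not any vendor's implementation.

```python
# Simplified disk-to-disk-to-tape staging: accept slow client writes onto
# a disk buffer, then stream the whole buffer to tape in one fast pass.
import shutil
from pathlib import Path

STAGING = Path("/backup/staging")      # hypothetical fast disk pool
TAPE_MOUNT = Path("/backup/tape0")     # hypothetical tape target (e.g., via a VTL)

def stage_client_backup(client: str, files: list[Path]) -> None:
    """Slow phase: copy client files to the disk staging area as they arrive."""
    dest = STAGING / client
    dest.mkdir(parents=True, exist_ok=True)
    for f in files:
        shutil.copy2(f, dest / f.name)

def flush_staging_to_tape() -> None:
    """Fast phase: stream the staged data to tape so the drive never starves."""
    for client_dir in STAGING.iterdir():
        shutil.copytree(client_dir, TAPE_MOUNT / client_dir.name, dirs_exist_ok=True)
        shutil.rmtree(client_dir)      # reclaim staging space after the copy
```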

Dedupe Your Data

Over time, tape will fade in the enterprise, except in long-term storage. The 451 Group says start-ups working on virtual tape libraries (VTL), data deduplication and continuous data protection (CDP) have drawn some $500 million in seed money.

“VTL solutions offer IT organizations a more efficient way to integrate disk-to-disk backup,” says Whitehouse.

Data deduplication eliminates the need to store and back up numerous copies of documents or the 10,000 copies of an e-mail the CEO sent to all employees. “We have commonly seen 20-to-1 capacity reduction using data deduplication,” says Whitehouse.
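Under the hood, deduplication stores each unique chunk of data once and keeps only references for the repeats. Below is a minimal, hypothetical illustration of the idea that hashes whole files; real products typically deduplicate at the block or segment level.

```python
# Minimal whole-file deduplication sketch: store each unique payload once,
# keyed by its hash, and record only a reference for every duplicate.
import hashlib

store: dict[str, bytes] = {}    # hash -> unique payload
index: dict[str, str] = {}      # file name -> hash of its contents

def dedupe_store(name: str, payload: bytes) -> None:
    digest = hashlib.sha256(payload).hexdigest()
    store.setdefault(digest, payload)   # written once, no matter how many copies
    index[name] = digest

# 10,000 copies of the same CEO e-mail consume the space of one copy.
email = b"All-hands memo..."
for i in range(10_000):
    dedupe_store(f"mailbox_{i}/memo.eml", email)

print(len(index), "files indexed,", len(store), "unique payload(s) stored")
```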

CDP, on the other hand, covers a set of technologies that back up data as soon as it is written to the primary drive. The copy can be stored locally or at a disaster recovery facility. CDP can be harnessed to meet the most stringent RPOs.

“The ideal solution is to use CDP to mirror data to a remote recovery site, combined with full backups on a regular basis to a different off-site location,” says Weinhoeft. “However, this is also the most expensive solution.”
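Conceptually, CDP copies changes to a second location as they happen rather than on a schedule. The sketch below only approximates that behavior by polling for modified files and copying them to a replica; the paths are hypothetical, and true CDP products intercept writes at the block or volume level.

```python
# Rough, hypothetical sketch of the CDP idea: watch a protected directory and
# copy each file to a replica as soon as its modification time changes.
import shutil
import time
from pathlib import Path

PROTECTED = Path("/data/orders")        # hypothetical primary data
REPLICA = Path("/dr/orders_replica")    # hypothetical recovery copy

def replicate_changes(poll_seconds: float = 1.0) -> None:
    seen: dict[Path, float] = {}
    while True:
        for f in PROTECTED.rglob("*"):
            if f.is_file() and seen.get(f) != f.stat().st_mtime:
                target = REPLICA / f.relative_to(PROTECTED)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)             # propagate the change immediately
                seen[f] = f.stat().st_mtime
        time.sleep(poll_seconds)
```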

While each of these technologies is available as a point product, look for them to be merged as vendors seek to create more complete backup offerings. “VTL vendors are adding deduplication, and vice versa, while there’s also consolidation taking place in the snapshot/replication/CDP market,” says Robinson.

Search Out Bottlenecks

Whatever technology is deployed, experts emphasize that organizations must never lose sight of the overall purpose of backup — achieving continuous data and service availability. This necessitates an all-encompassing view. It isn’t just a matter of the source and target devices, but of the network as well.

For example, the use of server virtualization lets hardware run much nearer capacity during normal operations. Since backup is a resource-intensive operation, multiple virtual servers on the same box trying to use the same backup window can overwhelm the network interface card. Similarly, if live updates are made to a remote facility, you might want to use deduplication and a WAN acceleration appliance so the traffic can make it through the pipe. Also, there must be adequate bandwidth at a fail-over facility to accommodate not only the backups but also business traffic when the facility becomes the primary data center. This should be calculated and tested thoroughly.

“Make sure that you walk through the whole chain, from client to network to server to tape drive, to identify bottlenecks,” says Preston. “You may be surprised to find that the bottleneck is Gigabit Ethernet to the tape drive.”
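That walk-through usually comes down to arithmetic: the nightly data volume divided by the backup window gives the sustained rate every link must handle. A back-of-the-envelope check, with assumed sizes and link speeds:

```python
# Bottleneck check: can each link in the chain move the nightly backup
# inside its window? (Hypothetical data volume and link speeds.)
backup_set_gb = 4_000          # data to protect each night
window_hours = 8               # allowed backup window

# Rough effective throughput of each link, in MB/s.
links_mb_s = {
    "client disks": 200,
    "gigabit ethernet": 100,   # ~1 Gbit/s wire speed, before protocol overhead
    "tape drive (native)": 80,
}

required_mb_s = backup_set_gb * 1_000 / (window_hours * 3_600)
print(f"required sustained rate: {required_mb_s:.0f} MB/s")   # ~139 MB/s
for link, rate in links_mb_s.items():
    print(f"{link}: {rate} MB/s -> {'OK' if rate >= required_mb_s else 'bottleneck'}")
```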

Another factor is vendor mix. Even when using different technologies for different types of storage, try to reduce the number of products supported.

“If you can get the job done with one, it is better than getting the best of breed for Linux, Windows, etc., and winding up with three backup products,” says Preston.

These broad guidelines can help when developing a backup strategy for the data center. However, there is no cookie-cutter approach that will fit every company’s needs. And there is no substitute for communication with all involved to formulate a backup strategy that aligns closely with business needs, budgets and priorities.

“The data center manager needs to speak with stakeholders to gain a good understanding of the value of each set of data,” says Mike Karp, an analyst at Enterprise Management Associates in Boulder, Colo. “Once you understand the value, you can figure out the hardware and levels of service, how frequently to back up and the type of backup.”

See the complete Ultimate Backup Guide special report.

Robb is a Computerworld contributing writer.

Copyright © 2006 IDG Communications, Inc.
