Storage virtualization isn't new. It has been done for decades on mainframes, and almost every storage vendor claims to offer virtualization across at least some of its products. By creating a single view of multiple storage devices, virtualization can simplify and thus lower the cost of storage management. It can also reduce the number of new arrays a company buys by combining data from multiple servers or applications into a shared pool of storage. That provides an alternative to buying more storage for one overtaxed server while disk space sits empty on the server beside it.
But storage managers need to remember that not all virtualization is created equal. In many cases, a vendor's virtualization offering works only (or works best) on its own hardware, while most organizations own storage hardware from many vendors. Some virtualization products work only on file-level devices, which store and retrieve information as files, while others work at the block level, addressing the smallest units in which data can be stored and retrieved on storage devices.
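The distinction is easier to see in code. In this rough Python sketch (the paths, device name and block size are illustrative, not tied to any vendor's product), file-level access requests data by name, while block-level access addresses fixed-size, numbered blocks on a raw device:

    # Illustrative contrast between file-level and block-level access.
    # The block size and paths are hypothetical examples.

    BLOCK_SIZE = 512  # bytes; a common block size for disk devices

    # File-level: the storage device understands names and paths.
    def read_file(path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()

    # Block-level: the storage device only understands numbered blocks.
    # A file system (or a virtualization layer) maps files onto blocks.
    def read_block(device: str, block_number: int) -> bytes:
        with open(device, "rb") as dev:
            dev.seek(block_number * BLOCK_SIZE)
            return dev.read(BLOCK_SIZE)

A file-level virtualization product can pool only devices that speak the first idiom; a block-level product works beneath the file system and doesn't care what's stored in the blocks.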
Some vendors tout the benefits of doing virtualization on the server, while a growing number claim it should be done on the "fabric" that links storage devices. But such technical arguments "typically focus on details where one vendor can differentiate himself from another," says Randy Kerns, a partner at Evaluator Group Inc. in Greenwood Village, Colo. He suggests storage customers develop a strategy around their near- and long-term business needs.
Users shouldn't look at virtualization as a product or a feature in its own right, but as an enabling technology to solve business problems, says Steve Kenniston, a technology analyst at Enterprise Storage Group Inc. in Milford, Mass.
For the fastest possible data backup, he says, a company might choose to perform virtualization on a dedicated server such as a network-attached storage appliance that's optimized for serving up logical storage volumes. If a company wants to flexibly move data among, say, servers running different operating systems, it might instead opt for fabric-based virtualization in which switches linking the storage devices have the intelligence to reformat data as needed.
Similarly, a company building a new storage infrastructure has the luxury of choosing switches and software that support fabric-based virtualization from the start, Kenniston says, whereas one that has invested in expensive storage-area network (SAN) switches might opt for lower-cost, if somewhat slower, virtualization software running on a host.
Here's how three customers focused on the business problems and, as a result, are seeing the benefits of virtualization today.
Unused Terabytes
Customer: Philadelphia Stock Exchange Inc.
Problem: Unable to reallocate storage without unacceptable application downtime
Technology: Foundation Suite, Volume Manager from Veritas Software Corp.
Virtualization has solved one big problem for Tony Catone, director of the systems architecture group at the Philadelphia Stock Exchange. But he has two more challenges he's hoping virtualization vendors will tackle, and soon.
The problem virtualization has eliminated is underuse of the terabytes of storage that were direct-attached to the stock exchange's application servers two years ago. It would have taken "days of planning and hours of downtime" to reallocate storage among critical servers, Catone says, so the exchange simply added more storage to each server as needed. That kept vital applications running but was inefficient because the exchange was buying new storage for some servers while storage sat unused on others.
By moving to Brocade switches, EMC Symmetrix systems and Hitachi Data Systems SANs, Catone has increased storage utilization from 50% to 75% and saved $500,000 by reassigning unused storage among applications rather than buying new disks. "We just reallocated 2TB of storage the other week," he says. "It took all of an hour to plan and 15 minutes to execute."
The SANs now provide storage for the stock exchange's Tier 1 transactional applications, as well as Tier 2 applications such as decision support. Tier 3 consists of archival data stored on tape. Next year, Catone says he plans to move the Tier 3 data to SCSI or Advanced Technology Attachment-based disk drives to provide relatively low-cost, but rapid, retrieval of archived data, since "there are times when we want to be able to [recover] data within 15 minutes or a half hour from five years ago."
The second capability Catone wants from virtualization is automated migration of 12TB of data among different storage systems based on preset criteria such as the age of the data, the capacity of the disks on which it's stored, or the file or data type. He would like virtualization-based storage management tools to perform this function, freeing the highly paid application developers and database administrators who handle it now to spend their time building revenue-enhancing applications.
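The logic Catone describes is simple to express; what's missing is a product that runs it reliably across hardware from multiple vendors. A minimal Python sketch of such an age-based policy engine (the tier paths, age thresholds and policy table are all hypothetical):

    # Hypothetical policy-driven migration: move files to cheaper tiers
    # as they age. Tiers, thresholds and paths are illustrative only.
    import os
    import shutil
    import time

    DAY = 86400  # seconds
    POLICIES = [
        # (minimum age in days, destination tier), oldest rule first
        (365 * 2, "/mnt/tier3_archive"),   # > 2 years -> archival disk
        (90,      "/mnt/tier2_midrange"),  # > 90 days -> midrange disk
    ]

    def migrate_by_age(source_dir: str) -> None:
        now = time.time()
        for name in os.listdir(source_dir):
            path = os.path.join(source_dir, name)
            if not os.path.isfile(path):
                continue
            age_days = (now - os.path.getmtime(path)) / DAY
            for min_age, dest in POLICIES:
                if age_days >= min_age:
                    shutil.move(path, os.path.join(dest, name))
                    break  # first (oldest) matching policy wins

    migrate_by_age("/mnt/tier1_production")

A real product would also have to track where each file went, preserve access paths for applications and apply rules on capacity and data type, which is where the hard engineering lies.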
Automated, policy-driven migration would require virtualization to solve Catone's third problem: sharing storage among servers running different operating systems. He hopes that virtualization performed on the fabric of switches and other hardware in storage networks will eventually mask the differences between files used by different operating systems.
Improved Storage Prescription
Customer: Denver Health Medical Center
Problem: Underutilized direct-attached storage; need for seamless disaster recovery
Technology: IP SAN from LeftHand Networks Inc.
Jeff Pelot has already seen the reality of virtualization: he watched four Network Storage Modules (NSM) from LeftHand Networks come up as he installed an IP SAN at the Denver Health Medical Center.
"We plugged the things in, turned them on; they came up and recognized each other as a contiguous storage device, even though they were physically separated by a couple of buildings," recalls Pelot, the health care provider's chief technology officer.
The 3,700-employee hospital currently has a split environment of Fibre Channel SANs in the form of two Clariion products from Hopkinton, Mass.-based EMC Corp. They were purchased in 2001 and 2002 to escape the cycle of buying more disks whenever one of the medical center's 97 servers failed and to keep up with a 50%-per-year growth in storage demand. Pelot put 3TB of data from critical patient-care systems, as well as e-mail and other departmental applications, on the Clariions but kept a beta-testing relationship with Boulder-based LeftHand.
Although the Clariions provided more efficient provisioning and improved data protection and recovery compared with direct-attached storage, Pelot hoped IP SANs could provide similar performance at a lower price, as well as simplified management through virtualization. In early 2002, he became LeftHand's first customer, buying two NSMs for the hospital data center and two more for a network wiring closet.
"When I look at my EMC SAN, I have two frames, and they mirror each other completely. That doubles the cost to manage whatever storage I have," says Pelot. In contrast, the single console interface LeftHand provides is "very, very intuitive" and allows a single administrator to manage the EMC as well as the LeftHand environments, he says.
When the hospital provides him with more space in a new building, Pelot will use snapshot, remote copy and asynchronous replication to duplicate data between the NSMs in his current data center and the new, more secure location. "I don't have to duplicate my environment to still maintain high availability and disaster readiness," he says.
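Asynchronous replication of this kind acknowledges each write as soon as the local copy lands, then ships it to the remote site in the background, so the primary site never waits on the link between buildings. A rough Python sketch of the pattern (the queue-based design and stubbed I/O functions are illustrative, not LeftHand's implementation):

    # Illustrative asynchronous replication: the write path returns as
    # soon as the local copy lands; a background thread drains a queue
    # to the remote site. The actual I/O functions are stubbed out.
    import queue
    import threading

    replication_queue: queue.Queue = queue.Queue()

    def write(block_number: int, data: bytes) -> None:
        write_local(block_number, data)              # local write first
        replication_queue.put((block_number, data))  # remote copy later

    def replicate_forever() -> None:
        while True:
            block_number, data = replication_queue.get()
            write_remote(block_number, data)  # may lag the primary site

    def write_local(block_number: int, data: bytes) -> None: ...
    def write_remote(block_number: int, data: bytes) -> None: ...

    threading.Thread(target=replicate_forever, daemon=True).start()

The trade-off is that the remote copy can trail the primary by whatever sits in the queue, which is why asynchronous replication suits disaster readiness rather than synchronous mirroring's zero-loss guarantee.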
Pelot now has 5TB of raw capacity on the Clariions and 7TB on the IP SAN. In the future, he also expects to see clinical data "going to the IP SAN because it's more affordable and it's proving itself."
Application Acceleration
Customer: Wasatch Advisors Inc.
Problem: Underused direct-attached storage; poor application response time; need for cost-effective remote backup
Technology: SANsymphony from DataCore Software Corp.
Virtualization hasn't reached its ultimate goal of automated, policy-driven data migration across storage devices from any vendor. But it was good enough to pay for itself within nine months for Wasatch Advisors. The Salt Lake City-based mutual fund firm was running roughly 500GB of direct-attached storage on about 25 servers when it began looking for an alternative storage strategy in mid-2002, says CIO Dwight Ricks. With disk utilization at only 27%, Wasatch was buying far more disk than it needed. In addition, new compliance-checking software was slowing response time, as was the process of mirroring individual servers to an off-site location one by one.
In November 2002, Ricks purchased a Dell Inc. PowerVault 660F configured for RAID 10 mirroring with about 1.5TB of capacity. He chose DataCore's SANsymphony storage management software because of its performance and support of servers and storage from multiple vendors. Ricks is now using Fort Lauderdale, Fla.-based DataCore's asynchronous IP mirroring on-site, and by the end of the first quarter, he hopes to also be using it to mirror data off-site.
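RAID 10 stripes data across mirrored pairs of disks, so usable capacity works out to half of the raw disk installed, a rule of thumb worth remembering when sizing an array. A back-of-envelope helper (purely illustrative):

    # RAID 10 mirrors every stripe, so usable capacity is half of raw.
    # Purely illustrative arithmetic, not a sizing tool.
    def raid10_usable_tb(raw_tb: float) -> float:
        return raw_tb / 2.0

    print(raid10_usable_tb(3.0))  # 3TB raw -> 1.5TB usable
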
By placing storage on the Fibre Channel SAN, Ricks has reduced the performance hit from the compliance-checking application and improved response time for traders by 50%. Add that to the savings from making more efficient use of his disk space, and Ricks figures he made back his investment within nine months.
And by using SANsymphony's Dynamic Network Managed Volumes, he says he can set up and assign storage volumes "during production hours instead of having to come in on a weekend or late at night." To Ricks, virtualization means "I can do my job when I need to do my job, and it doesn't have any impact on the servers."