Consolidation Is Key

Computerworld's special report on storage covers the topics of cost-cutting strategies, disaster recovery, security and managing tape backups. So we asked Bill Peldzus, senior storage architect in the professional services division at Imation Corp. in Oakdale, Minn., to discuss each of those topics. The professional services division provides independent consulting and testing services on storage networks.

Are there some cost-cutting strategies that corporate IT managers should consider? Over the past year, virtually all of our storage projects have been directly mapped to some ROI target or to budget cuts. Our customers have to do more with less, or more with the same.

One trend is consolidation. You can use storage-area networking technologies such as Fibre Channel to start managing your storage as a virtual pool, as opposed to individual direct-attached storage for each one of your application servers. Instead of trying to manage seven distributed sites, it's much more cost-effective to manage one site that still has those seven applications. Analysts say that the cost of managing storage is seven to 10 times higher than the cost of actually purchasing the storage itself.

The consolidated SAN also makes it possible for the customer to add applications without needing additional resources to manage them.

Another benefit is the aggregation of storage. With seven distributed sites, you'd want an additional 10% of capacity at each site, just in case the application needs it -- a buffer. With a consolidated infrastructure, you don't need 10% plus 10% plus 10%; you can probably have just a 20% buffer on the whole pool, and distribute it to whichever of your seven applications needs it. That's attractive from an ROI standpoint.
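One way to make that arithmetic concrete is a statistical-multiplexing sketch: each distributed site must reserve headroom for its own worst-case spike, while a shared pool only needs to cover the spikes that can actually coincide. The numbers below are hypothetical, not from the article:

```python
# Hypothetical figures: seven applications, each reserving 100GB of headroom
# (10% of a 1TB application) when run as a separate distributed site.
sites = 7
headroom_per_app_gb = 100
concurrent_spikes = 2   # assumption: at most two apps spike at the same time

# Distributed: every site keeps its own private buffer, used or not.
distributed_buffer_gb = sites * headroom_per_app_gb          # 700 GB

# Consolidated: the shared pool only covers spikes that can coincide.
pooled_buffer_gb = concurrent_spikes * headroom_per_app_gb   # 200 GB

savings_gb = distributed_buffer_gb - pooled_buffer_gb        # 500 GB
```

The exact percentages depend on how workloads correlate; the point is that unused headroom is stranded at a distributed site but shareable in a pool.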

A third benefit is flexibility and scalability. You can easily add additional storage to new or expanding applications. You can plug 100GB into your SAN and allocate it to whatever application needs it.

Are there any smaller tactics, or "cheap tricks," that people can use to save money? There's tape sharing for backup and restore. In a distributed situation where you have a tape drive integrated with each server, you might be using that tape drive one or two hours a night to do backup, and then the other 22 hours it might be sitting idle. With a SAN, you can effectively share tape resources amongst all of the servers in the SAN. You can have one tape library, with fewer tape drives, and say this tape drive is going to be used for server No. 1, and when it's done backing up server No. 1, it can be used to back up server No. 2, and then you can go on to No. 3. Now that tape drive can be used eight or 10 hours a night, backing up more servers. Now you're using that tape drive more efficiently. And you're not buying a new tape drive with every new application server. With a SAN, you have the opportunity to put in a serverless, or LAN-free, backup system that allows you to share your tape drive.
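The utilization argument can be sketched with hypothetical numbers (six servers and two-hour backup windows are illustrative):

```python
# Hypothetical figures: six servers, each needing a two-hour backup window.
servers = 6
backup_hours_each = 2

# Distributed: one dedicated drive per server, busy 2 of 24 hours (~8%).
distributed_drives = servers
distributed_utilization = backup_hours_each / 24

# Shared SAN library: a single drive backs the servers up sequentially.
shared_drives = 1
shared_busy_hours = servers * backup_hours_each   # 12 hours a night
shared_utilization = shared_busy_hours / 24       # 50% busy
```

Six drives at 8% utilization become one drive at 50%, and each new application server extends the schedule instead of forcing a hardware purchase, as long as the combined backups still fit the nightly window.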

You can tell your executives that you can add 100 more Exchange users and gigabytes of storage, and still back up that data without any new backup hardware or software. That's definitely a good thing in this economic environment.

Are you seeing more interest in long-distance data replication for purposes of disaster recovery? Probably 50% of my recent projects have been focused on disaster recovery. We help customers figure out how big a wide-area network pipe they need and what's the best strategy for replicating from one site to another.

One of the approaches is asynchronous storage replication. At the primary site, when the application writes to disk, the storage comes back and says the write is complete, and the application continues to perform. Behind the scenes, the SAN is replicating the write to the other site, without telling the application it has to wait. One of the biggest benefits is that the application isn't aware of, and doesn't care, how you're getting data from one place to the other. If a disaster happens, you have a copy of your application software at your disaster-recovery site, and it will be pointing to the data that you were replicating over the wide-area network.
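A minimal sketch of the asynchronous pattern, with in-memory dictionaries standing in for the two sites and a background thread playing the role of the SAN replication engine (all names are illustrative):

```python
import queue
import threading

# In-memory stand-ins for the primary and disaster-recovery sites.
local_site = {}
remote_site = {}
replication_queue = queue.Queue()

def write(key, value):
    """Acknowledge as soon as the local copy lands; replicate later."""
    local_site[key] = value               # local write completes
    replication_queue.put((key, value))   # queued for the DR site
    return "ack"                          # application continues immediately

def replicator():
    # Background worker: drains the queue and copies writes to the DR site.
    while True:
        key, value = replication_queue.get()
        remote_site[key] = value
        replication_queue.task_done()

threading.Thread(target=replicator, daemon=True).start()

write("order-1001", "payload")
replication_queue.join()   # we wait here only to demonstrate; the app never does
```

The application sees only the immediate "ack"; the queue is exactly the window of data that could be lost if the primary site fails before replication catches up.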

The hidden challenge is that you have to be able to get your user base [connected] to the new application at a different site. So there are some networking challenges; you have to be able to reroute your customer to that new server.

There's another approach, called synchronous, in which you write data to both places and the application has to wait until that write is confirmed at both places. But over a very long distance you don't have the speed to do that, or the cost is prohibitive, so we don't see a lot of synchronous replication over wide areas.
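The distance limit can be made concrete with a back-of-the-envelope calculation (figures are illustrative; real links add protocol and switching overhead on top of propagation delay):

```python
# Hypothetical figures for a 1,000 km synchronous replication link.
distance_km = 1000
fiber_speed_km_per_ms = 200   # light in fiber travels roughly 200 km/ms

# A synchronous write can't complete until the remote site confirms it,
# so every write pays at least one full round trip.
round_trip_ms = 2 * distance_km / fiber_speed_km_per_ms   # 10 ms

# With one outstanding write at a time, throughput tops out near:
writes_per_second = 1000 / round_trip_ms                  # 100 writes/s
```

At metro distances the round trip shrinks to well under a millisecond, which is why synchronous replication is common across town and rare across the country.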

A third approach is a blended approach, where the customer understands that they have mission-critical and less-critical applications. You can ship tape off-site for your less-critical applications -- if they can be down for two or three days -- so you don't have to pay for the bigger pipe to replicate that data online. You also don't have to pay for the additional online storage at your disaster-recovery site. You're only sending the mission-critical data through the big pipe, and it's more cost-effective that way.

What are the security concerns in storage networking? First, take a half-step back and look at what your overall security infrastructure is today, not just storage. Is your network secure? You may lock one door, but the whole barn is open. We can help you secure your storage network, but have you secured your whole network infrastructure?

In terms of architectures, a Fibre Channel SAN has an inherent amount of security just because it's a physically separate network behind your servers, and people know how to secure their application servers.

On the other hand, with IP storage, you're using your existing LAN for storage, so you don't have as much control over who is on the LAN and what they're doing. You already have your entire user base accessing applications over the LAN, so you have to take security much more seriously -- any laptop could be sniffing packets.

If you're putting your storage on a wide-area network for disaster recovery, you're basically trusting the telcos to give you a secure connection from one site to another.

Storage security is in its very early stages, and vendors have some problems to solve. Any kind of security that involves encryption adds latency at each end. Storage is performance-intensive -- are you willing to take a 20% to 40% performance hit to secure your storage if it means your applications come screeching to a halt?

Special Report

Cheap & Secure Data Stores

Copyright © 2002 IDG Communications, Inc.
