Every IT person knows this painful truth: user attention to underlying IT operations is rarely a good thing. This is especially true when an application or storage system has halted or slowed to a crawl because the infrastructure isn't performing as it should.
Raul Robledo, a storage specialist at the Trumbull, Conn., office of Affinion Group Inc., recently experienced this firsthand. Earlier this year, the global marketing company's 50TB storage-area network (SAN) began suffering severe outages because of what Robledo later determined was bandwidth saturation on the SAN's interswitch links (ISLs). A number of the company's external, Web-facing applications depended on the availability of data residing on that SAN. "We had too much traffic going through the ports, and that caused applications to spawn additional processes that weren't getting a response back. This started a big chain reaction that began to take some of our [Web] applications down," he says.
Affinion's SAN is a dual-fabric environment consisting of three EMC Clariion storage arrays and a 3PARdata array connected via Brocade 3800 and Brocade 4100 Fibre Channel switches. To help diagnose and correct the ISL bandwidth-saturation problem, one of Affinion's SAN administrators used Orca, from open-source provider OrcaWare Technologies, to gather and plot data from the Brocade switches via Simple Network Management Protocol (SNMP). (Orca, which plots arbitrary data from text files into a Web server-hosted directory, is also used by the group's Unix administrator to plot server performance.)
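The article doesn't detail how that polling was set up, but a minimal sketch of the underlying idea -- sampling a switch port's SNMP traffic counters and flagging an inter-switch link that is approaching saturation -- might look like the following. The hostname, community string, port index and 2Gbit/s link speed are illustrative assumptions, not details from Affinion's environment.

```python
#!/usr/bin/env python3
"""Sketch: watch one switch port's IF-MIB counters over SNMP and warn
when the link nears saturation. Assumes the net-snmp 'snmpget' CLI,
SNMP v2c access to the switch, and a 2Gbit/s link; the hostname,
community string and ifIndex below are placeholders."""
import subprocess
import time

SWITCH = "fc-switch.example.com"      # hypothetical switch hostname
COMMUNITY = "public"                  # placeholder SNMP community string
ISL_IFINDEX = 16                      # placeholder ifIndex of the ISL port
LINK_BITS_PER_SEC = 2_000_000_000     # assumed 2Gbit/s Fibre Channel link
ALERT_THRESHOLD = 0.80                # warn above 80% utilization

IN_OCTETS = ".1.3.6.1.2.1.31.1.1.1.6"    # IF-MIB::ifHCInOctets
OUT_OCTETS = ".1.3.6.1.2.1.31.1.1.1.10"  # IF-MIB::ifHCOutOctets

def snmp_counter(oid: str) -> int:
    """Fetch one 64-bit counter value via the net-snmp command line."""
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv",
         SWITCH, f"{oid}.{ISL_IFINDEX}"],
        text=True,
    )
    return int(out.strip())

def isl_utilization(interval: float = 30.0) -> float:
    """Sample the counters twice; return the busier direction's load (0..1)."""
    in1, out1 = snmp_counter(IN_OCTETS), snmp_counter(OUT_OCTETS)
    time.sleep(interval)
    in2, out2 = snmp_counter(IN_OCTETS), snmp_counter(OUT_OCTETS)
    rx_bits = (in2 - in1) * 8
    tx_bits = (out2 - out1) * 8
    return max(rx_bits, tx_bits) / (LINK_BITS_PER_SEC * interval)

if __name__ == "__main__":
    util = isl_utilization()
    print(f"ISL utilization: {util:.1%}")
    if util > ALERT_THRESHOLD:
        print("WARNING: interswitch link approaching saturation")
```

Run from cron against every ISL in the fabric, even a crude check like this could surface the kind of saturation Robledo describes before applications start failing.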
Although Orca proved important in this instance, Robledo says he realized he needed a tool specifically designed to keep the SAN performing optimally. Orca and similar tools tend to require more manual work, knowledge and customization than products built for real-time SAN performance monitoring, he says.
Robledo began searching in earnest for a robust SAN performance monitoring tool that would let him address problems before they came to users' attention. After all, his team's ability to meet ongoing service-level agreements was at stake.
He turned to Onaro Inc., a storage service management vendor. For the past year, Robledo's team had been using Onaro's SANscreen Foundation software to monitor and report on storage operations. After the ISL outage, he decided to see whether Affinion could benefit from Onaro's recently released Application Insight 2.0. The software offers a real-time, application-to-array picture of storage resource use and efficiency while providing application-centric monitoring and reporting on the performance of the storage infrastructure, according to Onaro.
Having conducted proof-of-concept testing of Application Insight, Robledo says he believes the tool would help head off performance issues before they become problems. "By using a combination of both products -- [SANscreen] Foundation and Application Insight -- we could be alerted in real time of any performance spikes and hopefully be informed of any issues that could cause an outage before someone calls from the business line," he says. "We wouldn't need to get inquiries or notification from individuals. We would be getting those right from a product that's monitoring our environment."
Because Insight also shows port use, his team would be able to provision storage more effectively, Robledo says. The team would be able to configure the hosts to send or receive data through specific switches and storage ports. "This would let us define a host with certain storage buckets and assign which applications those belong to. So, when we look at performance, we could then see which applications are on which switches, including the storage that is on the specific arrays," he says. "We could then see a pattern of which applications or hosts are resource-intensive from a storage perspective, and maybe start to utilize storage for that application on another array."
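Onaro doesn't publish its internal data model, but the approach Robledo describes -- tagging each host, switch port and array with the application it serves, then rolling per-port throughput up to the application level -- can be illustrated with a rough sketch like the one below. Every name and number in it is invented for the example.

```python
"""Illustrative sketch (not Onaro's data model): map applications to
their host/switch/port/array paths, then aggregate per-port throughput
by application to spot the heavy consumers."""
from collections import defaultdict

# application -> list of (host, switch, port, array) assignments (made up)
APP_MAP = {
    "web-frontend": [("web01", "switch-a", 3, "array-1"),
                     ("web02", "switch-a", 4, "array-1")],
    "billing":      [("db01",  "switch-b", 7, "array-2")],
}

# per-port throughput samples in MB/s, keyed by (switch, port) (made up)
PORT_MBPS = {("switch-a", 3): 120.0,
             ("switch-a", 4): 95.0,
             ("switch-b", 7): 310.0}

def throughput_by_app():
    """Roll port-level throughput up to the application level."""
    totals = defaultdict(float)
    for app, paths in APP_MAP.items():
        for _host, switch, port, _array in paths:
            totals[app] += PORT_MBPS.get((switch, port), 0.0)
    return dict(totals)

if __name__ == "__main__":
    for app, mbps in sorted(throughput_by_app().items(),
                            key=lambda kv: kv[1], reverse=True):
        print(f"{app:15s} {mbps:8.1f} MB/s")
```

A product such as Application Insight collects and correlates this automatically; the sketch only makes the mapping-and-rollup idea explicit.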
Affinion is not alone in considering real-time storage management products to help optimize SAN performance, says Mike Karp, a senior analyst at Enterprise Management Associates (EMA). Software that performs application-centric data management and root-cause analysis, or that manages storage in the context of the networks and other systems around it, is gaining in popularity, he says. In addition to the Onaro tools, the storage-optimization category includes EMC Corp.'s Smarts, MonoSphere Inc.'s Storage Horizon, Hewlett-Packard Co.'s Storage Essentials and Hitachi Data Systems Corp.'s HiCommand suite, Karp and other analysts say.
Optimizing storage in a virtual world
Other technology options are available, and any self-respecting storage hardware or data management software vendor today says its offerings help optimize storage resources. Many undoubtedly do. But two other technologies -- storage virtualization and archiving -- are generating the most user interest, experts say.
Storage virtualization, a new data center staple, is much touted for its ability to combine disparate physical storage systems (often from different vendors) into one logical pool whose collective resources can be managed and provisioned more easily. The technology is coming of age, as many enterprises get close to their SANs' three-year end of life, says Josh Howard, a storage specialist at reseller CDW. "As you look toward moving into the next [storage] frame, you have data migration issues where you may have to look at a data migration, professional services engagement, or schedule a lot of downtime to move that data into the new frame," Howard says.
Storage virtualization products -- such as FalconStor Software Inc.'s IPStor, IBM's SAN Volume Controller and the HDS TagmaStore Universal Storage Platform -- address some of this pain by performing much of the data migration in the background, Howard says. They can even help organizations reuse some of their now end-of-life storage systems by relegating them to a new role as lower-level storage tiers or backup targets for snapshot-type data sets.
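None of these products works exactly alike, but the mechanism behind transparent, background migration can be sketched conceptually: the virtualization layer keeps a map from logical extents to physical locations, and migrating data amounts to copying an extent and repointing its map entry while host I/O continues to flow through the map. The toy code below illustrates only that idea; it is not how IPStor, SAN Volume Controller or the USP are actually implemented.

```python
"""Conceptual sketch of transparent migration in a block-virtualization
layer: hosts address logical extents, the layer translates them through
a map, and migration repoints map entries one extent at a time."""

class VirtualVolume:
    def __init__(self, extent_map):
        # logical extent number -> (array name, physical extent number)
        self.extent_map = dict(extent_map)

    def read(self, logical_extent):
        """Hosts always read through the map, wherever the data lives."""
        array, phys = self.extent_map[logical_extent]
        return f"read extent {phys} from {array}"   # stand-in for real I/O

    def migrate_extent(self, logical_extent, new_array, new_phys, copy_fn):
        """Copy one extent to the new array, then repoint its map entry."""
        old_array, old_phys = self.extent_map[logical_extent]
        copy_fn(old_array, old_phys, new_array, new_phys)  # background copy
        self.extent_map[logical_extent] = (new_array, new_phys)

# After migrate_extent() completes, the same logical extent is silently
# served from the new array; the host never changes how it reads.
```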
From the perspective of storage optimization, the virtualization argument becomes one of flexibility and greater utilization, Howard says. "Virtualization enables flexibility, including across different brands of storage. It gives you the ability to buy truly cheap disks, not the big vendors' version of cheap disks," he says, citing products from companies such as Nexsan Technologies Inc. "That's inexpensive and can work as your backup target, while your production system remains an [EMC] Symmetrix or [HDS] USP," he says.
Likewise, from a storage-utilization perspective, Howard says organizations with multiple storage frames from various vendors will probably see use rates jump from 40% of available disk space to as much as 70% to 80% as a result of implementing virtualization.
For archiving, Howard and EMA's Karp cite as examples CA Inc.'s iLumin, EMC's Email Xtender and Disk Xtender, Symantec Corp.'s Enterprise Vault and Zantaz Inc.'s Enterprise Archive Solution. These archival applications help translate policies into computer-driven rules that automate the movement of data from high-performance production disk arrays to lower-level storage tiers.
Policy-based management tools not only help automate the environment but also capture what Karp terms "senior staff intelligence" and best practices. These are translated into policies that empower junior employees to perform many tasks that previously had been the domain of more experienced colleagues.
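As a concrete illustration of what such a computer-driven rule might look like, the short sketch below encodes one hypothetical policy: move files that haven't been touched in 180 days from the production share to a cheaper archive tier. The paths and age threshold are made up, and commercial archiving products add indexing, stubbing and retention handling that this ignores.

```python
"""Minimal sketch of an age-based archiving rule: walk the production
share and move files idle for more than MAX_AGE_DAYS to the archive
tier. Paths and threshold are hypothetical examples."""
import os
import shutil
import time

PRODUCTION = "/mnt/tier1/projects"   # hypothetical production file share
ARCHIVE = "/mnt/tier3/archive"       # hypothetical low-cost archive tier
MAX_AGE_DAYS = 180                   # policy: archive after 180 idle days

def apply_archive_policy(dry_run=True):
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for root, _dirs, files in os.walk(PRODUCTION):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_mtime < cutoff:
                rel = os.path.relpath(src, PRODUCTION)
                dst = os.path.join(ARCHIVE, rel)
                print(("would move" if dry_run else "moving"), src, "->", dst)
                if not dry_run:
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)

if __name__ == "__main__":
    apply_archive_policy(dry_run=True)   # report first, move once approved
```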
Data deduplication technology, which reduces the amount of redundant data and is one of the biggest draws in the data backup market, is an optimization favorite, Howard says. He has heard of organizations using data-deduplication software from vendors such as Data Domain Inc., Diligent Technologies Corp., ExaGrid Systems Inc., FalconStor Software Inc. and Quantum Corp. that have been able to remove duplicate data and compress remaining backup sets enough to store the equivalent of 20TB to 30TB of backup data on a 1TB disk.
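The products Howard mentions differ in their chunking and indexing schemes, but the core idea behind those reduction ratios can be shown in a few lines: split the backup stream into chunks, fingerprint each chunk, and store a chunk only the first time its fingerprint is seen. The toy example below uses fixed-size chunks and an in-memory store purely for illustration; real products use variable-size chunking, compression and on-disk indexes.

```python
"""Toy block-level deduplication: store each unique chunk once and keep
a per-backup 'recipe' of chunk fingerprints for reassembly."""
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity

def dedupe(data: bytes, store: dict):
    """Return the recipe (list of chunk hashes) and update the chunk store."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # stored once per unique chunk
        recipe.append(digest)
    return recipe

def restore(recipe, store: dict) -> bytes:
    """Reassemble the original data from the recipe and chunk store."""
    return b"".join(store[d] for d in recipe)

if __name__ == "__main__":
    store = {}
    backup = b"A" * 8192 + b"B" * 4096 + b"A" * 8192   # highly redundant data
    recipe = dedupe(backup, store)
    stored = sum(len(c) for c in store.values())
    print(f"logical {len(backup)} bytes -> stored {stored} bytes "
          f"({len(store)} unique chunks)")
    assert restore(recipe, store) == backup
```

The ratio of logical bytes backed up to bytes actually stored is what vendors quote as the deduplication ratio; highly repetitive backup sets are what make figures like 20TB-to-1TB possible.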
Optimizing storage on multiple fronts
Optimization often takes a combination of technologies. That's the case at Baylor College of Medicine in Houston. The school's IT team, which recently implemented a disk-to-disk backup product from Network Appliance Inc. along with HDS storage-virtualization technology, knows something about the role technology can play in making things run better.
Baylor had been relying on tape for weekly backups of approximately 40TB of file data and another 5TB of application data, and those backups had been taking longer and longer to finish. "We'd spend countless hours backing up just the [storage] volumes," says Michael Layton, director of enterprise services and information systems at the college.
That was before Layton and Vo Tran, manager of enterprise servers and storage, began using NetApp disk storage systems and SnapVault software to replicate NetApp Snapshot data copies to separate, secondary storage. For primary storage, Baylor uses a NetApp FAS980C two-node cluster. The secondary SnapVault backup target is a NetApp FAS6070 storage system, Tran says.
In moving from tape to disk via SnapVault, the college has shortened backup and restoration time to one-tenth of what it was before, Layton says. Plus, his team is on track to make it possible for the college's internal users to recover lost files on their own, he says. The self-service recovery server will appear as another "recovery-oriented" file share to users, with a file directory structure similar to that of their primary file share. If they inadvertently delete a file, or if a file becomes corrupted, they will be able to point to the recovery server, where they can locate and copy over the original file easily. That's significant from a self-healing perspective, Layton says.
On another front, given the college's recent acquisition of the HDS TagmaStore USP for storage virtualization, Layton and Tran are looking forward to providing customers storage capacity on demand while consolidating from eight SAN management interfaces to one. More important, Layton expects the move will let his dedicated storage personnel manage twice the amount of storage with no additional head count -- going from what amounts to 90TB of Fibre Channel and Serial Advanced Technology Attachment (SATA)-based network storage to as much as 170TB over the next few years.
No strangers to virtualization, Layton and Tran also took advantage of the NetApp V-Series V980C virtualization system earlier in the college's network-attached storage (NAS) consolidation efforts, using it to ease the pain of migrating files to the new NetApp-based NAS systems and gateways, which are also backed by HDS SAN storage.
With optimization, storage managers get more done in shorter windows, no longer fret over backups and recoverability, and don't get caught up with "putting out fires," Layton says. "They can actually do something more proactive to manage our storage environment."
How to save millions (vs. thousands) through optimization
While it's undoubtedly invaluable, technology often offers only part of the solution to storage optimization. "If you don't know how to drive and you're driving a broken car, buying a new car will not fix your problem," says Ashish Nadkani, principal consultant at GlassHouse Technologies Inc., an enterprise storage consulting firm.
Although many corporations have undertaken storage-tiering and data-classification initiatives, pinpointing exactly how much money they've saved as a result is difficult, Nadkani says. Cost-cutting efforts can be undermined when a storage array or RAID type is not optimally matched to the application, he says.
Mark Diamond, CEO at storage consulting firm Contoural Inc., puts the issue another way. This isn't about buying new stuff to optimize your storage, he says. Instead, it's about determining whether the data you've created is stored in the right place. This discussion goes beyond the basic concept of using inexpensive disk to store data and delves into how the disk is configured, especially when it comes to replication and mirroring.
"We typically see that 60% of the data is overprotected and overspent, while 10% of the data is underprotected -- and therefore not in compliance with SLAs [service-level agreements]," Diamond says. "Often, we can dramatically change the cost structure of how customers store data and their SLAs, using the same disk but just configuring it differently for each class of data."
One case in point is a recent analysis that Contoural performed for a large manufacturer that used three storage tiers. After assessing the different types of data and their needs for replication, the Contoural team recommended a more detailed, six-tiered storage environment. The company's estimated savings are pegged at more than $8 million over the next three years. This includes the ability to defer further Tier 1 storage hardware acquisitions for as long as two years.
Optimization technologies, such as virtualization and deduplication, are excellent and can probably save an organization thousands of dollars, Diamond says. But if you take the bigger picture of optimizing, not just storage but the data residing on it, "you can save millions," he says.
Hope is a freelance writer who covers IT issues surrounding enterprise storage, networking and security. She can be reached at mhope@thestoragewriter.com.
This story, "The best tweaks for greater storage performance," was originally published by Network World.