
Vendor claims about storage virtualization flawed

Array-based virtualization is the best solution, and the next major step forward is thin provisioning

By Sandra Rossi
October 30, 2007 12:00 PM ET

Computerworld Australia - IT managers should be wary of vendor hype surrounding storage virtualization because the technology is poorly defined, misunderstood and not widely used, according to Dr. Kevin McIsaac, an adviser at research firm Intelligent Business Research Services Pty. (IBRS).

Despite all the hype, McIsaac said that, over the next two years, network-based storage virtualization will remain a niche, while thin provisioning will enjoy rapid adoption in the enterprise.
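
Thin provisioning is the technique of presenting a large logical volume to a host while allocating physical capacity only as data is actually written. The minimal sketch below illustrates that allocate-on-write idea; the class and names are hypothetical and not drawn from any vendor's product.

    # Toy thin-provisioned volume: physical blocks are allocated lazily,
    # on first write, rather than being reserved up front.

    class ThinVolume:
        def __init__(self, logical_size_blocks):
            self.logical_size = logical_size_blocks
            self.mapping = {}        # logical block -> physical block
            self.next_physical = 0   # next free block in the backing pool

        def write(self, lba, data):
            if lba >= self.logical_size:
                raise ValueError("write beyond logical size")
            if lba not in self.mapping:          # first touch: allocate
                self.mapping[lba] = self.next_physical
                self.next_physical += 1
            # (a real system would now store `data` at self.mapping[lba])

        def physical_blocks_used(self):
            return len(self.mapping)

    vol = ThinVolume(logical_size_blocks=1000000)  # host sees a huge volume
    vol.write(0, b"boot")
    vol.write(42, b"data")
    print(vol.physical_blocks_used())  # only 2 physical blocks consumed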

And while McIsaac readily admitted that server virtualization is one of the best IT infrastructure trends to emerge in many years, he said the situation is very different when it comes to storage virtualization.

"This idea of being able to layer virtualization over existing storage arrays is seriously flawed," he warned.

McIsaac said a reasonable definition of storage virtualization is "the abstraction of logical storage from physical storage." However, given the sweeping nature of this definition, it is not surprising that the technology creates confusion.

"The first step in understanding storage virtualization is to recognize that many of today's commonly used techniques and technologies are examples of virtualization, including a file system or a storage array," McIsaac said.

Rather than treating it as a specific new product or feature, McIsaac said, organizations should think of virtualization as a broad technique that can be deployed at any layer of the storage hardware and software stack to simplify the storage environment.
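
To make that abstraction concrete, the toy example below maps a logical block address onto a physical (device, block) pair, the same indirection a file system or array controller performs. The names are hypothetical and the striping scheme is deliberately simplistic.

    class LogicalVolume:
        """Maps logical block addresses to physical (device, block) pairs."""

        def __init__(self, devices, blocks_per_device):
            self.devices = devices              # e.g. ["disk0", "disk1"]
            self.blocks_per_device = blocks_per_device

        def resolve(self, lba):
            # Round-robin stripe: alternate blocks across the devices.
            device = self.devices[lba % len(self.devices)]
            physical_block = lba // len(self.devices)
            if physical_block >= self.blocks_per_device:
                raise ValueError("logical address beyond volume capacity")
            return device, physical_block

    vol = LogicalVolume(devices=["disk0", "disk1"], blocks_per_device=1000)
    print(vol.resolve(0))  # ('disk0', 0)
    print(vol.resolve(5))  # ('disk1', 2)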

"Network-based virtualization, which involves using a device in the network to provide an abstraction layer over storage arrays, is usually what vendors mean when they refer to storage virtualization," he explained.

"The idea is to layer virtualization over existing arrays to create a single storage pool, simplify management and eliminate vendor lock-in," McIsaac said. "But this idea has significant flaws."

Organizations typically moved to an external storage array, either via a storage-area network (SAN) or network-attached storage (NAS), to achieve higher utilization by sharing the same pool of spare disk across multiple servers.

McIsaac said that if an organization has not already achieved efficient utilization with its existing arrays, layering network-based storage virtualization over them to pool capacity is unlikely to fix the underlying problem.

"Network-based storage virtualization results in a lowest common denominator view of the infrastructure, eliminating the value-added features of the array. This investment in the advanced features of the storage array could be lost making it a waste of money," he said.

"Also, the addition of the virtualization layer adds yet more complexity to the environment; it can introduce a performance bottleneck and add yet another potential source of failure," McIsaac said. "And while it may eliminate vendor lock-in at the storage array, it replaces it with lock-in at the virtualization layer."

Reprinted with permission from Computerworld Australia. Story copyright 2012 Computerworld Australia. All rights reserved.