Making storage easier for non-IT execs to understand

Mountains of corporate governance regulations, multiplying numbers of Web pages, and surging demand for keeping audio, video and still images in digital formats have made controlling storage a CIO-level issue over the last several years. The data flood has been good news for the giants in disk, tape and storage servers, however, as they supply the water wings that let companies wade through the accumulating terabytes of information.

Few companies have ridden the data storage waves more successfully than Hitachi Data Systems (HDS), the Japanese giant that is positioning itself as the ultimate back-end storage repository and supplier of tools to more intelligently control what is threatening to be an overwhelming deluge of information.

As a storage veteran (as well as a veteran of the Vietnam war, where he led a Marine Corps platoon), HDS chief technology officer Hu Yoshida has some strong views on how firms can better understand the issues surrounding fast-growing data volumes and capitalize on the information they hold. Despite the promise of open systems elsewhere in IT infrastructure and pledges of interoperability from storage vendors, Yoshida warns that big decisions on storage models still have to be right from the outset. And despite the perception that CIOs no longer need to have done the hard yards in the data center to be successful, Yoshida says CIOs are getting better at understanding what is happening to the underlying bits and bytes.

"It's becoming more and more important which technologies and architectures you follow because that will lock you in for the next three to five years and that's a lifetime in technology," Yoshida says, in a London interview partway through a European customer tour in August.

"I find CIOs are becoming more technical. The good CIOs have to interpret the technologies into economic sense."

That theory ties in neatly with HDS's push to explain what it calls "storage economics." This is "explaining storage in economic terms that a finance person can understand," he says, and central to its premise is a dismissal of the idea that buying storage kit is a one-off capital expenditure transaction. Instead, buyers have to factor in a range of operating expenditure issues and potential traps.

"Capex you pay for once, but opex includes power, cooling and people (that are becoming more expensive) and the cost of migrating is often more than the cost of the hardware itself," he says.

HDS consultant David Merrill developed the model several years ago, drawing on economic texts such as Information Economics by Marilyn Parker and Robert Benson. It treats storage as a holistic investment rather than judging it purely on price per megabyte, the gauge many firms were using at the time to measure value. Instead, storage economics proposes that return on investment, people costs (up to 40% of total cost of ownership, it is claimed), software, installation, training, write-off costs, potential for outage, waste, electricity and environmental factors should all be included and calibrated to better understand the underlying risk and reward of a storage purchasing decision.

Too often, firms rely on the ongoing decline of price-per-megabyte to bail them out, Yoshida says. "Many companies still bet on capex, which is absolutely wrong. They solve their problems by adding capacity. Budget in reality is going down today so buyers are very receptive regarding a storage economics decision."
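For readers who like concrete numbers, the arithmetic behind the storage-economics argument can be sketched in a few lines of Python. Every figure below is invented purely for illustration, not drawn from HDS or Merrill's model:

```python
# Hypothetical illustration of the "storage economics" idea: total cost of
# ownership (TCO) includes recurring operating costs, not just purchase price.
# All figures are made up for illustration.

def storage_tco(capex, annual_opex_items, years):
    """Sum the one-off capital expenditure and recurring operating costs."""
    annual_opex = sum(annual_opex_items.values())
    return capex + annual_opex * years

opex = {
    "power_and_cooling": 12_000,    # per year
    "admin_staff": 40_000,          # people costs can dominate TCO
    "software_licences": 8_000,
    "migration_amortized": 15_000,  # migration often rivals hardware cost
}

tco = storage_tco(capex=100_000, annual_opex_items=opex, years=5)
print(tco)                 # five-year total of ownership
print(100_000 / tco)       # capex as a shrinking fraction of the total
```

With these made-up figures, the one-off hardware purchase is barely a fifth of the five-year bill, which is the point Yoshida is making about betting on capex.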

SOS is the message

HDS is also playing to a broad management audience with its plans to make storage architectures more adaptive and flexible through software such as virtualization programs and de-duplication tools. Yoshida calls this "service-oriented storage," a play on service-oriented architecture, or SOA, the fashionable approach to server software that sees monolithic code being broken down into manageable components with associated metadata so that organizations are better able to replicate, change and instantiate processes.

"It's very similar to SOA in the server space where you have an XML layer and move apps and share modules like a billing module so the IT organization can align more closely with the organization as a whole," Yoshida says.

"That's happened with VMware on servers and everything is dynamic. When the business comes and says 'I want to run a campaign' and IT says 'You'll have it in three months,' that doesn't work because a marketing campaign is very time-sensitive. [By doing something similar to SOA for storage] we can become more agile and business-oriented. A service approach needs to be quantified in language the business understands."

But some would argue that the underlying problem with storage is that it hasn't followed the desktop/server road to standardization, with too many vendors developing proprietary hardware, software and firmware. Surely, the industry should have endorsed "the Dell effect" and storage should have become commoditized in the same way as PCs?

Yoshida insists that fundamental differentiation has been necessary in storage because the underlying complexity makes it hard to build in support for the nirvana of interoperability.

"You have to have that intelligence," he says. "The real problem [for seekers of interoperability] is there's a real difference in storage. It's not 'storage is storage is storage.' There is no platform for storage."

Yoshida suggests that if there is to be uniformity of approach it is likely to come from one vendor emerging as the dominant force -- and naturally he wants that vendor to be HDS. Sun and HP already resell HDS kit to their customers, he points out.

"They're all uniquely different [in their approaches to technology] but Sun and HP [sell HDS hardware] ... maybe IBM [one day] will use my technology."

Virtual differences

That differentiation in the development paths of storage suppliers has also had an effect on progress in virtualization. On the server side, the emergence of VMware and others has seen virtualization change the way data centers operate. So far, storage virtualization has advanced at a slower clip, however, despite the obvious attractions of pooling and better utilizing multiple systems.

"We had [companies like] FalconStor and Network Appliance do virtualization in the network, but to do that they had to remap the storage and couldn't enhance the performance," Yoshida says. "In most cases, they degraded the processing of the storage. [Virtualization and interoperability are] easier to do on the server because you're not worrying about persistence. Server processing is like water. You move it and it's gone.

"You've got higher-density blades, and you couldn't put them all in a rack because of power, and they were 10% utilized, so you put three, four or five servers onto one VMware server. In the storage world, it's more difficult because you have to move the data as well."

Despite most users having to rely on their storage hardware supplier, Yoshida contends that storage virtualization is already quite well advanced, aided by developments in deployment such as "thin provisioning" -- the ability to save admin time by automating the availability of storage through shared pools.

"We believe [storage virtualization] is already mainstream for large customers," he says. "The biggest waste is in the non-used space and the last job was to crack that open. And we've done that through thin-provisioning, also known as Hitachi Dynamic Provisioning."
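The mechanism behind thin provisioning can be sketched in miniature. The toy model below is an assumption-laden illustration of the general technique, not of Hitachi Dynamic Provisioning itself: volumes advertise a large virtual size, but physical pages are drawn from a shared pool only when first written.

```python
# A toy sketch of thin provisioning: capacity is promised up front,
# but physical pages are allocated from a shared pool only on first write.
# Hypothetical class names; not any vendor's actual API.

class ThinPool:
    def __init__(self, physical_pages):
        self.free_pages = physical_pages  # real capacity shared by all volumes

class ThinVolume:
    def __init__(self, pool, virtual_pages):
        self.pool = pool
        self.virtual_pages = virtual_pages  # advertised (virtual) capacity
        self.mapped = {}                    # virtual page -> stored data

    def write(self, vpage, data):
        if vpage not in self.mapped:
            if self.pool.free_pages == 0:
                raise RuntimeError("shared pool exhausted")
            self.pool.free_pages -= 1       # allocate only on first write
        self.mapped[vpage] = data

pool = ThinPool(physical_pages=100)
vols = [ThinVolume(pool, virtual_pages=1_000) for _ in range(5)]
vols[0].write(0, b"hello")
print(pool.free_pages)  # 99: 5,000 virtual pages promised, 1 physical page used
```

The "non-used space" Yoshida mentions is exactly the gap between the 5,000 pages promised and the one page actually consumed; the shared pool means that gap no longer has to be bought in advance.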

HDS's goal is to provide the back-end server that is the ultimate data repository for customers, and it contends that it is already fulfilling that ambition. Firms can't just keep on adding data volumes, Yoshida argues, and they will need to switch attention from virtualization on the server to virtualization on the desktop.

"They have no choice. The volumes are so large that they have to consolidate. The next big wave is virtualizing the desktop with one gold image [on the server]. That's going to give more consolidation."

Many businesses today are moving away from having applications installed locally and toward a model where they are delivered over the Web as a service. In the late 1990s, storage service providers emerged to do the same thing for storage. Could the model be successful this time around even though it appeared to die a death back then? Maybe, Yoshida believes.

"The reason it died a death is that the whole idea of a service provider is to leverage your resource across [many] consumers, but people wanted their own [private environment] and [the storage service providers] didn't have the management software," Yoshida says, predicting that the model will make some kind of return, perhaps led by large "data depots" or cloud-based data-centers, even though "for the enterprise there are still concerns about privacy and audit trails."

"The private enterprises will continue [to store inside their own firewalls] but the data depots will have more leverage and cost of sale will be lower," he predicts.

As for HDS's role, Yoshida says that accepted wisdom about storage architectures might need tweaking if the world moves to the concept of the cloud, which will require huge data center resources and capacities that make even today's enormous storage vaults seem paltry by comparison. But HDS's focus will remain on storage, he says, rather than reaching out to be a player in servers like rivals IBM, HP or Sun.

"It may require some change, but that would be good because the rest of the world catches up," Yoshida says. "We're the only vertically oriented storage company left and we don't intend to stray from that. The next big market is in the storage archive [where HDS built momentum last year by acquiring Archivas]."

Yoshida sees no let-up in stimuli for storage. "Unstructured data is growing faster with e-mail coming over the Internet," he says. "There are sensors for oil exploration, and an Airbus [plane] has over a terabyte of data."

But he does see greater adoption of smarter systems to help firms cut back on wasteful storage processes.

The watchwords for Yoshida are "consolidation, utilization, elimination," and he sees de-duplicated bitstreams and attachments, among other techniques, as ways to separate the wheat from the chaff and get rid of "stale data."
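The de-duplication idea reduces to a simple mechanism, sketched below under the common content-hashing approach (the class and its methods are hypothetical names for illustration): identical bitstreams are stored once, keyed by a hash of their contents, so an attachment e-mailed to a hundred people costs one copy.

```python
# A minimal sketch of content-hash de-duplication: identical data is
# stored once and referenced by its SHA-256 digest.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}    # digest -> bytes, stored exactly once
        self.refcount = {}  # digest -> number of logical references

    def put(self, data):
        key = hashlib.sha256(data).hexdigest()
        if key not in self.chunks:
            self.chunks[key] = data  # first copy: physically store it
        self.refcount[key] = self.refcount.get(key, 0) + 1
        return key                   # callers keep only the small key

store = DedupStore()
k1 = store.put(b"quarterly-report.pdf contents")
k2 = store.put(b"quarterly-report.pdf contents")  # same file, sent again
print(len(store.chunks))  # 1: the duplicate costs no extra space
```

Production systems add chunking, compression and collision handling on top, but the space saving on "stale" duplicates comes from this one lookup.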

Another push for HDS is assisting in the development of green data centers. This will involve addressing the electrical requirements and heat emissions of all elements of server-room infrastructure but will also serve ultimately to save on costs and help CIOs befriend their peers in facilities management, Yoshida argues.

However, relying on core component suppliers getting their respective houses in order will not be enough.

Data center designers will also have to address how best to make use of waste heat, sustainable power sources like solar power and other construction conundrums.

As for storage media, he is unconvinced that solid-state Flash-based memory can replace server disk and tape, although some firms are proffering it as an ultrafast alternative.

"Flash disk drives cost 20 or 30 times more than disk and we can do a lot of the same things with spinning media, for example, through wide-striping [spreading data over many disks]. There is some demand for Flash storage in handling financial information, but a lot can be done much faster in server memory."
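Wide-striping, the spinning-media technique Yoshida cites, is easy to picture: a volume's logical blocks are spread round-robin across many spindles so that each disk serves only a fraction of the work in parallel. A rough sketch, with an invented helper function:

```python
# A rough sketch of wide-striping: logical blocks land on disks
# round-robin, so load is spread evenly across many spindles.

def stripe_placement(n_blocks, n_disks):
    """Return the disk index serving each logical block (round-robin)."""
    return [block % n_disks for block in range(n_blocks)]

placement = stripe_placement(n_blocks=12, n_disks=4)
print(placement)  # [0, 1, 2, 3, 0, 1, ...]: each of 4 disks serves 3 blocks
```

Because a sequential read of those 12 blocks touches all four disks, throughput scales with the number of spindles rather than being bottlenecked on one drive.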

Wide-striping and other technologies like HAMR (Heat-Assisted Magnetic Recording) should help the business keep on its current rate of progress "for the next 20 years," he reckons. With demand for data storage so intense right now, that could be just as well.

This story, "Making storage easier for non-IT execs to understand" was originally published by CIO.

Copyright © 2008 IDG Communications, Inc.
