Why modern computing will kill traditional storage

"This company is dead. ... You know why? Fiber optics. New technologies. Obsolescence. We're dead alright. We're just not broke. And you know the surest way to go broke? Keep getting an increasing share of a shrinking market. Down the tubes. Slow but sure. You know, at one time there must've been dozens of companies making buggy whips. And I'll bet the last company around was the one that made the best goddamn buggy whip you ever saw. Now how would you have liked to have been a stockholder in that company?"

- Danny DeVito (as Lawrence Garfield) in Other People's Money (1991)

The proprietary, monolithic approach to storage is at odds with the future of computing, whether you approach things from the perspective of technology, architecture, deployment models or economics.

In my last post, I discussed one of the reasons driving a need for radical change in the storage industry -- the 60-plus percent annual growth in data creation and the incompatibility of that growth with a model that drives, at best, 15 percent annual reductions in the cost per unit of storage.

In this post, I'll address some of the architectural issues.

At the end of the day, IT can be boiled down to three things: computing, storage and networking. Major revolutions in IT (e.g. the move to client-server, the ongoing move to cloud) require major changes in all three areas.

It should be no surprise that computing has changed dramatically over the past 10 years. Just as no one would invest in a buggy whip company, you would be hard-pressed to find anyone interested in investing in a company that manufactured big iron computing systems (mainframes, supercomputers, and the like). This is obviously not because the need for reliable, powerful, centralized computing power has decreased. Rather, people have realized that computing became more robust, more reliable, more manageable and more economical as the following transformations occurred:

- proprietary software systems were replaced by open technologies like Linux and the rest of the LAMP stack;

- dedicated, single-use systems were replaced by virtualized architectures, which let multiple applications run on the same computer or a single application run on multiple computers; and

- large, monolithic, scale-up architectures were replaced by scale-out architectures, which build power by combining large numbers of redundant, small elements.


Indeed, my company's founder and CTO, Anand Babu Periasamy, was part of this revolution: he built the world's second-fastest supercomputer (Thunder) entirely out of cheap x86 boxes. He retains his love of scale-out but has, alas, lost the sweet mustache.

In other words, computing is now treated as a scale-out, virtualized, commoditized and centrally managed pool. An organization can own its own pool (a private cloud) or rent space in a pool someone else owns (a public cloud). In either case, the pool approach works and people are diving in.


Of course, if computing is going this way, storage and networking need to go this way as well. From an architectural standpoint, storage needs to support the new computing paradigm. It doesn't do you much good to move your applications around dynamically to take advantage of spare CPU cycles if the application data is still locked inside an expensive, inflexible box. It's not surprising that many now cite storage as the Achilles' heel of true data center virtualization. The situation gets even worse when one considers the challenges of deploying hybrid clouds, where the ease of moving virtual machines between data centers runs smack against the challenge of moving terabytes and petabytes of application data economically and efficiently between disparate data centers over (relatively) low-bandwidth connections.


Storage needs to do much more than just support the new computing paradigm. Inevitably, storage must begin to LOOK much more like computing: scale-out, open source, commoditized, virtualized and present in the cloud.

We've already seen the start of this transition. According to analyst firm ESG, shipments of scale-out storage systems (i.e. those built by letting users continuously add "small boxes") are going to surpass shipments of scale-up systems.

However, the cloud movement will demand much more fundamental changes. Not only must storage be delivered in increments (i.e. scale-out), it also must be delivered in a way that untethers the fundamental storage functions from any particular hardware or vendor. You can't dictate what storage hardware will be available in the cloud. Instead, storage must be treated as a software problem -- with a software solution.

Ben Golub is President and CEO at Gluster. He is on Twitter @golubbe.   

Copyright © 2011 IDG Communications, Inc.
