Is tiered storage obsolete? Yes and no!

[Image: Hard drive. Credit: Barney Livingston (cc:by-sa)]

Traditionally, discussions about enterprise storage economics revolved around the concept of a multi-tiered model. In reality, this often had more to do with vendor portfolio economics than customer requirements. But the emergence of flash has changed this forever. 

Most storage experts now talk in terms of a simple two-tier model: a performance tier based on solid-state (flash) technology, and a capacity tier that uses good old electro-mechanical magnetic media, namely disk and tape.

However, when you add software-defined storage (SDS) to the mix, it may be time to move past the conventional concept of tiering based on performance.

Tier-1 storage is dead. Long live the performance tier

In the past, the top tier of storage was mainly defined by performance, with the old “frame-based” systems designated as Tier-1 and each subsequent storage tier characterized by progressively lower cost-per-TB, along with lower performance (aka “cheaper and deeper”).

Tier-1 storage typically meant engineering performance out of thousands of disks. Striping data across many spindles spread the I/O load and added parallelism, increasing the total number of I/O operations per second (IOPS). Short-stroking the drives pushed IOPS still higher, at the cost of “wasting” most of the drives’ capacity.
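
As a rough, back-of-the-envelope illustration of that trade-off (every figure below is an assumption chosen for illustration, not a measurement from any particular array), the spindle arithmetic looks something like this:

```python
# Rough aggregate-IOPS estimate for a striped disk array.
# All figures are illustrative assumptions, not vendor specifications.
DRIVES = 1000                 # spindles striped together
IOPS_PER_DRIVE = 180          # ballpark for a 15K RPM drive
CAPACITY_PER_DRIVE_TB = 0.6   # raw capacity per drive
SHORT_STROKE_GAIN = 1.4       # assumed IOPS boost from using only the outer tracks
SHORT_STROKE_USABLE = 0.25    # fraction of capacity actually used when short-stroking

full_stroke_iops = DRIVES * IOPS_PER_DRIVE
short_stroke_iops = full_stroke_iops * SHORT_STROKE_GAIN
raw_tb = DRIVES * CAPACITY_PER_DRIVE_TB
usable_tb = raw_tb * SHORT_STROKE_USABLE

print(f"Full stroke:  {full_stroke_iops:,.0f} IOPS across {raw_tb:,.0f} TB")
print(f"Short stroke: {short_stroke_iops:,.0f} IOPS, but only {usable_tb:,.0f} TB usable")
```

The aggregate IOPS grow only by adding (or short-stroking) more spindles, while most of the capacity you paid for goes unused, which is the economic gap flash closed.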

These solutions were incredibly expensive, but from an economic point of view, they were justified because the added performance could accelerate business results, speed scientific discoveries, etc. Those benefits were particularly strong for structured analytics, but today, flash technology has completely changed the economics.

What used to require racks of high-end drives, a million dollars, and an entire team of resident storage experts has been replaced by solutions that fit in a 2U form factor, can be installed in less than an hour, and cut the time it takes to get results from analytics by an order of magnitude, at a fraction of the cost.

Thar’s gold in them thar capacity tiers

The “other tiers” used to be where you kept your less time-sensitive data, so they were correspondingly slower and cheaper. However, from an economic point of view, it’s now possible to see all of that stored data as an overlooked resource: one that can be extracted and refined using new techniques.

It’s the digital equivalent of fracking. Companies that understand how to unlock the value buried in these data assets are the ones that are likely to be successful over the long term. But that success will require the ability to manage and extract value in a manner that scales with the data itself. Without a rich data management capability, all of that stored data may instead appear to be a cost burden, leading to a focus on cost reduction at the expense of business insights.

Is software-defined storage the new tiering paradigm?

Compared with commodity storage, the price premium for enterprise data storage is almost entirely based on the sophistication and reliability of the data management features that come with the arrays. In fact, one of the main themes in software-defined storage is that data management, the “control plane,” should be separated from where the data resides, the “data plane.”

The aim is to reduce the acquisition costs of these enterprise storage systems. So, in some respects, software-defined storage is about creating a new tiering model, one based primarily on data management. And storage industry pundits are already making the case that with the emergence of software-defined (or application-centric) data management, the age of the enterprise SAN may be coming to a close.

While the economic angle for SDS is interesting, when I listen to enterprise customers and service providers talk about software-defined storage, what I mostly hear is a desire to eliminate storage-specific management from application workflows. At a recent industry event, I heard from application and virtual infrastructure admins who want to provision, manage, and protect apps using terms they understand, without requiring intervention by storage professionals. They want storage with the appropriate policies and service levels to be provisioned automatically when apps and VMs are created.

Furthermore, they want to be able to change policies and service levels without requiring special knowledge of the underlying infrastructure implementation. Although storage cost is part of the conversation, it is not the primary issue driving interest in a software-defined approach. Operational efficiencies and the ability to respond quickly to business needs are typically more important, and those requirements demand a storage infrastructure that can deliver flexible data management integrated with application and virtual infrastructure management tools.
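
As a sketch of what that looks like from the admin’s side, here is a minimal, hypothetical example: the StoragePolicy fields, the “gold” policy, and the provision_volume() call are all invented for illustration and do not correspond to any particular vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    """Hypothetical service-level policy: the admin states the desired outcome,
    not the array model, RAID level, or LUN layout underneath."""
    name: str
    max_latency_ms: float       # performance expectation
    snapshot_interval_min: int  # protection expectation
    replicated: bool            # disaster-recovery expectation

GOLD = StoragePolicy("gold", max_latency_ms=2.0, snapshot_interval_min=15, replicated=True)

def provision_volume(vm_name: str, size_gb: int, policy: StoragePolicy) -> dict:
    """Hypothetical control-plane call: the SDS layer chooses a backend that can
    meet the policy and hands back a reference the VM workflow can use."""
    return {"vm": vm_name, "size_gb": size_gb, "policy": policy.name}

# Storage is provisioned as part of VM creation, with no storage-specific steps.
print(provision_volume("erp-db-01", size_gb=500, policy=GOLD))
```

Changing a service level later would simply mean attaching a different policy; the admin never needs to know which array, pool, or LUN sits underneath.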

All of this is hard to do, or at least increasingly hard to manage, with each application defining and managing its own storage pool, generally based on the evil LUN. Automation can be difficult when every application has different capabilities, a different way of doing things, and different APIs.

The logical conclusion? To achieve the benefits of software-defined storage, you should select a primary data management system that covers the vast majority of your infrastructure, regardless of where and how that infrastructure is deployed.
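
To make the point about differing APIs concrete, here is a minimal sketch of one management contract fronting dissimilar backends; the class and method names are hypothetical and stand in for whatever each product actually exposes.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One common contract for automation, regardless of what sits underneath."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

class BlockArrayBackend(StorageBackend):
    def create_volume(self, name: str, size_gb: int) -> str:
        return f"array-lun://{name}"    # would wrap the array's native API

class ObjectStoreBackend(StorageBackend):
    def create_volume(self, name: str, size_gb: int) -> str:
        return f"object://{name}"       # would wrap an object store's API

def provision(backend: StorageBackend, name: str, size_gb: int) -> str:
    # The automation is written once, against the common contract.
    return backend.create_volume(name, size_gb)

print(provision(BlockArrayBackend(), "erp-db-01", 500))
print(provision(ObjectStoreBackend(), "analytics-archive", 5000))
```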

Expectations for SLA-based tiering

However, I’m not saying you should put all your data management eggs into one basket. There’s a good argument for having more than one approach, within reason.

When the entirety of the data management can be done within an application, it is possible to forgo the premium for a built-in enterprise data management system and use more economical storage systems with a limited feature set. However, the question should not be, “do I do infrastructure-based data management or application-based data management?” Rather, it should be, “which is most appropriate for my environment at this time?”

This is where conversations around “tiering” for data management will start to sound similar to those we have traditionally had around tiering for performance.

Who knows? It might not be too long before we start having automated data management tiering based on higher level SLAs. In fact, as we move from being custodians of infrastructure and towards being managers of service levels, this seems almost inevitable.
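
A minimal sketch of what automated, SLA-driven tier selection could eventually look like; the tier names and thresholds here are assumptions chosen purely for illustration.

```python
# Map a dataset's service-level requirements onto a data-management tier.
# Tier names and thresholds are illustrative assumptions, not a standard.
def select_tier(latency_target_ms: float, rpo_minutes: int) -> str:
    if latency_target_ms <= 1.0:
        return "flash-performance"     # latency-sensitive workloads
    if rpo_minutes <= 15:
        return "replicated-capacity"   # tight recovery-point objectives
    return "archive"                   # everything else, cheap and deep

print(select_tier(latency_target_ms=0.5, rpo_minutes=60))     # flash-performance
print(select_tier(latency_target_ms=10.0, rpo_minutes=5))     # replicated-capacity
print(select_tier(latency_target_ms=50.0, rpo_minutes=1440))  # archive
```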
