The software-defined data center is one of the technology trends pushing the industry into its next evolutionary stage. It's rife with innovation and has the potential to dramatically increase IT's agility in delivering services to operations, development teams and application owners.
This, my first post here at the Clouds, Flash and Big Data blog, is an introduction to the software-defined data center in general, and in particular how it relates to storage resources.
What is it?
The software-defined data center architectural model seeks to define data center resources in software—specifically compute, network, storage and security—in order to decouple them from their hardware boundaries and open up new levels of service agility. You could consider it as the evolution from server virtualization to the complete virtualization of the data center.
At the core of the software-defined data center is storage, or more specifically, software-defined storage (SDS). Unfortunately, the software-defined model lacks an industry standard definition.
- It seems like some vendors view SDS as a means to pool storage hardware resources and enable them to be programmatically defined in software.
- Other vendors position SDS as a technology that allows x86 servers with direct attached storage (DAS) to be configured as storage arrays, based on various shared-nothing architectures.
From my perspective, the latter is simply a new storage platform option, one enabled by software. The former, programmatically defined pooling, is a necessary ingredient of SDS.
SDS should be viewed as a hardware-agnostic means of delivering IT services. That definition provides the greatest benefit to IT departments, because it addresses the agility problem that exists in almost every data center.
The ability of SDS to publish storage service catalogs and allow resources to be provisioned on-demand and consumed based on policy is light years ahead of what is broadly available today in storage-area network (SAN) and network-attached storage (NAS) arrays, distributed DAS architectures or cloud service providers.
Here’s what I think are the three key properties of SDS:
1. Virtualized, policy-based storage services
For software-defined storage to truly be dynamic, you have to abstract data access and data services from pooled hardware resources. By liberating data access from hardware, you gain the ability to provide storage based on service levels, instead of hardware attributes.
Such policy-based storage services should include:
- Elastic performance and capacity
- Multi-tenant service separation
- SAN and NAS access
- Storage efficiencies
- Integrated data protection
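To make the idea of policy-based services concrete, here is a minimal sketch of what a service-level request might look like when it is decoupled from hardware attributes. Everything in it, the class, the field names, and the provisioning function, is hypothetical illustration on my part, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    """A hypothetical service-level policy: what the consumer needs,
    expressed with no reference to specific hardware."""
    tenant: str                        # multi-tenant service separation
    capacity_gb: int                   # elastic capacity
    min_iops: int                      # elastic performance
    protocols: list = field(default_factory=lambda: ["nfs"])  # SAN/NAS access
    dedup: bool = True                 # storage efficiencies
    snapshot_schedule: str = "hourly"  # integrated data protection

def provision(policy: StoragePolicy) -> dict:
    """Stand-in for an SDS control plane: map the policy onto whatever
    pooled hardware can satisfy it, and return the resulting service."""
    return {
        "tenant": policy.tenant,
        "capacity_gb": policy.capacity_gb,
        "guaranteed_iops": policy.min_iops,
        "exports": policy.protocols,
        "efficiency": "inline-dedup" if policy.dedup else "none",
        "protection": policy.snapshot_schedule,
    }

volume = provision(StoragePolicy(tenant="dev-team-a", capacity_gb=500, min_iops=2000))
```

The point of the sketch is that the consumer states service levels (tenant, capacity, IOPS, protection), and the mapping to physical resources is entirely the control plane's problem.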
2. Broad scope of storage options
When it comes to storage, one size doesn't fit all deployments. Each one asks a different question: do I need more performance, or more capacity? By standardizing storage capabilities in software, you can treat hardware as merely a design choice and the means to scale capacity or performance.
An SDS offering should support the broadest range of storage platforms, including:
- Purpose-built and optimized SAN/NAS arrays
- Storage gateways that virtualize existing storage arrays
- Commodity server disk, via virtual storage appliances and distributed DAS models
- Cloud and hybrid-cloud based storage services
3. Application self-service and open APIs
A software-defined data center empowers development teams and application administrators by accelerating IT service delivery.
From an SDS perspective this means delivering storage services through integrations in the native application administrative interfaces and providing programmable APIs for custom applications and workflow automation.
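As a sketch of what workflow automation against such a programmable API could look like, consider the snippet below. The endpoint path, payload fields and service-level names are all hypothetical, invented here for illustration rather than taken from any real SDS product:

```python
import json
from urllib import request

def request_volume(api_base: str, tenant: str, capacity_gb: int,
                   service_level: str) -> request.Request:
    """Build a (hypothetical) REST call that an automation workflow would
    issue to an SDS control plane to provision storage on demand."""
    payload = {
        "tenant": tenant,
        "capacity_gb": capacity_gb,
        # e.g. "gold" would map to IOPS and data-protection policies
        "service_level": service_level,
    }
    return request.Request(
        url=f"{api_base}/v1/volumes",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = request_volume("https://sds.example.local", "app-team-b", 250, "gold")
```

The request is built but not sent; the design point is that an application team can consume storage through a self-service call, with policy doing the rest.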
I’m excited to see the emergence of software-defined storage in support of software-defined data center initiatives. The broad market is beginning to recognize the value of decoupling storage services from storage devices and focusing on rich, integrated IT service delivery.
Admittedly, I’m biased, but I maintain that SDS is a major technical innovation. It will advance the delivery of IT resources on demand, and its effects will be directly felt by operations, development teams, and application owners.