A major shift is underway in data center infrastructure procurement and design. Many public and private entities have moved away from large purchases of individual hardware and software components (servers, storage and applications) and toward pre-packaged bundles, building blocks and PODs. This trend is part of a more modular data center design approach based on vendor-tested and certified reference architectures. These modular building blocks typically include all of the hardware and software components and customizations required to run the applications, and they form the foundation of consolidated private clouds.
A reference architecture includes hardware and software components that have been selected, tested and tuned by the vendor rather than by the IT staff. This is a good thing for frazzled IT folks who have their hands full just keeping legacy applications running. Since the hard work of designing the solution and implementing and tuning its components has already been done, IT teams can focus on providing better services rather than getting sucked into mundane tasks such as ensuring a particular application server is implemented properly. And because the vendor has already tuned the solution, the IT team no longer needs to worry about whether the application infrastructure delivers the required performance and reliability.
Examples of these modular building blocks abound: consider Oracle's Exadata, EMC's Vblocks, HP CloudSystem, HDS VSP, IBM's CCMP reference architecture, and PODs from NetApp and FalconStor. Most of these solutions address some or all of the requirements for the four core elements of what IT does.
I use the term "data services engine" to describe what these reference architecture bundles are all about: providing enterprise-level data services for the applications they host. The graphic below shows a sample software stack for a comprehensive reference architecture that provides all the data services most applications require. Notice that the stack also includes the recent technical innovations that optimize data protection and replication, such as deduplication, continuous data protection, snapshots and delta versioning, which should have the added benefit of reducing backup and business continuity costs. Security is built in via encryption in flight and at rest, so the bundle can be leveraged as a cloud-based building block, and the included virtualization layer allows all aspects of the solution to be implemented on any hardware platform.
A reference architecture can have a dramatic impact on IT efficiency, and it will simplify building out data center infrastructure. Be careful though, as it's easy to get locked in to a particular vendor solution which could limit your choices and increase costs in the long run. Be sure that the solution is open enough so you can complement the architecture by introducing competing and perhaps more cost-effective components in the future.
You should be able to roll your own solution by taking à la carte components already used and certified inside one vendor's reference architecture and combining them with another vendor's solution. An example would be using Cisco UCS blades together with HDS HUS virtualized storage, with VMware providing virtualization on the server side. You might like the manageability of the Cisco servers but also want the virtual storage services and reliability provided by the HDS storage.
Some organizations are going one step further and simply outsourcing their infrastructure to the cloud, where a third party will build, run and manage their data centers. This may work for some, but be careful again not to get locked in. Once you lose control over the design of the data center, you can also lose control over the costs unless terms are baked into the contract.
There is a shift in data center design strategy taking place. CIOs are asking their IT managers to stop building infrastructure from scratch and instead rely on reference architectures that will form the basis of the corporate private cloud. IT will move from procuring components to procuring appliances, bundles and PODs that are pre-tuned to accomplish specific tasks. These modular building blocks, or data services engines, can vastly simplify data center design and have a huge impact on IT's bottom line.
Christopher Poelker is the author of Storage Area Networks for Dummies, and has over 30 years of experience architecting storage, backup and disaster recovery solutions. Chris specializes in storage virtualization and data protection, and is currently the vice president of Enterprise Solutions at FalconStor Software.
This article is published as part of the IDG Contributor Network.