Five questions to ask about data center optimization

In a down market, many organizations look to reduce costs. One tried-and-true method of cost reduction is to review existing IT operational procedures to determine where added efficiencies can reduce operational budget requirements. Data center administrators have typically embraced the Information Technology Infrastructure Library (ITIL) standards model to implement solutions, but the operational benefits of that traditional methodology sometimes fail to meet the more stringent budgetary objectives of a down economy. For many, it now makes sense to optimize IT by leveraging recent advancements in server and storage technologies such as virtualization, data deduplication, continuous data protection, WAN optimization, and thin provisioning.

Why optimize? And how?

When we talk about optimizing the data center, we're really discussing a paradigm shift to a more cost-effective approach to IT data services. One aspect of that shift is the application of physical abstraction to provide more efficient movement and storage of data. Once virtualization is introduced and servers become abstracted from storage, it becomes much easier to create policies that enforce specific service levels for applications residing on pooled storage resources. Physical abstraction through storage virtualization simplifies the grouping of data elements for consistency and recovery purposes. The resulting infrastructure enables IT managers to break free from the complex physical constraints of traditional storage networks and server farms.
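To make the abstraction concrete, here is a minimal sketch, in Python with hypothetical names, of how a virtualization layer might map a volume's logical extents onto a shared pool of physical extents. It is an illustration of the concept rather than any vendor's implementation; the `migrate` method shows why remapping data onto new hardware can be invisible to hosts, and allocate-on-first-write is the essence of thin provisioning.

```python
class VirtualVolume:
    """Minimal sketch: a virtual volume maps logical extents to
    (array, physical_extent) pairs drawn from a shared pool."""

    def __init__(self, pool):
        self.pool = pool   # free extents: list of (array_name, extent_id)
        self.map = {}      # logical extent -> (array_name, extent_id)

    def write(self, logical_extent):
        # Thin provisioning: allocate physical space only on first write.
        if logical_extent not in self.map:
            self.map[logical_extent] = self.pool.pop(0)
        return self.map[logical_extent]

    def migrate(self, logical_extent, new_location):
        # Data mobility: remap an extent to new hardware.
        # Hosts keep addressing the same logical extent, unaware of the move.
        self.map[logical_extent] = new_location


# Pooled capacity from two arrays, possibly from different vendors.
pool = [("arrayA", i) for i in range(4)] + [("arrayB", i) for i in range(4)]
vol = VirtualVolume(pool)

print(vol.write(0))              # first write lands on ("arrayA", 0)
vol.migrate(0, ("arrayB", 0))    # e.g., a technology refresh of arrayA
print(vol.write(0))              # same logical extent, now ("arrayB", 0)
```

Because hosts address only logical extents, the migration step models the "complete data mobility" described above: the physical location changes while operations continue.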

Virtual abstraction also introduces complete data mobility, which eases technology refresh and information lifecycle management. Data can be moved at any time, while operations continue, between any local storage hardware, or even between data centers with dissimilar storage. The same benefits you enjoy from virtualizing servers now become possible with your storage, which in the end makes both servers and storage commodities that can be purchased from any vendor at the lowest price. If all the intelligence for data services resides in the network, all those expensive storage array licenses are no longer required. When virtualization is combined with data deduplication and continuous protection, you end up with a powerful solution for cost reduction. Data deduplication makes data movement and storage more efficient, and continuous protection eliminates the backup process, as data protection is always on.
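As an illustration of why deduplication makes data movement and storage more efficient, this hypothetical sketch of block-level, hash-based deduplication stores each unique block once and rebuilds the original stream from a "recipe" of content hashes. Function names are invented for the example; real products operate on fixed- or variable-size blocks at much larger scale.

```python
import hashlib

def dedup_store(blocks):
    """Store each unique block once, keyed by its content hash."""
    store = {}    # hash -> block data (kept exactly once)
    recipe = []   # ordered hashes needed to reconstruct the stream
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block
        recipe.append(digest)
    return store, recipe

def restore(store, recipe):
    """Rebuild the original stream from the recipe of hashes."""
    return b"".join(store[d] for d in recipe)

# A backup stream where most blocks repeat (e.g., identical OS images).
stream = [b"OS image "] * 8 + [b"user data", b"OS image "]
store, recipe = dedup_store(stream)
print(len(stream), "blocks in,", len(store), "unique blocks stored")
assert restore(store, recipe) == b"".join(stream)
```

Only the unique blocks (plus the small recipe) need to be stored or sent over a WAN, which is why deduplication also shrinks backup windows and replication traffic.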

How do you choose solutions for optimization? Any candidate should let you answer yes to these basic questions:

  1. Does the solution simplify operations?
  2. Can we use the same solution across all platforms and applications?
  3. Does the solution capitalize on existing assets?
  4. Can we leverage current policies and procedures?
  5. Can we implement it based on the savings it provides rather than relying on new budgets?

Where should you direct those questions? Start by assessing these areas:

  1. IT operations (provisioning, data mobility, application rollout and support)
  2. Network overview (LAN and WAN design and costs)
  3. Backup design and operation (backup and recovery times, retention, archiving)
  4. Storage network infrastructure
  5. Disaster recovery capabilities (including application recovery)

Your assessment results should show where you can simplify operations, provide better protection and recovery for critical applications, and improve costs and service levels. Use the optimized model of leveraging technical innovation to reduce costs as a guide for improving how you use current investments and how they affect operational costs.

You're better off making your optimization choices based on the merits of the technology and its applicability within your existing environment. Don't defer to pressures stemming from vendor relationships or internal political biases. If you can avoid that, you're more likely to reap the rewards of optimization, which include dramatic increases in operational efficiency and reduced overall operating expenses. 

The rewards:

  1. Reduce backup times by more than 70 percent
  2. Improve recovery times by 95 percent
  3. Reduce possibility of data loss
  4. Optimize use of existing storage infrastructure
  5. Improve DR capabilities and costs

The goal is to reduce IT expenditures for both capital costs (CapEx) and ongoing operational expenses (OpEx). It's an achievable pursuit for data center leaders willing to shift to a data center model that makes the most sense in the current economic environment.

Christopher Poelker is the author of Storage Area Networks for Dummies, the vice president of enterprise solutions at FalconStor Software, and deputy commissioner of the TechAmerica Foundation Commission on the Leadership Opportunity in U.S. Deployment of the Cloud (CLOUD²).  

Copyright © 2012 IDG Communications, Inc.
