Data center fabrics catching on, slowly
It takes some planning -- and expense -- to revamp switching gear at the enterprise level.
Computerworld - When Government Employees Health Association (GEHA) overhauled its data center to implement a fabric infrastructure, the process was "really straightforward," unlike that for many IT projects, says Brenden Bryan, senior manager of enterprise architecture. "We haven't had any 'gotchas' or heartburn, with me looking back and saying 'I wish I made that decision differently.'"
GEHA, based in Kansas City, Mo., and the nation's second-largest health and dental plan, processes claims for more than a million federal employees, retirees and their families. The main motivator behind switching to a fabric, Bryan says, was to simplify and consolidate, moving away from a legacy Fibre Channel SAN environment.
When he started working at GEHA in August 2010, Bryan says he inherited an infrastructure that was fairly typical: a patchwork of components from different vendors with multiple points of failure. The association also wanted to virtualize its mainframe environment and turn it into a distributed architecture. "We needed an infrastructure in place that was redundant and highly available," explains Bryan. Once the new infrastructure was in place and stable, the plan was to then move all of GEHA's Tier 2 and Tier 3 apps to it and then, lastly, move the Tier 1 claims processing system.
GEHA deployed Ethernet switches and routers from Brocade, and now, more than a year after the six-month project was completed, Bryan says the association has a high-speed environment and a 20-to-1 ratio of virtual machines to blade hardware.
"I can keep the number of physical servers I have to buy to a minimum and get more utilization out of them," says Bryan. "It enables me to drive the efficiencies out of my storage as well as my computing."
Implementing a data center fabric does require some planning, however. It means having to upgrade and replace old switches with new switching gear because of the different traffic configuration used in fabrics, explains Zeus Kerravala, principal analyst at ZK Research. "Then you have to re-architect your network and reconnect servers."
Moving flat and forward
A data center fabric is a flatter, simpler network that's optimized for horizontal traffic flows, compared with traditional networks, which are designed more for client/server setups that send traffic from the server to the core of the network and back out, Kerravala explains.
In a fabric model, traffic moves horizontally across the network between virtual machines, "so it's more a concept of server-to-server connectivity." Fabrics are flatter and have no more than two tiers, versus legacy networks, which have three or more tiers, he says. Storage networks have been designed this way for years, says Kerravala, and now data networks need to migrate in the same direction.
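The hop-count difference Kerravala describes can be sketched with a simple model. This is an illustrative sketch only, not GEHA's or any vendor's design: the topology assumptions (one access, aggregation and core layer in the legacy case; a single leaf-spine stage in the fabric case) are hypothetical simplifications.

```python
# Illustrative hop-count model: east-west (server-to-server) traffic in a
# classic three-tier network vs. a two-tier leaf-spine fabric.
# All topology assumptions here are hypothetical, for illustration only.

def three_tier_hops(same_access_switch: bool, same_aggregation: bool) -> int:
    """Switch hops between two servers in a three-tier design."""
    if same_access_switch:
        return 1  # both servers hang off one access switch
    if same_aggregation:
        return 3  # access -> aggregation -> access
    return 5      # access -> aggregation -> core -> aggregation -> access

def leaf_spine_hops(same_leaf: bool) -> int:
    """Switch hops between two servers in a two-tier leaf-spine fabric."""
    if same_leaf:
        return 1  # both servers hang off one leaf
    return 3      # leaf -> spine -> leaf, uniform for any pair of leaves

# Worst-case east-west path shrinks from 5 switch hops to 3,
# and every leaf-to-leaf path has the same, predictable length.
print(three_tier_hops(False, False))  # 5
print(leaf_spine_hops(False))         # 3
```

The point of the flatter design is less the absolute hop count than the uniformity: in a leaf-spine fabric every server pair on different leaves sees the same short path, which keeps latency predictable as virtual machines move around.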
One factor driving the move to fabrics is that about half of all enterprise data center workloads in Fortune 2000 companies are virtualized, and when companies get to that point, they start seeing the need to reconfigure how their servers communicate with one another and with the network.
"We look at it as an evolution in the architectural landscape of the data center network," says Bob Laliberte, senior analyst at Enterprise Strategy Group. "What's driving this is more server-to-server connectivity ... there are all these different pieces that need to talk to each other and go out to the core and back to communicate, and that adds a lot of processing and latency."
Virtualization adds another layer of complexity, he says, because it means dynamically moving things around, "so network vendors have been striving to simplify these complex environments."
When data centers can't scale
As home foreclosures spiked in 2006, Walz Group, which handles document management, fulfillment and regulatory compliance services across multiple industries, found its data center couldn't scale effectively to take on the additional growth required to serve its clients. "IT was impeding the business growth," says Chief Information Security Officer Bart Falzarano.
The company hired additional in-house IT personnel to deal with disparate systems and management, as well as build new servers, extend the network and add disaster recovery services, says Falzarano. "But it was difficult to manage the technology footprint, especially as we tried to move to a virtual environment," he says. The company also had some applications that couldn't be virtualized that would have to be managed differently. "There were different touch points in systems, storage and network. We were becoming counterproductive."