Clearing up misconceptions about mixing blade servers and virtualization

Combining the two technologies provides 'double the benefits,' consultant says

Barb Goldworm is the founder and president of Focus Consulting, a research and consulting firm in Boulder, Colo., that concentrates on systems and storage. She also is the conference chair of the upcoming Server Blade Summit, a symposium that will be held next month in Anaheim, Calif.

Goldworm recently co-authored a book called Blade Servers and Virtualization: Transforming Enterprise Computing While Cutting Costs (John Wiley & Sons, 2007). In an interview with Computerworld, Goldworm said that while researching the book, she talked to corporate users who had experienced how well blades and virtualization technology work together -- despite what she called outdated information that is preventing some would-be implementers from going that route. Excerpts from the interview follow:

What makes these two technologies so complementary? Server virtualization is great for taking a bunch of servers that are running under capacity, then consolidating them onto one physical server. You have fewer physical servers to run from a power, energy and management perspective. If you implement server virtualization on a blade platform, you double the benefits. For the same unit of work, you will get more benefit by implementing the technologies together.
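The consolidation arithmetic described above can be sketched as a simple packing problem. The utilization figures, the 80% headroom cap, and the first-fit-decreasing strategy below are illustrative assumptions, not anything from the interview:

```python
# Sketch: consolidate underutilized servers onto fewer physical hosts.
# The 80% capacity cap and the sample loads are hypothetical.

def consolidate(loads, capacity=0.80):
    """First-fit-decreasing: pack per-server CPU loads onto hosts."""
    hosts = []  # each entry is the running total of load packed on one host
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= capacity:
                hosts[i] = used + load
                break
        else:
            hosts.append(load)  # no existing host had room; add a new one
    return hosts

# Ten servers averaging ~15% CPU fit comfortably on two hosts.
loads = [0.15, 0.10, 0.20, 0.12, 0.18, 0.08, 0.15, 0.22, 0.10, 0.14]
print(len(consolidate(loads)))  # → 2 physical hosts needed
```

Running the same packing on a blade chassis rather than discrete rack servers is where the interview's "double the benefits" claim comes in: the same number of virtualization hosts occupies less space and draws less power per host.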

So why is the synergy between blades and virtualization a surprise to some people? There were some folks writing about why you shouldn't virtualize on blades; it was a holdover from the early days of both technologies. Back then, there were some good reasons not to use them together. But that's not the case anymore. The technology is changing very fast, both address a lot of the same issues from an IT manager's perspective, and they work very well together.

What about the heat issues that come with running a lot of blade servers? Doesn't that type of density lead to problems? There's a misconception that blades automatically generate more heat. But when you compare a single blade to a single rack server, the blade actually requires less power and less cooling than the rack server. That said, blades are denser -- you can pack many more blades into the same physical footprint -- so the cooling requirements per footprint do increase.

So yes, you do need more cooling, even though blades are more efficient. Luckily, a lot of the thermal advances from the server vendors are coming in the blade arena. IBM and HP have both significantly reduced the heat generated in their most recent blade servers compared to earlier generations. And there's a lot of interesting software coming out -- things that will monitor the thermal envelope. Some of the management tools that have been available for mobile technology are now becoming available for blades and the data center; for example, if a server is running too hot, it can be powered down gracefully.
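The "power down gracefully when running too hot" behavior can be sketched in a few lines. Everything here is a hypothetical stand-in -- `read_temp_celsius`, `shutdown_gracefully`, and the 85 °C threshold are placeholders, not real vendor chassis-management APIs:

```python
# Sketch of thermal-envelope monitoring with graceful power-down.
# Sensor reads and the shutdown call are stubs standing in for
# vendor management tools; the threshold is an assumed value.

TEMP_LIMIT_C = 85  # assumed thermal threshold

def read_temp_celsius(blade):
    return 91  # stub: a real tool would query the chassis sensors

def shutdown_gracefully(blade):
    # a real implementation would drain workloads before cutting power
    print(f"draining workloads and powering off {blade}")

def check_blade(blade):
    if read_temp_celsius(blade) > TEMP_LIMIT_C:
        shutdown_gracefully(blade)
        return True   # action taken
    return False

check_blade("blade-07")
```

In a virtualized environment, the "drain workloads" step would typically mean live-migrating guest VMs to a cooler blade before the host powers off.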

What other advice do you have for people who are concerned about the heating and cooling issues? There are some physical things you can do, like implement hot and cold aisles in the data center, to manage the airflow. If you have a raised floor, look underneath it -- there may be a lot of stuff under those floor tiles that can block or restrict the airflow. Ask your vendor for an energy audit; many of the enterprise vendors or their partners provide this service to help customers understand where they can do even simple things to help.

Are there applications that aren't good candidates for virtualization? People have started to figure out that very large databases may not be good to virtualize because of their I/O aspects. You're not going to change the server ratio -- you can virtualize your database applications, but you're not going to get 15 extremely large databases on one server [because of performance or response issues]. The only real reason it may make sense is the disaster recovery benefit. And that's a strong secondary reason -- to be able to recover your information quickly [in case of a hardware failure].

Sometimes virtualization makes sense, and sometimes it doesn't. We advise customers to choose their virtualization candidates very carefully. And when you're doing a physical-to-virtual conversion, make sure you have a way back in case it fails. Also, as you go into virtualization and are able to create new servers relatively easily, that can lead to "virtual" server sprawl. You need to be able to deprovision virtual servers. It's not about the tools as much as it is about the policies, the way you set up the virtual environment.
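The deprovisioning policy mentioned above is ultimately a retention rule. A minimal sketch, assuming a hypothetical inventory of VM names with last-used timestamps and an arbitrary 30-day idle window (both are illustrative policy choices, not anything the interview specifies):

```python
# Sketch of a policy check against virtual server sprawl:
# flag VMs idle past a retention window as deprovisioning candidates.
from datetime import datetime, timedelta

def stale_vms(vms, now, max_idle_days=30):
    """Return VM names whose last use predates the idle cutoff."""
    cutoff = now - timedelta(days=max_idle_days)
    return [name for name, last_used in vms.items() if last_used < cutoff]

now = datetime(2007, 4, 1)
vms = {
    "test-web-01": datetime(2007, 1, 15),  # idle for months
    "prod-db-01": datetime(2007, 3, 30),   # recently active
}
print(stale_vms(vms, now))  # → ['test-web-01']
```

As the interview notes, the hard part is not this check but the policy around it: deciding who owns each VM and who signs off before it is deprovisioned.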

What's ahead in virtualization? The whole market will change in the next two or three years. There are a lot more players -- AMD and Intel are adding virtualization to their chips, Microsoft's got it coming as part of Longhorn, Novell's got it as part of SUSE Linux. Virtualization will appear in lower layers -- Hitachi's got server virtualization as part of its boot process. So your server will automatically start up in a virtualized environment, via a hypervisor.

As the technology becomes entrenched, the market will shift and it will be all about the management. Today, VMware has the lead, but soon we'll start to deal with things like how I can manage my physical and virtual environments with the same software tools.
