Green Room feature

The old adage ‘you can’t improve what you can’t measure’ seems particularly apt when it comes to bringing data centres up to a point where their power consumption, and corresponding carbon emissions, can be properly reported back to the business.

Frost & Sullivan ICT practice research director, Arun Chandrasekaran, says that, in light of the impending carbon tax, it is essential for data centre owners, operators and enterprises to begin by actually measuring the power usage effectiveness (PUE) of their facilities.

In theory, all the power that goes into a data centre should go into the IT equipment located there, but in the real world some of it is lost in distribution and some is siphoned off for cooling and lighting. This means that understanding PUE, the total facility power divided by IT equipment power, is the first step in understanding data centre energy use.

“The ideal value for PUE should be one because all of the facility power in a real world scenario, or rather in an idealistic situation, should go towards IT power — but it doesn’t,” Chandrasekaran says. “At the end of the day, if you don’t even know what value it is, if you’re not even measuring it right, how can you improve on it?”
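
In practice, the PUE calculation itself is a single division over two metered figures. The minimal Python sketch below illustrates the arithmetic; the function name and the 1,200 kW and 800 kW readings are hypothetical, invented purely for illustration.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical meter readings: 1,200 kW drawn by the whole facility, 800 kW reaching the IT gear.
print(round(pue(1200, 800), 2))  # 1.5; an ideal facility would approach 1.0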

For Canberra Data Centres managing director, Greg Boorer, getting a grip on energy consumption means measuring power usage at multiple points across the facility; that is, multiple points along the “electrical consumption pathway”.

“What you’re trying to break down is how much energy actually gets to the IT equipment, how much energy then goes into the facilities, so that’s cooling, lighting, supporting infrastructure but not IT equipment,” he says.

“If you have multiple points of measurement — at different sub-distribution boards, at different sides of the UPSes (uninterruptible power supplies), at the power distribution level within each rack — you can actually determine where pockets of inefficiencies lie within your own facility, then you can take measures to remove or remediate any sort of inefficiencies.

“[But] the big challenge with all reporting now is separating IT electrical consumption from electrical consumption associated with things like your cooling, chillers, lighting, security systems and the like.”
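
As a rough illustration of the multi-point metering Boorer describes, the Python sketch below sums hypothetical sub-meter readings into an IT bucket and a facility-overhead bucket; the meter names and kilowatt figures are invented for the example, not drawn from any real facility.

# Hypothetical sub-meter readings (kW) taken along the electrical consumption pathway.
meters = {
    "ups_output_to_rack_pdus": 780.0,      # power actually delivered to IT equipment
    "chillers_and_crac_units": 340.0,      # cooling
    "lighting": 15.0,
    "security_and_building_systems": 10.0,
    "distribution_losses": 55.0,
}

it_kw = meters["ups_output_to_rack_pdus"]
overhead_kw = sum(v for k, v in meters.items() if k != "ups_output_to_rack_pdus")
total_kw = it_kw + overhead_kw

# Separating IT consumption from facility overhead also yields the PUE directly.
print(f"IT load: {it_kw} kW, facility overhead: {overhead_kw} kW, PUE: {total_kw / it_kw:.2f}")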

Greening your data centre

It is no secret that cooling takes the lion’s share of power consumption in a data centre. According to Oracle master principal systems architect, Mike Coleman, as much as 60 per cent of the power to a data centre goes into cooling, rather than running, IT equipment. Hence, it is imperative for data centre operators to lower the amount of energy used in keeping kit cool.

The main issue, Coleman says, is underfloor cooling, a practice considered outdated but still widely used in ‘traditional’ data centres. With underfloor cooling, cool air is pumped under the floor, drawn up to cool the equipment and then expelled back into the room. This mixes the hot air leaving the IT equipment with the cool air from the conditioning units, making for a fairly inefficient system.

To add to this, the energy consumption — and corresponding heat output — of modern equipment racks has dramatically increased in recent years. Where underfloor cooling could cope with equipment using around five kilowatts per rack, today’s densities have increased as much as eightfold.

“Today’s equipment can be up to 30 to 40 kilowatts per rack, so you can’t cool that with underfloor cooling,” Coleman says. “The idea is to take the cooling to the source and what that actually means is the equipment in the rack that’s generating the heat needs to be cooled directly.”


Underfloor cooling can also create hot spots, which are areas in a data centre that don’t get cooled properly as a result of the mixed cool and hot air.

To eliminate these hot spots and cool data centres effectively, the most energy efficient option is to go modular, an approach whose appeal lies in increased agility, scalability and direct in-rack cooling.

“Modular data centres, by their very nature of being, grow as the user base grows, because you’re exactly using what is needed for those set of users rather than trying to switch on everything,” Frost & Sullivan’s Chandrasekaran explains.

With a modular cooling approach, the racks and equipment are arranged into points of delivery, or PODs. That is, the racks are arranged within an enclosed space, with a door on each end of the hot aisle and a ceiling on top so that everything is contained. The cooling is then brought to each rack from overhead or in-line units.

“If you try and use the traditional methods of cooling, which is underfloor, then obviously there’s inefficiency there because you’re actually trying to cool the air in the room itself rather than remove the heat from the equipment,” Oracle’s Coleman says. “You will get on average 40 to 60 per cent reduction in power or air conditioning usage by doing modular design.”
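
A hypothetical back-of-envelope calculation shows how a cooling reduction of that order flows through to PUE. The figures below are invented and simply echo the proportions quoted earlier in the article (cooling at roughly 60 per cent of total draw, cut by half); they are a sketch, not measured results.

# Assume a 1,000 kW facility where cooling takes 60 per cent of the draw.
it_kw = 400.0
cooling_kw = 600.0

def pue(it, overhead):
    # PUE = total facility power / IT equipment power
    return (it + overhead) / it

before = pue(it_kw, cooling_kw)
after = pue(it_kw, cooling_kw * 0.5)  # 50 per cent cut, mid-range of the quoted 40 to 60 per cent
print(f"PUE before: {before:.2f}, after: {after:.2f}")  # 2.50 -> 1.75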

Although the modular solution is best suited to greenfield data centres, existing facilities can also adopt the modular design to minimise hot spots by organising racks and equipment into hot and cold aisles.

In most cases, a cold aisle operates at the front of the rack where air is drawn in, while the hot aisle is located at the back where hot air is vented, captured and exhausted from the building.

“What you’re actually doing is constantly drawing in cool air from the front, you’re containing the hot air, and then you’re expelling it from the hot aisle by some external extraction method, so you’re not giving the opportunity for the air to mix,” Coleman explains.

Data centre operators can also reduce cooling requirements by raising the temperature of their facilities. The optimal range is between 20 and 27 degrees Celsius, while temperatures up to approximately 31 degrees are still considered safe.

Even reducing lighting levels and installing sensors so that lights come on only when people are in the room will result in measurable savings, Coleman says. In addition, he says the physical consolidation of data centre resources and infrastructure, such as servers and switches, together with a move to virtualisation, will increase efficiency.

“The next phase then is to optimise the actual architecture itself, so we call that a service oriented architecture approach,” he says. “So you deliver IT as a service rather than a physical device. The next level from that is heading into what’s commonly called a Cloud these days… The whole thing then becomes a service. That’s the general direction of IT.”

For Frost & Sullivan’s Chandrasekaran, the process of improving the efficiency of an existing data centre is, at its simplest, three-fold: buy energy efficient equipment, retire old legacy hardware, and move to virtualisation.

“It starts with very simple things, like for one, I hope people will start buying energy efficient equipment, data networking equipment, storage equipment and so on,” he says. “The second step is to retire legacy systems. A lot of people don’t want to retire systems because they’re running fine, but it sucks a lot of power. Of course the third, which has been talked about a lot, is the move to virtualisation [which] saves energy and it saves space as well in a data centre.”

While public debate about the carbon tax has focused on its negative aspects, Chandrasekaran views the incoming tax as an incentive and impetus to accelerate Green data centre practices.

“At the end of the day, cost minimisation is a huge focus,” he says. “The question is, ‘Why would you do it?’ unless there’s a compliance or strong organisation mandate to do it. You wouldn’t do it because it’s not in your best interest from a monetary standpoint, but maybe from a social standpoint.”
