
Sandy wounded servers, some grievously, say services firms

November 7, 2012 11:58 AM ET

"Using a baseline of 68 degrees fahrenheit as the benchmark or baseline for failures, a temperature of 104 degrees represents a 66% increase in failures. This seems like a big increase in failures. However, if the average failure rate is 4%, then operating at 104 degrees would result in the failure rate rising from 4% to 7%," said Beaty.

However, he noted that the failure rate also depends on duration.

There are 8,760 hours in a year, he said. "If 10%, or 876 hours, were at 104 degrees and the remainder at 68 degrees, the total failure rate for the year would be a ratio or weighted average. This means the 66% rise would be a 6.6% rise in failures. At a 4% failure base, this means 4.66% failures rather than 4%," said Beaty.
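For readers who want to check that arithmetic, here is a minimal Python sketch of the weighted-average estimate, assuming (as the quote suggests) that the 66% figure is a relative increase in failure rate while running at 104 degrees. The names and constants are illustrative, not from any vendor tool.

    # Rough sketch of Beaty's time-weighted failure estimate.
    # Assumption: the 66% figure is a relative increase in the annual
    # failure rate at 104 degrees F versus the 68-degree baseline.
    HOURS_PER_YEAR = 8760
    BASE_RATE = 0.04        # 4% annual failure rate at 68 degrees F
    HOT_MULTIPLIER = 1.66   # 66% more failures at 104 degrees F

    def weighted_failure_rate(hot_hours):
        """Blend the baseline and elevated rates by time spent at each temperature."""
        hot_fraction = hot_hours / HOURS_PER_YEAR
        hot_rate = BASE_RATE * HOT_MULTIPLIER  # about 6.6%, the "7%" in the quote
        return (1 - hot_fraction) * BASE_RATE + hot_fraction * hot_rate

    # 10% of the year (876 hours) spent at 104 degrees F:
    print(f"{weighted_failure_rate(876):.2%}")  # about 4.26%

A strict weighted average comes out near 4.3%; Beaty's quoted 4.66% appears to add the hot-hours contribution on top of the full-year base. Either way, the point stands: a brief excursion to 104 degrees moves the annual failure rate by well under a percentage point.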

Vendors can use equipment built to withstand much higher temperatures. All equipment is manufactured to the ASHRAE Class A1 standard, which has an upper limit of 89.6 degrees, and increasingly equipment is being made to meet the Class A2 standard, which allows up to 95 degrees.

There has been a trend to raise data center temperatures as part of a push to use less energy, and it is becoming more common for data centers to operate at 72 to 75 degrees, said Beaty.

IT managers have experimented with running servers in sheds and tents, exposed to temperature and humidity extremes. More often than not, these limited efforts have surprised people with the equipment's durability.

Nonetheless, equipment operating at higher than recommended temperatures, such as in a data center hot spot, could see failures, said Scott Kinka, CTO of cloud services company Evolve IP.

"Heat equals age in the computer component world," said Kinka, and equipment that has been exposed to high temperatures, such as what may occur in a data center hot spot, may be at a higher risk of component failure at some point and a manager may see an uptick in component problems.

But the root cause of such a failure may be hard to trace, because it could happen months later, said Kinka. "The hard part about this one is you are just not going to know," he said.

Patrick Thibodeau covers SaaS and enterprise applications, outsourcing, government IT policies, data centers and IT workforce issues for Computerworld. Follow Patrick on Twitter at @DCgov, or subscribe to Patrick's RSS feed. His e-mail address is pthibodeau@computerworld.com.



