Power Over Ethernet: Promise and Problems

By Sandra Gittlen
May 17, 2006 12:00 PM ET

Computerworld -  One technology that IT managers should start to dig into, if they haven't already, is power over Ethernet. As devices stray farther and farther from power outlets, this emerging technology is gaining traction throughout the enterprise.

IP phones, wireless access points, security cameras, card scanners and other devices draw power from switches through standard Ethernet cabling -- hence the name "power over Ethernet."

Today, the most power a switch can deliver over that cable is just over 15 watts. But the IEEE is working on a standard that would boost that figure to around 50 watts, opening the technology to even more applications.
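To put those figures in context, the current standard, IEEE 802.3af, sorts powered devices into classes, and the switch reserves slightly more power per port than the device itself may draw, to cover losses in the cable run. The short sketch below encodes the 802.3af class table in Python; the dictionary and function names are illustrative, not drawn from any vendor's software.

    # Sketch of the IEEE 802.3af power classes (names are illustrative).
    # The switch (the "PSE") reserves a bit more per port than the device
    # (the "PD") may draw, to cover resistive losses in the cabling.
    POE_CLASSES = {
        0: {"pd_max_w": 12.95, "pse_alloc_w": 15.4},  # default/unclassified
        1: {"pd_max_w": 3.84,  "pse_alloc_w": 4.0},
        2: {"pd_max_w": 6.49,  "pse_alloc_w": 7.0},
        3: {"pd_max_w": 12.95, "pse_alloc_w": 15.4},
    }

    def pse_allocation_w(poe_class):
        """Power (watts) a switch reserves for a device of this class."""
        return POE_CLASSES[poe_class]["pse_alloc_w"]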

As attractive as this all seems, power over Ethernet poses challenges for wiring closet and data center operations -- a part of the enterprise that doesn't need any more power problems. There are many issues to consider, including the heating and cooling of the switches, backup power supplies, and the actual load each switch can handle.
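One of those issues lends itself to quick arithmetic. Power converted for the PoE ports is never delivered with perfect efficiency; the delivered watts turn into heat at the far-end devices, while the conversion loss turns into heat right there in the wiring closet. A rough sketch, assuming a hypothetical fully loaded 48-port switch with an 85% efficient supply:

    # Rough estimate of the extra cooling load a fully loaded PoE switch
    # adds to a wiring closet. The port count, per-port draw and supply
    # efficiency are assumptions for illustration, not measured figures.
    WATTS_TO_BTU_PER_HR = 3.412  # standard watts-to-BTU/hr conversion

    def closet_heat_btu_hr(ports=48, watts_per_port=15.4, efficiency=0.85):
        delivered = ports * watts_per_port  # power sent down the cables
        drawn = delivered / efficiency      # power pulled from the wall
        loss = drawn - delivered            # stays in the closet as heat
        return loss * WATTS_TO_BTU_PER_HR

    print(round(closet_heat_btu_hr()))  # roughly 445 BTU/hr of added heat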

In fact, AFCOM, an association for data center professionals, recently released its top five predictions for the future of the data center industry, and among them was one regarding power: "Over the next five years, power failures and limits on power availability will halt data center operations at more than 90% of all companies," the group predicts.

I recently spoke with Rick Sawyer, director of data center technology for American Power Conversion (better known as APC) and a board member of AFCOM's Data Center Institute, about the state of power over Ethernet and how it ties into overall power concerns.
 
How do you perceive the maturity level for power over Ethernet today?
It is really in the infancy stage and limited to low-power applications.

What are some considerations when rolling out this technology?
Since the capacity to deliver power over Ethernet is very limited (in the milliamp range), the primary consideration is to know what the application's electrical load is, how all of the loads aggregate on the system, and whether the system is capable of delivering that degree of power.
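Sawyer's point about aggregation boils down to a simple budget check: add up what every attached device can draw and compare the total with what the switch can supply. A minimal sketch, using made-up device loads and a hypothetical 370-watt PoE budget:

    # Minimal aggregation check: does the sum of the device loads fit
    # within the switch's PoE power budget? All figures are hypothetical.
    def within_budget(loads_w, budget_w):
        return sum(loads_w) <= budget_w

    phones = [6.5] * 24          # e.g., two dozen IP phones
    access_points = [12.95] * 8  # e.g., eight wireless access points
    print(within_budget(phones + access_points, budget_w=370.0))  # True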

Are there requirements that need to be met depending on the application?
Obviously the condition of the power is an issue -- filtering unwanted power characteristics, whether from the source, the conducting network or from the loads themselves. Filtering is potentially a huge issue: how do you protect the network from power problems induced by all of the attached loads? For instance, what if there is a lightning strike at some point on the network that transmits an impulse through the power system to the server sources? You could have the network fail because of an externally connected device, potentially ruining millions of dollars' worth of attached hardware.


