By now, the energy-saving advantages of environmentally friendly efforts in IT are clear - even if companies don't always practice them. Even rather token efforts that make grossly inefficient data centers slightly less so can result in significant cost savings, given the escalating cost of electricity.
But it's clear that energy efficiency will become only more important in the years ahead. That's especially true for data centers, as the demand for IT horsepower increases. So what are the best next steps in efficient data center operation? And is there a place for renewable energy sources such as solar and wind in the current data center equation?
Measuring efficiency
The common metric used in measuring the energy efficiency of a data center is a simple one: PUE (power usage effectiveness) = total facility power/IT equipment power. The ideal PUE is 1.0 - a one-to-one ratio between the energy used to power the IT equipment and the total energy expended by the data center facility. Unfortunately, most data centers operate at a PUE of about 2.0 at best, according to industry sources - meaning they draw two watts from the grid for every watt that reaches the computing equipment.
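The ratio is simple enough to express directly. The sketch below is illustrative only - the function name and the sample wattages are made up for the example, though the 2.0 and 1.2 figures echo the numbers cited in this article:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 means every watt drawn by the facility reaches the
    IT equipment; anything above 1.0 is overhead (cooling, lighting,
    power distribution losses, and so on).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,000 kW, with 500 kW reaching IT gear:
print(pue(1000, 500))  # 2.0 - the rough industry figure cited above

# A facility hitting HP's claimed POD efficiency would look like:
print(pue(600, 500))   # 1.2
```

Note that PUE measures only infrastructure overhead; it says nothing about how efficiently the IT equipment itself uses the power it receives.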
That's why vendors tout the PUE of their green compute technology. For instance, Hewlett-Packard claims its POD (performance-optimized data center) systems, which are self-contained high-end computing units, operate at a PUE of 1.2. That energy efficiency mainly comes from the systems' use of "innovative cooling techniques," says Jon Mormile, HP product marketing manager for POD. Keeping hot-running servers cool while they chug away is a major reason most data centers' PUE sits so far above the ideal 1.0.