Dead servers, bad engineering and data centers

23.10.2006

Finally, IT managers can reduce bloatware -- software with inefficient code that requires a bigger processor to get through it.

And what about cooling? Most data centers are consuming from 20 percent to 40 percent more energy than they should because the cooling systems are not well optimized. For instance, here is a common issue in a computing room with multiple cooling units: If you go up to the faceplate of a cooling unit, you may see that one unit is dehumidifying and the unit immediately adjacent to it is humidifying, so you have dueling cooling units.
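To make that failure mode concrete, here is a minimal sketch of how dueling units might be flagged automatically. It assumes hypothetical telemetry that reports each unit's current humidity mode; the unit names and readings below are invented for illustration.

    # Hypothetical telemetry snapshot: each cooling unit reports its mode.
    units = [
        {"name": "CRAC-1", "mode": "dehumidifying"},
        {"name": "CRAC-2", "mode": "humidifying"},
        {"name": "CRAC-3", "mode": "cooling"},
    ]

    # Walk adjacent pairs and flag units working against each other:
    # one adds moisture that its neighbor immediately removes.
    for left, right in zip(units, units[1:]):
        if {left["mode"], right["mode"]} == {"humidifying", "dehumidifying"}:
            print(f"{left['name']} and {right['name']} are dueling -- wasted energy")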

In terms of cooling, what issues do users have with vendors? Are there standards problems? How mature is the technology? In 2000, at 500 watts to 1,000 watts per cabinet, you could do anything and successfully cool it. You could be totally incompetent in your engineering and still successfully cool it. You may not have done it as energy-efficiently, but that was never measured, so nobody knew how badly it was done. As the density per cabinet increases, the mask is ripped off, and a user's responsibility for doing the engineering in the computer room becomes apparent. For computer rooms with raised floors, the institute has promoted hot aisles and cold aisles for over 10 years. It's accepted as an optimal solution for up to 3 kW to 4 kW per cabinet.
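As a rough illustration of the figures above, the following sketch (not from the interview) checks whether a cabinet's power draw falls within the range where a hot-aisle/cold-aisle layout is accepted as optimal; the function name and sample loads are assumptions made for the example.

    # Upper end of the 3 kW to 4 kW per-cabinet range quoted above.
    HOT_COLD_AISLE_LIMIT_KW = 4.0

    def within_hot_cold_aisle_range(cabinet_kw):
        # True when a conventional hot-aisle/cold-aisle layout should cope.
        return cabinet_kw <= HOT_COLD_AISLE_LIMIT_KW

    # Year-2000 density, a borderline cabinet, and a high-density cabinet.
    for kw in (0.75, 3.5, 8.0):
        status = ("hot/cold aisles suffice" if within_hot_cold_aisle_range(kw)
                  else "needs further engineering")
        print(f"{kw:.2f} kW per cabinet: {status}")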

But you go into computer room after computer room and you see that the equipment is lined up all facing in one direction. As a result, people have hot spots. And if you have hot spots, you go out and buy more air conditioning.