Coping with data center power demands

03.04.2006

ILM spreads the heat load by spacing out the blade server racks in each row. That leaves four empty cabinets per row, but Bermender says he has the room to do that right now. He also considers the alternative way to distribute the load -- partially filling each rack -- inefficient. "If I do half a rack, I'm losing power efficiency. The denser the rack, the greater the power savings overall because you have fewer fans," which use a lot of power, he says.

Bermender would also prefer not to use spot cooling systems such as IBM's Cool Blue, because they take up floor space and mean more cooling systems to maintain. "Unified cooling makes a big difference in power," he says.

Ironically, many data centers have more cooling capacity than they need yet still can't keep their equipment cool, says Donabedian. He estimates that improving the effectiveness of air-distribution systems can cut a data center's power costs by as much as 35 percent.

Before ILM moved, air conditioning units that faced each other across the room created dead-air zones under the 12-inch raised floor. Seven years of moves and changes had also left a subterranean tangle of live and abandoned power and network cabling that blocked airflow. At one point, the staff powered down the entire data center over a holiday weekend, moved out the equipment, pulled up the floor and spent three days removing the unused cabling and reorganizing the rest. "Some areas went from 10 [cubic feet per minute] to 100 cfm just by getting rid of the old cable under the floor," Bermender says.

Even those radical steps provided only temporary relief because the room was so overloaded with equipment. Had ILM not moved, Bermender says, it would have been forced to shift the data center to a colocation facility. Managers of older data centers can expect to run into similar problems, he says.