Coping with data center power demands

03.04.2006
When Tom Roberts oversaw the construction of a 9,000-square-foot data center for Trinity Health, a group of 44 hospitals, he thought the infrastructure would last four or five years. A little more than three years later, he's looking at adding another 3,000 square feet and re-engineering some of the existing space to accommodate rapidly changing power and cooling needs.

As in many organizations, Trinity Health's data center faces pressure from two directions. Business growth, combined with a trend toward automating more processes as server prices continue to drop, has stoked the demand for more servers. Roberts says that as those servers get smaller and more powerful, he can fit up to eight times more units in the same space. But the power density of those servers has exploded.

"The equipment just keeps chewing up more and more watts per square foot," says Roberts, director of data center services at Novi, Mich.-based Trinity. That has resulted in challenges meeting power-delivery and cooling needs and has forced some retrofitting.

"It's not just a build-out of space but of the electrical and the HVAC systems that need to cool these very dense pieces of equipment that we can now put in a single rack," Roberts says.

Power-related issues are already a top concern in the largest data centers, says Jerry Murphy, an analyst at Robert Frances Group Inc. in Westport, Conn. In a study his firm conducted in January, 41 percent of the 50 Fortune 500 IT executives it surveyed identified power and cooling as problems in their data centers, he says.

Murphy also recently visited CIOs at six of the nation's largest financial services companies. "Every single one of them said their No. 1 problem was power," he says. While only the largest data centers experienced significant problems in 2005, Murphy expects more data centers to feel the pain this year as administrators continue to replace older equipment with newer units that have higher power densities.

In large, multimegawatt data centers, where annual power bills can easily exceed US$1 million, more-efficient designs can significantly cut costs. In many data centers, electricity now represents as much as half of operating expenses, says Peter Gross, CEO of EYP Mission Critical Facilities Inc., a New York-based data center designer. Increased efficiency has another benefit: In new designs, more-efficient equipment reduces capital costs by allowing the data center to lower its investment in cooling capacity.

Pain points

Trinity's data center isn't enormous, but Roberts is already feeling the pain. His data center houses an IBM z900 mainframe, 75 Unix and Linux systems, 850 x86-class rack-mounted servers, two blade-server farms with hundreds of processors, and a complement of storage-area networks and network switches. Simply getting enough power where it's needed has been a challenge. The original design included two 300-kilowatt uninterruptible power supplies.

"We thought that would be plenty," he says, but Trinity had to install two more units in January. "We're running out of duplicative power," he says, noting that newer equipment is dual-corded and that power density in some areas of the data center has surpassed 250 watts per square foot.

At Industrial Light & Magic's brand-new 13,500-square-foot data center in San Francisco, senior systems engineer Eric Bermender's problem has been getting enough power to ILM's 28 racks of blade servers. The state-of-the-art data center has two-foot raised floors, 21 air handlers with more than 600 tons of cooling capacity and the ability to support up to 200 watts per square foot.

Nonetheless, says Bermender, "it was pretty much outdated as soon as it was built." Each rack of blade servers consumes between 18kW and 19kW when running at full tilt. The room's design specification called for six racks per row, but ILM is currently able to fill only two cabinets in each because it literally ran out of outlets. The two power-distribution rails under the raised floor are designed to support four plugs per cabinet, but the newer blade-server racks require between five and seven. To fully load the racks, Bermender had to borrow capacity from adjacent cabinets.

The other limiting factor is cooling. At both ILM and Trinity, the equipment with the highest power density is the blade servers. Trinity uses 8-foot-tall racks. "They're like furnaces. They produce 120-degree heat at the very top," Roberts says. Such racks can easily top 20kW today, and densities could exceed 30kW in the next few years.

What's more, for every watt of power used by IT equipment in data centers today, another watt or more is typically expended to remove waste heat. A 20kW rack requires more than 40kW of power, says Brian Donabedian, an environmental consultant at Hewlett-Packard Co. In systems with dual power supplies, additional power capacity must be provisioned, boosting the power budget even higher. But power-distribution problems are much easier to fix than cooling issues, Donabedian says, and at power densities above 100 watts per square foot, the solutions aren't intuitive.
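
The budgeting arithmetic behind those numbers is simple enough to sketch. The snippet below is a minimal illustration, not Donabedian's own method; the 20kW rack, the round burden factor of 2.0 and the dual-feed assumption are all placeholders you would replace with your own measurements.

```python
# Rough per-rack power budgeting, using the rule of thumb quoted above:
# every watt of IT load needs roughly another watt for cooling and
# distribution losses. All values here are illustrative assumptions.

def rack_power_budget(it_load_kw: float, burden_factor: float = 2.0,
                      redundant_feeds: int = 2) -> dict:
    """Estimate facility power and provisioned feed capacity for one rack."""
    facility_kw = it_load_kw * burden_factor       # IT load plus cooling/distribution overhead
    provisioned_kw = it_load_kw * redundant_feeds  # capacity reserved so each feed can carry the full load
    return {"it_load_kw": it_load_kw,
            "facility_kw": facility_kw,
            "provisioned_feed_kw": provisioned_kw}

if __name__ == "__main__":
    # A hypothetical 20kW blade rack, as in the example above
    print(rack_power_budget(20.0))
    # -> {'it_load_kw': 20.0, 'facility_kw': 40.0, 'provisioned_feed_kw': 40.0}
```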

One common mistake data center managers make, for example, is to place exhaust fans above the racks. But unless the ceiling is very high, those fans can make the racks run hotter by interfering with the operation of the room's air conditioning system. "Having all of those produces an air curtain from the top of the rack to the ceiling that stops the horizontal airflow back to the AC units," Roberts says.

Trinity addressed the problem by using targeted cooling. "We put in return air ducts for every system, and we can point them to a specific hot aisle in our data center," he says.

ILM spreads the heat load by spacing the blade server racks in each row. That leaves four empty cabinets per row, but Bermender says he has the room to do that right now. He also thinks an alternative way to distribute the load -- partially filling each rack -- is inefficient. "If I do half a rack, I'm losing power efficiency. The denser the rack, the greater the power savings overall because you have fewer fans," which use a lot of power, he says.

Bermender would also prefer not to use spot cooling systems like IBM's Cool Blue, because they take up floor space and result in extra cooling systems to maintain. "Unified cooling makes a big difference in power," he says.

Ironically, many data centers have more cooling capacity than they need but still can't cool their equipment, says Donabedian. He estimates that by improving the effectiveness of air-distribution systems, data centers can save as much as 35 percent on power costs.

Before ILM moved, the air conditioning units, which faced each other across the room, created dead-air zones under the 12-inch raised floor. Seven years of moves and changes had left a subterranean tangle of live and abandoned power and network cabling that was blocking airflow. At one point, the staff powered down the entire data center over a holiday weekend, moved out the equipment, pulled up the floor and spent three days removing the unused cabling and reorganizing the rest. "Some areas went from 10 [cubic feet per minute] to 100 cfm just by getting rid of the old cable under the floor," Bermender says.

Even those radical steps provided only temporary relief, because the room was so overloaded with equipment. Had ILM not moved, Bermender says, it would have been forced to move the data center to a collocation facility. Managers of older data centers can expect to run into similar problems, he says.

That suits Marvin Wheeler just fine. The chief operations officer at Terremark Worldwide Inc. manages a 600,000-square-foot collocation facility designed to support 100 watts per square foot.

"There are two issues. One is power consumption, and the other is the ability to get all of that heat out. The cooling issues are the ones that generally become the limiting factor," he says.

With 24-inch raised floors and 20-foot-high ceilings, Wheeler has plenty of space to manage airflows. Terremark breaks floor space into zones, and airflows are increased or decreased as needed. The company's service-level agreements cover both power and environmental conditions such as temperature and humidity, and it is working to offer customers Web-based access to that information in real time.

Terremark's data center consumes about 6 megawatts of power, but a good portion of that goes to support dual-corded servers. Thanks to redundant power designs, "we have tied up twice as much power capacity for every server," Wheeler says.

Terremark hosts some 200 customers, and the equipment is distributed based on load. "We spread out everything. We use power and load as the determining factors," he says.

But Wheeler is also feeling the heat. Customers are moving to 10- and 12-foot-high racks, in some cases increasing the power density by a factor of three. Right now, Terremark bills based on square footage, but he says collocation companies need a new model to keep up. "Pricing is going to be based more on power consumption than square footage," Wheeler says.

According to EYP's Gross, the average power consumption per server rack has doubled in the past three years. But there's no need to panic -- yet, says Donabedian.

"Everyone gets hung up on the dramatic increases in the power requirements for a particular server," he says. But they forget that the overall impact on the data center is much more gradual, because most data centers only replace one-third of their equipment over a two- or three-year period.

Nonetheless, the long-term trend is toward even higher power densities, says Gross. He points out that 10 years ago, mainframes ran so hot that they had to be water-cooled, before a shift from bipolar to more efficient CMOS technology bailed them out.

"Now we're going through another ascending growth curve in terms of power," he says. But this time, Gross adds, "there is nothing on the horizon that will drop that power."

Sidebar

Doing the math

Here's how data center power costs can add up:

Power required by data center equipment: 3MW

Power-distribution losses, cooling, lighting: 3MW

Total power requirement: 6MW

Cost per kilowatt-hour: $0.06

Annual electricity cost for 24/7 operation: $3.15 million

Annual savings from a 10 percent increase in efficiency: $315,000

In a typical data center, every watt of power consumed by IT equipment requires another watt of power for overhead, including losses in power distribution, cooling and lighting. Depending on efficiency, this "burden factor" typically ranges from 1.8 to 2.5 times the power drawn by the IT equipment itself.

Assuming a 1:1 ratio, a 3MW data center will require 6MW of power to operate. At 6 cents per kilowatt-hour, that adds up to $3.15 million annually. However, in some areas of the country, average costs are closer to 12 cents per kilowatt-hour, which would double the cost. With those numbers, even a modest 10 percent improvement in efficiency can yield big savings.
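
For readers who want to reproduce the figures above, here is the same arithmetic as a short script. The inputs are simply the sidebar's own assumptions (a 3MW IT load, a 1:1 overhead ratio and 6 cents per kilowatt-hour), not data from any particular facility.

```python
# The sidebar arithmetic, spelled out. Change the inputs to match your own site.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def annual_power_cost(it_load_mw: float, burden_factor: float, price_per_kwh: float) -> float:
    total_kw = it_load_mw * 1000 * burden_factor   # IT load plus overhead, in kilowatts
    return total_kw * HOURS_PER_YEAR * price_per_kwh

cost = annual_power_cost(it_load_mw=3.0, burden_factor=2.0, price_per_kwh=0.06)
print(f"Annual cost: ${cost:,.0f}")                        # Annual cost: $3,153,600
print(f"10% efficiency gain saves: ${cost * 0.10:,.0f}")   # about $315,000 per year
```

Doubling the price to 12 cents per kilowatt-hour doubles both the bill and the value of each percentage point of efficiency.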

With average per-rack power consumption doubling over the past three years, skyrocketing power bills are turning the heads of chief financial officers, particularly in companies with large data centers. Such scrutiny is less prevalent at financial institutions, where reliability is still the most important factor. But other industries, such as e-commerce, are much more sensitive to the cost of electricity, says Peter Gross, CEO of EYP Mission Critical Facilities.

How many servers does it take to hit 3MW? Assuming today's average of 5kW per rack, you would need 600 cabinets with 15 servers per enclosure, or 9,000 servers total. A new data center designed for 100 watts per square foot would require 30,000 square feet of raised-floor space to accommodate the load.
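
The sizing figures work out the same way. This quick check assumes only the numbers stated above: 5kW per rack, 15 servers per cabinet and a 100-watt-per-square-foot design density.

```python
# Back-of-the-envelope sizing for a 3MW IT load, using the sidebar's assumptions.

it_load_w = 3_000_000          # 3MW of IT load
watts_per_rack = 5_000         # today's average per-rack draw
servers_per_rack = 15
design_watts_per_sqft = 100

racks = it_load_w / watts_per_rack                       # 600 cabinets
servers = racks * servers_per_rack                       # 9,000 servers
raised_floor_sqft = it_load_w / design_watts_per_sqft    # 30,000 square feet

print(f"{racks:.0f} racks, {servers:.0f} servers, {raised_floor_sqft:,.0f} sq ft of raised floor")
# -> 600 racks, 9000 servers, 30,000 sq ft of raised floor
```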

Sidebar

Eight tips for a more efficient data center

Review your assets. Audit the status of all applications and eliminate dead wood by shutting down old servers that aren't needed. This can cut power consumption in some organizations by as much as 30 percent.

Virtualize. Consolidate servers using virtualization and asset management, and shed servers that are running applications no longer in use. Virtualization allows greater utilization of the servers that remain.

Go green. Specify high-efficiency power supplies for servers in requests for proposals. Ask for systems that deliver higher performance per watt.

Consider DC. Rack systems with centralized DC power cut energy use and heat inside the cabinet by moving power conversion away from servers.

Batten down and modernize. Tighten up racks by using blanking plates and sealing holes to prevent air leaks. Consider newer rack designs that optimize airflows.

Clean up downstairs. Clear raised-floor passages of cabling or other obstructions to airflow.

Go to 208V. Upgrade to more efficient 208-volt, three-phase power, if you haven't already. The higher voltage requires a lower current, which reduces losses (see the sketch after these tips). The small savings of 1 percent or 2 percent add up in a large data center.

Coordinate cooling. In data centers with high-density heat loads, consider hiring a professional to measure airflows and correct air conditioning problems. Simply adding more air conditioning doesn't always help.
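
As a rough illustration of why the 208-volt tip pays off: for a fixed load, current falls as voltage rises, and resistive losses in the distribution path scale with the square of the current. The sketch below uses a single-phase approximation and a placeholder path resistance of 0.05 ohms, chosen only for illustration; real savings depend on the actual wiring and power-distribution gear.

```python
# Why higher distribution voltage helps: for a fixed load, I = P / V,
# and resistive losses in cabling and connections are P_loss = I^2 * R.
# The 0.05-ohm path resistance is a placeholder, not a measured figure.

def feed_loss_watts(load_w: float, volts: float, path_resistance_ohms: float = 0.05) -> float:
    current = load_w / volts              # single-phase approximation
    return current ** 2 * path_resistance_ohms

for volts in (120, 208):
    loss = feed_loss_watts(5_000, volts)  # a hypothetical 5kW rack feed
    print(f"{volts} V feed: {loss:.0f} W lost in distribution")
# 120 V feed: ~87 W lost; 208 V feed: ~29 W lost for the same 5kW load
```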

Sources: Robert Frances Group; ILM; Insight64