Blade servers: Early adopters offer their tips, tricks

06.02.2007

Although OpSource is a service provider, in-house IT staffers might keep this situation in mind, especially if they engage in chargeback or other budgeting practices that require user departments to pay for the IT resources they consume.

Rowell says there are two primary drivers for a move to blades: the number of servers required to support today's applications, and the increase in CPU and memory needed to support those applications. "Faster processors and larger memory chips that come in these servers need more power to run. This combination has created a multiplier effect on the power requirements of data center deployments," he says.

To ensure that they are on target when purchasing equipment, Rowell says his team uses software tools to do a CPU/memory-to-watts analysis. "It typically takes three times the server CPU/memory capability to run an application today as it did in 2001," he explains.
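Rowell doesn't name the tools his team uses, but the arithmetic behind a CPU/memory-to-watts estimate is simple to sketch. The Python example below is only an illustration of that kind of calculation; the per-core, per-gigabyte and chassis-overhead figures, and the function names, are assumed placeholders rather than OpSource's actual model.

    # Rough CPU/memory-to-watts estimate -- illustrative only.
    # All wattage figures below are assumed placeholders, not
    # vendor-published or OpSource numbers.

    WATTS_PER_CORE = 20.0       # assumed draw per active CPU core
    WATTS_PER_GB_RAM = 3.0      # assumed draw per GB of memory
    CHASSIS_OVERHEAD_W = 300.0  # assumed fans, switches, management modules

    def blade_watts(cores, ram_gb):
        """Estimate one blade's draw from its CPU/memory configuration."""
        return cores * WATTS_PER_CORE + ram_gb * WATTS_PER_GB_RAM

    def chassis_watts(blades):
        """Sum per-blade estimates plus shared chassis overhead.

        blades is a list of (cores, ram_gb) tuples.
        """
        return CHASSIS_OVERHEAD_W + sum(blade_watts(c, r) for c, r in blades)

    if __name__ == "__main__":
        # Example: a chassis holding ten identical blades, 8 cores / 32 GB each.
        config = [(8, 32)] * 10
        print("Estimated chassis draw: %.0f W" % chassis_watts(config))

Multiplying such an estimate across every chassis planned for a rack gives a first cut at the power and cooling load, which can then be checked against the rack's circuit capacity before purchase.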

Cerner's Smith says there are other considerations with blades, too, such as rack size. "Depending on how many chassis you put in a rack, the racks are getting taller. If you don't plan for it, the doors into the rooms might not be tall enough. We've had to replace some doors," he says. The height also poses a problem for cabling. "We do our cable management overhead to make sure we have enough room," he says.

There are some Band-Aid measures that companies can put in place to ease blade servers' power and cooling burden on the data center. "You can leave blank floor tiles around the racks to get cold air in; you can get a back door that sends heat out of the room; and you can bring water into the data center to cool it. There are lots of work-arounds," Smith says.