Scottrade turns up the heat, saves energy

December 18, 2008
Temperatures are rising in the online brokerage's data center -- and that's a good thing. The move has allowed the St. Louis-based company to reap enormous energy savings while increasing reliability.

Six months ago, Scottrade hired Glumac to construct a computational fluid dynamics (CFD) model of its data center. The model provided a complete picture of the thermal airflows in the room. Samuel Graves, chief data center mechanical engineer at Glumac, oversaw the effort. "Much can be learned from a thermal CFD model, and going forward, the model becomes an excellent tool to help determine the effectiveness of potential solutions," he says.

As is the case in many large data centers, Scottrade was overcooling the room. The solution: Fix the airflow problems and hot zones, and turn up the computer room air conditioning (CRAC) unit thermostat. That sounds scary, but Patterson says the recommendations cut power consumption by 8% and improved equipment reliability -- all without affecting the performance of the data center. Power and cooling infrastructure are a large piece of the data center's overall operating cost. The hard dollar savings from some fairly straightforward changes were "significant," Patterson says.
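To put that 8% figure in context, a back-of-the-envelope calculation along the lines of the sketch below shows how quickly such a reduction adds up. The load, utility rate, and hours are assumptions for illustration, not Scottrade's actual numbers:

    # Back-of-envelope estimate -- every figure here is an illustrative
    # assumption, not a number reported by Scottrade.
    facility_load_kw = 500       # assumed total data center electrical load, kW
    hours_per_year = 24 * 365
    rate_per_kwh = 0.08          # assumed utility rate, $/kWh
    savings_fraction = 0.08      # the 8% reduction cited in the article

    annual_kwh = facility_load_kw * hours_per_year
    annual_savings = annual_kwh * rate_per_kwh * savings_fraction
    print(f"Estimated annual savings: ${annual_savings:,.0f}")
    # Roughly $28,000 a year under these assumptions.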

Scottrade didn't wring those savings out of an old, poorly designed facility. Quite the contrary: Patterson achieved the efficiency gains in a new data center that Scottrade had rolled out in 2007. The cost benefits weren't limited to power and cooling bills: Scottrade also reduced the load on backup power systems and cut the number of backup batteries needed.

The savings achieved by Scottrade are actually on the low side, says Graves. "Scottrade was already doing a lot of things right. Glumac has seen some data centers achieve a 25% decrease in cooling costs."

Three steps to savings

Step 1: The CFD model identified three key areas for improving efficiency. First, it revealed a layer of hot air floating in the upper half of the data center space. That hot layer started at a height of about 5.5 to 6 feet and extended all the way to the ceiling, some 10 feet from the floor. That meant that the equipment at the top of Scottrade's racks was sitting in the hot-air cloud.

There were other problems, too. That hot-air layer was circulating over the tops of the racks, spilling over from the hot aisle, which is supposed to return hot air to the air conditioning system, into the cold aisle, which is supposed to supply only chilled air from the CRAC units. As a result, equipment in the tops of the racks was running warmer than it should have been.

Step 2: The second issue was the configuration of the racks themselves. Not all racks were fully populated, but equipment was always concentrated at the top of the racks, where it was subject to those higher temperatures. In fact, says Patterson, the hottest running servers tended to be mounted at the top of racks, where cooling efficiency was lowest. To address that, Scottrade had lowered the computer room air conditioning system temperature settings, in effect overchilling the rest of the room. "Scottrade was running the overall data center temperatures colder than necessary to keep the temperatures at the top of the racks within acceptable ranges," Graves says.

Step 3: Finally, the balance between the heat load produced by the server racks and the quantity of chilled air supplied to the cold aisle was out of whack. Engineers redistributed the perforated tiles on the cold-aisle floor so that the airflow delivered to each rack matched its heat output. "A thermal balance was noticed immediately," Graves says.
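The sizing logic behind that redistribution can be sketched with the standard sensible-heat rule of thumb (airflow in CFM is roughly 3.16 times the load in watts divided by the temperature rise in degrees Fahrenheit). The rack loads and per-tile airflow below are assumptions for illustration, not Scottrade's figures:

    # Rough sketch of the airflow-balancing idea: size perforated-tile airflow
    # to each rack row's heat load. All inputs are hypothetical.
    RACK_LOADS_KW = {"row_A": 4.0, "row_B": 8.0, "row_C": 12.0}  # assumed loads
    DELTA_T_F = 20    # assumed supply-to-return temperature rise, deg F
    TILE_CFM = 500    # assumed airflow through one perforated tile, CFM

    def required_cfm(load_kw, delta_t_f):
        """Sensible-heat rule of thumb: CFM ~= 3.16 * watts / delta-T (deg F)."""
        return 3.16 * load_kw * 1000 / delta_t_f

    for row, load in RACK_LOADS_KW.items():
        cfm = required_cfm(load, DELTA_T_F)
        tiles = max(1, round(cfm / TILE_CFM))
        print(f"{row}: {load:.0f} kW -> {cfm:.0f} CFM -> about {tiles} perforated tiles")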

Air conditioning systems perform most efficiently when the air temperature differentials are higher, so Glumac implemented changes that made the cold aisles colder and the hot aisles a few degrees warmer. "We weren't optimizing the heat-to-cooling ratio that the AC units needed. You have to get that balance," Patterson says.
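The same rule of thumb explains why a wider spread between the hot and cold aisles pays off: for a fixed heat load, a bigger temperature difference means less air has to be moved, and with variable-speed fans, fan power falls roughly with the cube of airflow. The load and temperature spreads below are illustrative assumptions:

    # Why a bigger hot-aisle/cold-aisle spread helps. All figures are
    # illustrative assumptions, not Scottrade's.
    LOAD_KW = 200    # assumed heat load handled by the CRAC units

    def cfm_for(delta_t_f):
        # Same sensible-heat rule of thumb: CFM ~= 3.16 * watts / delta-T (deg F).
        return 3.16 * LOAD_KW * 1000 / delta_t_f

    narrow = cfm_for(15)                     # assumed original 15 deg F spread
    wide = cfm_for(20)                       # assumed wider 20 deg F spread
    fan_power_ratio = (wide / narrow) ** 3   # fan affinity-law approximation
    print(f"Airflow falls to {wide / narrow:.0%} of the baseline, "
          f"fan power to roughly {fan_power_ratio:.0%}")
    # Under these assumptions, airflow drops to 75% and fan power to about 42%.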

To address that thermal layer problem, Glumac engineers adjusted the CRAC system by raising the height of the air-return intakes by 1.5 to 2 feet. That pushed the thermocline layer above the tops of the racks, providing a better thermal environment for equipment located there.

Next up: Once the airflow balance was achieved in the aisles, engineers turned their attention to what was inside the racks. "There's an optimal temperature point where you want your chips running," says Patterson. Scottrade tended to have the hottest, most power-hungry devices in the top of the racks, where they received the warmest air. Scottrade reorganized the racks, moving power-hungry servers lower to balance the heat distribution within the racks.
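The placement logic amounts to a simple sort. The sketch below, using a hypothetical server inventory rather than Scottrade's, puts the most power-hungry boxes lowest in the rack, where the supply air is coolest:

    # Illustrative sketch (not Scottrade's actual process): hottest servers
    # go to the bottom of the rack, where the supply air is coolest.
    servers = [
        ("web-01", 250), ("app-01", 400), ("db-02", 700), ("db-01", 750),
    ]  # (name, watts) -- hypothetical inventory

    # Rack slots are numbered from the bottom upward.
    layout = sorted(servers, key=lambda s: s[1], reverse=True)
    for slot, (name, watts) in enumerate(layout, start=1):
        print(f"U{slot:02d}: {name} ({watts} W)")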

It also helps that Scottrade's new data center uses energy-efficient servers. The 1U, 2U and blade server models in use at Scottrade sport low-voltage processors, variable-speed fans that adjust to processing power consumption, and high-efficiency power supplies. (Those units come with embedded on ROM, making setup easier.) "It draws less energy, and it keeps the internal temperatures in the boxes cooler," Patterson says.

But there's another advantage to newer servers that data center managers may miss: They run fine at higher operating temperatures than the previous generation of equipment. That means that server racks can run warmer. "Data center operators who take advantage of these higher-temperature capabilities can gain significant energy efficiencies in their cooling infrastructure," says Graves.

Those changes "improved our power consumption [and] our air conditioning costs, and reduced our total costs of running our business," Patterson says. At Scottrade, low latency is critical to keeping its commitment to completing trades fast. It relies on the highest possible server performance to support split-second transactions for its customers. Fortunately, the redesign required no compromises: Moving to a hotter data center didn't reduce performance or affect the longevity of data center equipment, he says. Instead, he believes the changes improved reliability by keeping equipment within optimal operating ranges.

Patterson is taking the same approach at the company's new backup data center, which is located in Scottsdale, Ariz. Racks are already being assembled in the new facility, and the designs for which servers go into which racks are nearly complete. The servers will be installed in January, and if all goes according to plan, the new data center will go live next June. "We are looking at the hottest, biggest boxes we can put in," he says. "But keeping it cool won't be an issue."