Green memories accelerate ROI for data centers

15.02.2011
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Memory components in today's servers consume hundreds of thousands of kWh. By adopting more energy-efficient components in optimized architectures, such as lower-voltage DRAM and advanced solid-state drives (SSDs), data centers can drastically reduce power consumption and the associated energy costs.
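
As a rough illustration of where those kilowatt-hours come from, the sketch below estimates the annual energy drawn by server DRAM for a hypothetical fleet and the savings from moving to lower-voltage DIMMs. Every figure in it (per-DIMM wattage, fleet size, electricity rate) is an illustrative assumption, not a measurement of any specific product.

```python
# Back-of-the-envelope estimate of annual DRAM energy for a server fleet.
# All numbers below are illustrative assumptions, not vendor specifications.

HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts_per_dimm, dimms_per_server, servers):
    """Annual energy (kWh) drawn by DRAM alone, before cooling overhead."""
    total_watts = watts_per_dimm * dimms_per_server * servers
    return total_watts * HOURS_PER_YEAR / 1000.0

# Assumed fleet: 1,000 servers with 12 DIMMs each.
standard_kwh = annual_kwh(watts_per_dimm=4.5, dimms_per_server=12, servers=1000)
low_volt_kwh = annual_kwh(watts_per_dimm=3.5, dimms_per_server=12, servers=1000)

rate = 0.10  # assumed electricity price in $/kWh
print(f"Standard DIMMs:    {standard_kwh:,.0f} kWh/yr (~${standard_kwh * rate:,.0f})")
print(f"Low-voltage DIMMs: {low_volt_kwh:,.0f} kWh/yr (~${low_volt_kwh * rate:,.0f})")
print(f"Savings:           {standard_kwh - low_volt_kwh:,.0f} kWh/yr")
```

Under these assumptions the fleet's DRAM alone draws roughly 470,000 kWh a year, and shaving one watt per DIMM saves on the order of 100,000 kWh annually, before cooling is even counted.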

The U.S. Environmental Protection Agency's most recent (2007) report on server and data center energy efficiency projects that U.S. data center power consumption will surge from 61 billion kWh in 2006 to about 100 billion kWh in 2011. Data center power requirements are expected to increase as much as 20% per year for the next couple of years.


Consequently, data center power costs have skyrocketed because, for every watt of power a server draws, another 2 to 2.5 watts are needed to cool it. IDC analysis shows that server energy expense in 2009 accounted for 75% of the total hardware cost. The firm also found that the energy expense to power and cool servers has jumped by 31% in the last five years.
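
To see how that cooling multiplier compounds the cost of every watt drawn at the component level, consider the rough calculation below. The 2 to 2.5 watts of cooling per IT watt is the ratio cited above; the electricity rate and load are illustrative assumptions.

```python
# Rough facility-level cost of one kilowatt of IT load once cooling is included.
# The 2x-2.5x cooling overhead is the ratio cited above; the rate is an assumption.

HOURS_PER_YEAR = 24 * 365
RATE = 0.10  # assumed electricity price in $/kWh

def annual_cost(it_load_kw, cooling_watts_per_it_watt):
    """Annual electricity cost for an IT load plus the cooling it requires."""
    facility_kw = it_load_kw * (1 + cooling_watts_per_it_watt)
    return facility_kw * HOURS_PER_YEAR * RATE

for overhead in (2.0, 2.5):
    print(f"1 kW of IT load with {overhead}x cooling: "
          f"${annual_cost(1.0, overhead):,.0f} per year")
```

At these assumed rates, a single kilowatt of IT load ends up costing roughly $2,600 to $3,100 a year once cooling is included, which is why every watt saved in memory is worth two to three times as much at the facility level.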

Various studies show that memory has become one of the biggest power consumers in the server space, adding substantially to energy costs, including cooling. In addition, virtualization is compounding the memory challenge as it grows in popularity as a way to drive down data center costs. The move to virtualization translates into more memory per server, increasing the use of DRAM and raising the total memory density per system.