Data Centers – Where Does All the Energy Go?

by Jenn Cano on May 29, 2012

Today’s on-demand society assumes nearly universal, immediate access to real-time data and analytics in a resilient, secure environment. Anything short of that standard is unacceptable. These demands are driven by a proliferation of data sources, mobile devices, radio frequency identification systems, unified communications, Web 2.0 services and technologies such as mashups. These rising expectations place demands on data centers that IT administrators are hard pressed to satisfy. At the same time, rising energy consumption and rising energy costs have elevated data center efficiency into a key strategy for reducing costs, managing capacity and promoting environmental responsibility.

To understand how to reduce energy consumption, it is imperative to learn where and how energy is used in a data center.

The first key aspect to examine is how energy is split between IT equipment and supporting facilities; in other words, how much of the energy is consumed by servers, storage and network equipment as opposed to power distribution, cooling and lighting. Research shows that in a typical non-optimized data center the IT equipment load consumes about 45% of the energy while the supporting facilities consume the remaining 55%. That means 55% of the energy brought into the data center is not producing calculations, data storage and so forth.
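As a rough worked example (the 1 MW figure and the helper function below are purely illustrative, not from the article), the 45/55 split can be restated as the familiar PUE (Power Usage Effectiveness) metric, the ratio of total facility energy to IT equipment energy:

```python
# Illustrative sketch: splitting a facility's total power draw using the
# article's 45% IT / 55% facilities figures, and deriving the equivalent PUE.

def energy_breakdown(total_kw, it_fraction=0.45):
    """Split total data center power into IT load and facility overhead."""
    it_kw = total_kw * it_fraction
    facility_kw = total_kw - it_kw
    pue = total_kw / it_kw  # PUE = total facility energy / IT equipment energy
    return it_kw, facility_kw, pue

it_kw, facility_kw, pue = energy_breakdown(1000)  # hypothetical 1 MW facility
print(f"IT load: {it_kw:.0f} kW, overhead: {facility_kw:.0f} kW, PUE = {pue:.2f}")
# -> IT load: 450 kW, overhead: 550 kW, PUE = 2.22
```

A PUE above 2 means that for every watt doing computing, more than another watt goes to cooling, power conversion and lighting, which is exactly the 55% overhead described above.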

The second key aspect is to understand how energy is distributed among the components of the IT equipment itself: within each piece of hardware, what consumes the most, the processor, memory, disks, fans or power supply? In a typical server the processor uses only about 30% of the energy; the rest of the system consumes the remaining 70%. Clearly, efficient hardware design must be considered when specifying new equipment for your data center.
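To make that split concrete, here is a small sketch apportioning one server's measured power draw across its major subsystems. Only the 30% processor figure comes from the article; the 400 W draw and the other component fractions are assumptions for illustration.

```python
# Illustrative sketch: apportioning a server's power draw across subsystems.
# Only the processor fraction is cited in the article; the rest are assumed.

SERVER_WATTS = 400  # hypothetical measured draw for one server

component_fractions = {
    "processor": 0.30,          # figure cited in the article
    "memory": 0.20,             # assumed
    "disks": 0.15,              # assumed
    "fans": 0.10,               # assumed
    "psu_losses_other": 0.25,   # assumed remainder (PSU inefficiency, chipset, NICs)
}

for component, fraction in component_fractions.items():
    print(f"{component:>18}: {SERVER_WATTS * fraction:6.1f} W ({fraction:.0%})")
```

However the remaining 70% is divided, the point stands: most of a server's energy goes to everything around the processor, so component-level efficiency matters as much as CPU efficiency.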

The final aspect to understand is how the energy in a data center is allocated to producing business results. Often, idle resources are powered on and drawing energy without producing anything. Servers are commonly underutilized, yet they consume nearly the same amount of energy as if they were running at 100%. Research shows that a typical server uses only about 20% of its capacity, so a large share of the energy, and of the capital invested in the hardware, is wasted on work that delivers no business value.
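The sketch below illustrates why low utilization is so costly. It assumes, hypothetically, that a server draws about 90% of its peak power even when idle (power scaling linearly with load above that), and then estimates how much of the consumed energy is not attributable to useful work at the 20% utilization level the article cites.

```python
# Illustrative sketch: estimating wasted energy at low server utilization.
# The 90% idle-power ratio is an assumption, not a figure from the article.

def wasted_fraction(utilization, idle_power_ratio=0.9):
    """Fraction of consumed energy not attributable to useful work,
    assuming power scales linearly between idle and peak with load."""
    power_drawn = idle_power_ratio + (1 - idle_power_ratio) * utilization
    useful = utilization  # energy proportional to work actually done
    return 1 - useful / power_drawn

print(f"Wasted at 20% utilization: {wasted_fraction(0.20):.0%}")  # ~78%
print(f"Wasted at 80% utilization: {wasted_fraction(0.80):.0%}")  # ~18%
```

Under these assumptions, raising utilization through consolidation or virtualization cuts the wasted share dramatically, which is why utilization is the third lever alongside facility overhead and hardware design.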

Power and cooling costs for data centers have skyrocketed by 800 percent since 1996, and the escalating costs show no end in sight, yet data center resources remain poorly utilized (many below 20 percent). Over the next five years, industry watchers predict that U.S. enterprise data centers will spend twice as much on energy as on hardware, and twice what they currently spend on server management and administration. Moreover, many data centers are finding that even if they are willing to pay for more power, capacity constraints on the grid mean that additional power simply is not available.

By understanding where all the energy is going in your data center, you are on the road to significantly cutting consumption and reducing your overall data center costs. Through analysis of your data center it is possible to reduce equipment and floor space usage by up to 65% and cut energy consumption in half.

If you would like to find out more about data center consolidation, you can download our Federal Data Center Consolidation and Data Silo Removal Case Study.

Or, to find out more about data center relocation, you can download our whitepaper, “How to Carefully Plan & Execute a Data Center Move – Reduce Data Center Relocation Risks.”

Topics: Data Centers, Data Center Relocation, data center consolidation, Data Center Move
