The average PUE is 1.8, or 2.5, or 1.91

Let's just call it 2.0

May 10, 2011

Efficiency metrics are the means, not the ends

The standard metric for determining datacenter energy efficiency is known as Power Usage Effectiveness, or PUE. It's a ratio of total power usage (usually based on the monthly energy bill) to the amount of power used specifically by the IT equipment. In essence, it identifies how much of the total power draw is being lost to climate control and other overhead rather than reaching the computing gear; since a PUE of 1.0 would mean every watt goes to the IT equipment, the goal is to get the number down as close to 1 as possible.
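The arithmetic itself is trivial. As a minimal sketch with made-up meter readings (a 1,000 kW facility draw against a 500 kW IT load; neither figure comes from the article), this Python snippet computes the ratio and the share of power going to overhead:

```python
# Hypothetical meter readings, for illustration only -- real figures come
# from the utility bill and from measuring the IT load directly.
total_facility_kw = 1000.0  # everything behind the meter: IT, cooling, lighting, losses
it_equipment_kw = 500.0     # servers, storage, and network gear alone

pue = total_facility_kw / it_equipment_kw
overhead_share = 1 - it_equipment_kw / total_facility_kw

print(f"PUE: {pue:.2f}")                        # PUE: 2.00
print(f"Overhead share: {overhead_share:.0%}")  # Overhead share: 50%
```

In this example, half the facility's power never reaches the IT equipment, which is exactly what the industry's rule-of-thumb PUE of 2.0 describes.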

In practice, the number too often becomes the focus rather than the overall efficiency it represents. A lower ratio doesn't by itself mean less power consumed; it means the same power is being used more efficiently, and the implication is that using power more efficiently lets you use less of it overall. Since that follow-through is beyond the scope of PUE, it often falls outside the scope of the team assigned to monitor and, where possible, improve the metric, yet the savings are the whole point of addressing it. With good data on what your IT equipment is actually consuming, attention to PUE can be instrumental in getting a handle on spiraling energy and cooling costs. The catch is that good per-equipment usage data is rarely as simple to gather as it ought to be, considering how critical that information is to estimating PUE correctly; finding out how much power you're drawing overall, and paying for, is the straightforward half of the ratio.

Where attention to the PUE rating can go astray, however, is when it's used as a comparative tool to guide the purchase of datacenter hardware, or when weighing the public claims of efficiency consultants. Datacenter Knowledge recently posted the Uptime Institute's findings on improving PUEs in the datacenter, findings that illustrate the muddiness of this issue well. In short, the institute found that the average datacenter PUE, once 2.5, had improved to 1.8 by 2010. That is obvious good news for datacenters, but it complicates the claims of the various OEMs that tout superior PUE ratings over 'the average', an average that either goes unspecified or is cited only in the fine print. Apart from various polls and surveys, the industry's ad hoc benchmark has long been 2.0: a rough guideline for judging whether your datacenter is running efficiently or running hot (or eating electricity to be kept too cold, as the case may be).

The PUE story is often too short

The vendors' stories lose much of their meaning when a lengthy explanation is reduced to a bullet point or an Energy Star sticker. Is a "20% improvement over average PUE" better than a PUE target of 1.91? There's simply no way to tell without digging into the fine print to discover which number is considered average, and then adapting that number to your existing infrastructure, where it may not be entirely relevant anyway.
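To see how much the answer depends on the baseline, here is a rough sketch in Python. The candidate averages are the three figures cited in this article; the "20% improvement" claim itself is a hypothetical marketing line, not a quote from any vendor:

```python
# The same "20% better than average" claim lands very differently
# depending on which average the vendor had in mind.
candidate_averages = {
    "earlier Uptime survey": 2.5,
    "Uptime survey, 2010": 1.8,
    "industry rule of thumb": 2.0,
}
claimed_improvement = 0.20  # the hypothetical bullet-point claim
explicit_target = 1.91      # the competing, explicit PUE target

for label, avg in candidate_averages.items():
    implied_pue = avg * (1 - claimed_improvement)
    verdict = "beats" if implied_pue < explicit_target else "loses to"
    print(f"20% better than the {label} ({avg}) -> PUE {implied_pue:.2f}, "
          f"which {verdict} a {explicit_target} target")
```

Against the 2.5 average the claim works out to a PUE of 2.00, worse than the explicit 1.91 target; against the 1.8 average it works out to 1.44, far better. The identical sticker can describe either.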

As a means of increasing efficiency, focus on PUE has its place, but the monthly energy bill is not the only recurring bill an IT team can successfully reduce, nor is it usually even the one offering the proportionally largest savings. The primary source of inefficient spending in a given datacenter is licensed but unused software, along with annual software and hardware maintenance fees. A close review of these negotiable annual burdens will typically yield more savings than a laborious fine-tuning of the datacenter PUE from 2.0 down to 1.91. Ideally a datacenter can benefit from optimizing both PUE and licenses/maintenance, but in general terms PUE improvements are the smaller 'bang for the buck' savings initiative in all but the most grossly inefficient datacenters.
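To put rough numbers on that 'bang for the buck' point, here is a back-of-the-envelope sketch using the article's own 2.0 and 1.91 figures but an assumed 500 kW IT load and an assumed $0.10/kWh utility rate (both hypothetical):

```python
# How much does trimming PUE from 2.0 to 1.91 actually save on the bill?
it_load_kw = 500.0      # assumed steady IT load, not from the article
hours_per_year = 8760
cost_per_kwh = 0.10     # assumed utility rate, USD

def annual_power_cost(pue):
    """Total facility energy cost for a year at a given PUE."""
    return it_load_kw * pue * hours_per_year * cost_per_kwh

before = annual_power_cost(2.00)
after = annual_power_cost(1.91)
print(f"Annual bill at PUE 2.00: ${before:,.0f}")   # $876,000
print(f"Annual bill at PUE 1.91: ${after:,.0f}")    # $836,580
print(f"Savings: ${before - after:,.0f} (~{(before - after) / before:.1%})")
```

Under these assumptions the fine-tuning recovers about 4.5% of the power bill, roughly $39,000 a year: real money, but often less than what a single renegotiated maintenance contract or a shelfware audit returns for far less engineering effort.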