
Data centres are wasting energy by producing far more cooled air than necessary, according to David Hogg, managing director, 8 Solutions.
9 November 2015 | By David Hogg
Despite rising energy costs, data centres are spending vast amounts of money on cooled air to ensure there is no impact on data integrity, or any loss of functionality in the expensive technology installed.
But to what extent is such an investment being squandered?
In an audit of 20 data centres across Europe conducted over a six-month period by 8 Solutions, the technical cleaning business, nearly all were found to be using more energy than was required. Indeed, in some cases they were producing nearly four times more cooled airflow than is needed to maintain a supply temperature of 27 degrees C - the figure generally accepted for equipment to run at optimum levels.
And this figure is supported by other sources: Upsite Technologies, a company that specialises in finding cooling solutions within data centres, conducted a similar programme of 45 audits in the US and found that data centres were producing on average some 3.9 times the amount of cooled air really needed.
It is, of course, vital to take steps to mitigate the risk of downtime or equipment failure within critical environments. But that is not to say that facilities managers should be throwing good money after bad for want of a few simple actions.
As new IT equipment is added to data centres, the usual procedure for maintaining the correct temperature is to increase the cooling by adding further cold air supply capacity. Such an investment, however, is rarely warranted; most data centres already have sufficient capacity. The problem is that much of the cold air currently being generated is wasted and, critically, is not being directed to the technology that needs it most.
So what are the main areas a facilities manager should be looking at to understand where his/her investment is, literally, disappearing into thin air?

Unsealed firewalls
Some, such as unsealed firewalls, should be immediately obvious. Data centres generally operate at a positive pressure relative to the surrounding environments (in layman's terms, the pressure in the area where the equipment is housed is higher than in the surrounding rooms). This ensures that any contamination is literally 'blown away' so that it cannot cause any damage. A firewall that is not sealed properly upsets this positive air 'balance', which not only increases the chance of contamination but also allows cooled air (which is expensive) to escape what should be a controlled environment with comparative ease.
Unsealed cable cut-outs have a similar effect, releasing air into areas that often do not require it. Cable cut-outs are generally holes cut into the flooring to allow cables from the sub-floor into the racks to 'feed' equipment. These cut-outs sit on the exhaust side of the rack or cabinet-based equipment (plugged into the rear, past the circuit boards, power supplies and fans). If cold air is passing to the rear of equipment without flowing through it, then it is essentially being wasted.
Poor management of hot/cold aisles - including grilles located within the hot aisles and IT equipment installed in reverse - can also have an impact. Mixing hot and cold air reduces what is known as the 'Delta T' on the air conditioning (AC) unit. (The Delta T is the difference in temperature between the air returning to the AC unit and the air it supplies. For optimum efficiency the Delta T should be circa 12 degrees C; in some cases 8 Solutions has found a Delta T of less than one degree.)
The hotter the air returning to the AC unit after cooling the equipment, the more efficiently the AC unit can cool it back to the required temperature before re-supplying it to the data centre. The main aim should be to completely separate the cold supply air from the hot return air. If you introduce cold air where you shouldn't (eg into the hot aisle) or hot air where you shouldn't (eg into the cold aisle), the mixing of hot and cold air substantially and unnecessarily reduces the efficiency of the cooling units.
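To make the effect concrete, the short sketch below estimates how the Delta T across an AC unit shrinks as a growing share of the cold supply mixes straight back into the return air instead of passing through the IT equipment. The 27 degrees C supply figure comes from this article; the 12 degrees C equipment temperature rise, the bypass fractions and all names in the code are illustrative assumptions, not measurements from the audits described above.

```python
# Illustrative sketch only: how mixing cold supply air into the return
# erodes the Delta T across an AC unit. All figures are assumed.

SUPPLY_TEMP_C = 27.0     # cold-aisle supply temperature cited in the article
EQUIPMENT_RISE_C = 12.0  # assumed temperature rise across the IT equipment

def return_air_temp(bypass_fraction: float) -> float:
    """Return-air temperature when a fraction of the cold supply bypasses
    the equipment and mixes straight back into the hot return stream."""
    hot_exhaust = SUPPLY_TEMP_C + EQUIPMENT_RISE_C
    return (1 - bypass_fraction) * hot_exhaust + bypass_fraction * SUPPLY_TEMP_C

for bypass in (0.0, 0.25, 0.5, 0.75):
    delta_t = return_air_temp(bypass) - SUPPLY_TEMP_C
    print(f"{bypass:.0%} of supply bypassing -> Delta T of {delta_t:.1f} C")
```

With no mixing the Delta T sits at the circa 12 degrees C target; as more cold air short-circuits back to the unit, it falls away towards the very low readings described above.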
Bypass air
Bypass air - the name given to cooling air that circulates back to the air handling unit (AHU) without going anywhere near the IT equipment - can similarly be an issue. Any gaps in the equipment cabinets or missing racks create the potential for cold air to bypass the equipment and mix with warmer air, with the same impact (ie reduced efficiency of the AC unit) as identified above.
An incorrect airflow balance between supply (ie the installed cooling capacity) and demand (ie the IT equipment installed) is also a concern. If the actual load of the equipment in a data centre is 100kW, you should only really need to supply 100kW of cooling to maintain the desired temperature. Data centres generally provide considerably more cooling (and the commensurate energy) than is actually required - in some cases more than four times the kW output needed. (N+1, N+2 or 2N are the general redundancy parameters, where N = load. So a 2N data centre with a demand of 150kW would have a supply capacity of 300kW, or 2 x demand.) This is due almost entirely to the lack of precise airflow management and airflow balancing.
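As a rough illustration of the gap between designed redundancy and the oversupply seen in practice, the sketch below compares the cooling capacity that common redundancy schemes call for with an assumed measured supply of around 3.9 times the load. The 150kW load matches the 2N example above; the measured figure, the 50kW spare-unit size and all names in the code are illustrative assumptions.

```python
# Illustrative sketch only: designed cooling capacity under common redundancy
# schemes versus an assumed measured supply. Figures are not audit data.

def designed_capacity_kw(load_kw: float, scheme: str, unit_kw: float = 50.0) -> float:
    """Installed cooling capacity implied by a redundancy scheme.
    N+1 and N+2 assume spare cooling units of `unit_kw`; 2N doubles the load."""
    if scheme == "N+1":
        return load_kw + unit_kw
    if scheme == "N+2":
        return load_kw + 2 * unit_kw
    if scheme == "2N":
        return 2 * load_kw
    raise ValueError(f"unknown redundancy scheme: {scheme}")

it_load_kw = 150.0            # demand: heat output of the installed IT equipment
measured_cooling_kw = 585.0   # assumed supply actually running (~3.9 x demand)

for scheme in ("N+1", "N+2", "2N"):
    print(f"{scheme}: designed capacity {designed_capacity_kw(it_load_kw, scheme):.0f}kW")

ratio = measured_cooling_kw / it_load_kw
print(f"Measured supply of {measured_cooling_kw:.0f}kW is {ratio:.1f}x the {it_load_kw:.0f}kW load")
```

Even the 2N case tops out at 300kW of designed capacity, yet the audits cited earlier found cooling being delivered at nearly four times the load - headroom well beyond anything the redundancy scheme requires.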
So what are the benefits of getting the balance right? With better airflow management, it has been estimated that data centres can make average energy savings of £48 per square metre per annum. Put another way, a typical 500 square metre data centre can save £24,000 per annum and show an improvement in power usage effectiveness (PUE) that delivers a return on investment within 12 to 24 months.
Recent reports by the Uptime Institute, a leading consortium for the enterprise data centre industry, reach the same conclusion: average self-reported PUE levels, having fallen from 2.5 in 2007 to 1.89 in 2011, dropped further to 1.65 in 2013, with airflow optimisation identified as the main contributor.
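The savings arithmetic above, and the standard definition of PUE behind the Uptime Institute figures, can be reproduced with a short calculation. The £48 per square metre and 500 square metre figures come from this article; PUE is simply total facility energy divided by IT equipment energy, and the kWh values in the sketch are assumptions chosen only to reproduce the quoted PUE levels.

```python
# Illustrative sketch reproducing the savings arithmetic quoted above and the
# standard PUE calculation. kWh values are assumptions, not reported data.

saving_per_sq_m_gbp = 48.0   # estimated annual saving per square metre (article)
floor_area_sq_m = 500.0      # typical data centre floor area (article)

annual_saving_gbp = saving_per_sq_m_gbp * floor_area_sq_m
print(f"Estimated annual saving: £{annual_saving_gbp:,.0f}")  # £24,000

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy (1.0 is ideal)."""
    return total_facility_kwh / it_equipment_kwh

# Assumed annual IT load, with overheads chosen to match the quoted PUE levels.
it_kwh = 1_000_000
for year, overhead_kwh in ((2007, 1_500_000), (2011, 890_000), (2013, 650_000)):
    print(f"{year}: PUE = {pue(it_kwh + overhead_kwh, it_kwh):.2f}")
```

A lower PUE means a smaller share of a site's energy is being spent on cooling and other overheads rather than on the IT equipment itself.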
