Data centre operators will depend on good NABERS

The National Australian Built Environment Rating System (NABERS) is a well-regarded framework
for assessing the environmental performance of buildings, including data centres. However, even as
the current standard is being updated, Australia risks being overtaken by countries such as Germany, Singapore (via its Green Mark scheme) and the US, which are developing more
comprehensive ways to measure data centre impacts.

“I think NABERS is on the right track, especially splitting the ratings system into three components:
IT equipment, infrastructure and the whole facility,” said EkkoSense GM ANZ and APAC Robert
Linsdell. “It is important – no, essential – that we have a standards and approval system that is in line
with other countries.”

“Europe has recently updated its standards, and the ISO 30134 series measures a fuller list
of KPIs, including water, renewable energy and cooling efficiency, which
when used together give a more accurate representation,” he added.

There is nothing to panic about yet, but ratings bodies and regulators worldwide face an ongoing
challenge: the industry is moving faster than they can keep up with, predominantly
because of the swift rise of AI workloads. This will pose some interesting challenges for the new
NABERS standard to encompass. “There are many components we can measure, but AI
infrastructure is new and there are parts whose impact we have yet to determine how to measure accurately,” said Linsdell.

Is measuring PUE enough?

Power Usage Effectiveness (PUE), a metric widely used in the data centre industry, has come
under scrutiny for being one of the most misunderstood and misapplied measurements. James
Rix, a seasoned data centre professional, recently highlighted several key issues in an article on
LinkedIn [1].

Rix argues that while PUE is intended to measure the efficiency of a data centre’s power usage, it
is often misused by companies to claim environmental superiority without a full understanding of
the metric’s limitations. PUE, calculated by dividing the total energy used by a data centre by the energy used by its IT equipment, should ideally be close to 1.0. However, this figure can be easily
manipulated or misrepresented, leading to inflated claims about a facility’s efficiency.

One of the core issues Rix identifies is the variability in what different companies include in their
PUE calculations. Some might exclude certain types of energy usage, such as that used for office
spaces or external cooling, to present a more favourable PUE. This selective reporting can give a
misleading picture of a data centre’s true energy efficiency.
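The arithmetic behind both the metric and its abuse is simple. A minimal sketch (using hypothetical load figures, not drawn from any facility in this article) shows how quietly excluding a subsystem from the numerator flatters the reported PUE:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_kw

# Hypothetical loads (kW) -- illustrative figures only.
it_load = 1000.0
cooling = 350.0
office_and_external = 150.0  # the kind of load a selective report omits

honest = pue(it_load + cooling + office_and_external, it_load)
selective = pue(it_load + cooling, it_load)  # office/external quietly excluded

print(f"honest PUE:    {honest:.2f}")    # 1.50
print(f"selective PUE: {selective:.2f}") # 1.35
```

The same facility, reported two ways, moves from 1.50 to 1.35 without a single watt being saved – which is exactly the selective-reporting problem Rix describes.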

Rix also notes that PUE does not account for the source of the energy used, whether it is from
renewable or non-renewable sources. A data centre might have an excellent PUE, but if it relies on
coal-fired power, its environmental impact could still be significant. He calls for a more
comprehensive approach to measuring and reporting energy efficiency, one that considers the
entire energy supply chain.

So while PUE remains a useful tool, Rix cautions against relying on it as the sole indicator of a
data centre’s efficiency. He advocates for greater transparency and standardisation in how PUE is calculated and reported, to ensure that the metric reflects the true environmental impact of data
centre operations.

“The problem with PUE is certain datacentre subsystems can be excluded and thus the reporting
can be selective,” said Linsdell. “In addition, the pressure in some countries to deliver low PUEs
puts much pressure on the datacentre operator – Germany, for example, wants a PUE of 1.5 by 2026 and 1.3 by 2030, and Singapore has targeted 1.3 within the decade.”

“If incorrect PUEs are being reported and sit under the true PUE, it would indicate there are
energy savings (cost and carbon come along for the ride) to be had,” he said.

White space over-provisioning

The issue of “white space” in data centres – the unused or underutilised space allocated for IT
equipment – is a significant challenge, particularly in the context of energy efficiency and
sustainability. In traditional non-AI data centres, this lack of visibility into how white space will be
utilised often forces operators to over-provision power and cooling resources as a buffer. This can
lead to substantial energy waste.

For instance, if a data centre is designed with a projected peak power requirement of 400MW, a
5% over-provisioning due to white space uncertainty could lead to an excess of 20MW – enough
power to operate an entirely new data centre hall, with the foregone
revenue highlighting the inefficiency.

In Australia, current NABERS ratings focus on operational efficiency, but they may not yet fully account for the inefficiencies introduced by over-provisioned white space. The energy allocated to unused or underutilised space can artificially inflate a data centre’s overall energy usage, negatively impacting its NABERS rating even if the operational areas are efficient.
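The scale of that stranded capacity is easy to check. A toy calculation (the 400MW and 5% figures come from the worked example above; the framing is illustrative only):

```python
# Over-provisioning arithmetic from the worked example above (hypothetical site).
design_peak_mw = 400.0  # projected peak power requirement
buffer_rate = 0.05      # 5% over-provision for white space uncertainty

excess_mw = design_peak_mw * buffer_rate
print(f"Stranded capacity: {excess_mw:.0f} MW")  # 20 MW -- roughly a data hall
```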

“These industry bodies, led by the government, should be the policemen of the datacentre industry – they are totally independent,” said Linsdell. “The datacentre industry has become a more
significant and larger consumer of energy and space, and so it requires more attention, especially as its power usage will impact others. Apart from the morality of wasting energy – especially when there are so many methods now to address real efficiency – it is a travesty, in a time of predicted shortage, that most datacentres, even very well run ones, have at least single-digit percentage savings to be made.”

To overcome this, introducing specific metrics within any new NABERS framework that assess how
effectively white space is utilised could help incentivise operators to minimise waste. This could
include ratings for how well a data centre matches its power and cooling provisions to actual IT
equipment needs, rewarding those that manage white space efficiently.

Such measures could incentivise data centre operators to repurpose or optimise white space,
either by reducing the total energy allocation or by using the space for energy-efficient
applications. This would encourage a more strategic approach to capacity planning and resource management.

“Data centre operators build in buffers due to this lack of visibility, and they have to, because
an outage is near catastrophic for any datacentre operator – they would face many penalties, loss of reputation and likely loss of business as a result,” said Linsdell. “Operators will always operate on the safer side – for mechanical cooling, turn down set temperatures and maximise airflows, for example.”

“The buffers waste energy and lead to costs being shared between clients and operators – or passed entirely to the client, as in many cases – plus unnecessary carbon usage and capacity loss,” he said. “For example, if mechanical energy can be transferred to IT load, the site becomes more efficient and profitable.”

“Unfortunately, until recently it was near impossible to obtain accurate near real-time data in the
white space – which in fact is the most important area in the datacentre for such diagnostic
knowledge,” said Linsdell. “I am aware of at least three platforms that have the capability of
providing near real-time data as to what is happening on a rack-by-rack basis.”

“In addition, as this data is collected over a period of time it can be used to predict events,
identify trends and load changes – regular or irregular – build in weather changes and even, if tied into a grid health application, buy electricity at low prices, decide when to use renewable energy, or use the generator battery system to assist with grid frequency maintenance or surge supply,” he added.

Accurate views

Linsdell believes using more granular monitoring and visibility tools that help data centre operators better understand and manage white space, and be more efficient, is in everyone’s interest. “Let’s think about what we measure in the datacentre: the BMS measures room temperatures, cooling equipment, chillers, towers, pumps and so on, and there is a structure as to what is needed, with alarms when something fails,” said Linsdell. “However, DCIM really depends on which one you use – often DCIM is angled toward asset management, though it can measure trends.”

“What is often not done well is measuring what is actually happening at rack level,” he said. “How do they
compare to their set SLA? It is often assumed – I believe – that because, for example, in a cold aisle
contained facility the air supplied is at a uniform temperature, the racks will be receiving a
consistent cooling impact. What is often not considered is that the IT load equals the heat generated at a
rack-by-rack level.”

“Sensors measure the room and often the aisle but the cooling is often set lower, just in case, and
fan speeds on max – better more airflow than not enough – and this creates inefficiencies. No one
doubts a datacentre must never fail but should it waste energy to do this?” he added.

To recapture some of this wasted energy, Linsdell said the first stage for operators is always
optimising the cooling system to balance the actual white space cooling needs of the IT load,
ensuring the datacentre operators’ clients’ SLAs are met while the minimum amount of cooling is applied
to the room or floor. “After this, chillers and cooling towers can be optimised to ensure the most
efficient chilled water temperatures are in play. It is possible to visualise the impact of
changing chiller temperatures using the same diagnostic tools,” he added.
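Linsdell’s point about matching cooling to rack-level IT load can be illustrated with a toy sensible-heat model (mass flow ṁ = Q / (cp·ΔT); all rack loads and the temperature rise below are assumed figures, not measurements from any real facility):

```python
# Toy model: airflow needed to remove each rack's heat, vs every fan at max.
CP_AIR = 1.005   # kJ/(kg*K), specific heat capacity of air
DELTA_T = 12.0   # K, assumed supply-to-return temperature rise

rack_loads_kw = [4.0, 8.0, 2.5, 12.0]  # per-rack IT load = heat rejected

# m_dot = Q / (cp * dT): mass flow (kg/s) each rack actually needs
needed = [q / (CP_AIR * DELTA_T) for q in rack_loads_kw]

matched_total = sum(needed)                       # airflow matched to load
uniform_total = max(needed) * len(rack_loads_kw)  # every fan sized for the hottest rack

print(f"matched airflow:  {matched_total:.2f} kg/s")
print(f"uniform-max flow: {uniform_total:.2f} kg/s")
```

Running every fan at the worst-case rack’s flow moves nearly twice the air the load actually requires – the “better more airflow than not enough” inefficiency Linsdell describes.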

Linsdell believes the balance between improving operational efficiencies and the need for
continuous expansion of data centre capacity due to growing demand can be struck if the industry acts. “It is estimated Sydney needs 2GW by – I think it is – 2032, and if we build efficiently that’s 100 to 150MW maximum in efficiency. It may look like a drop in the ocean, but 150MW is the size of a smaller power station,” he said.

 

General Manager, ANZ & APAC at EkkoSense, Robert Linsdell will be presenting the keynote: “Why high-density AI workloads demand absolute visibility of data centre white space” at W.Media’s Sydney Cloud and Datacenter Convention 2024 next week at the Sydney International Convention Centre on 12 September 2024. As Sydney maintains its status as a leading cloud and datacenter hub in the Asia Pacific, our event will spotlight the latest advancements in digital infrastructure and their impact on IT, business, and society.

Building on the success of the 2023 convention with over 700 attendees, the 2024 edition will
feature thought leaders, industry experts, and dynamic speakers who will share insights, case
studies, and engage in lively debates. Attendees can look forward to keynote presentations, panel
discussions, tech demonstrations, and ample networking opportunities. Join us for a day of
innovation, learning, and connection in the heart of Sydney.
https://clouddatacenter.events/events/sydney-cloud-datacenter-convention-2024/

[1] https://www.linkedin.com/pulse/most-abused-metric-james-rix

[Author: Simon Dux]
