Digital infrastructure in Australia is facing a number of key questions, driven not simply by accelerating demand but also by technologies that are challenging the established means of meeting that demand. One of the key panels at the Melbourne Cloud & Datacenter Convention 2025 debated a two-sided question: while the growth of AI learning and delivery is key to future demand, what role can AI itself play as an operational and planning tool for meeting that demand?
Joining the panel were Chris Clarkson of Double C Consultants; Simon McFadden, industry director, digital infrastructure at Aurecon; Peter Blunt, chief commercial officer of data centre company Polaris; and Karl Kloppenborg, CTO of AI factory builder Reset Data. The panel was moderated by Nick Parfitt of W.Media.
W.Media: What would you say is now the useful life of a data centre?
Simon McFadden: Different parts of a data centre have different lives. If you take the building itself, maybe 50 years. But if you take the power and cooling systems, some of those might be more like 20-25 years. And if you look at the technology refresh cycles, they’re much shorter again.
Peter Blunt: With any facility, it’s a constant process of renewal, because our clients’ needs are continuously evolving. We’re seeing rack densities continue to increase to numbers that were unheard of just a few years ago. And for a facility to remain relevant in the market, it has to keep evolving.
Karl Kloppenborg: At [Nvidia GTC] they were showing a pathway towards a one megawatt rack…you’re going to see specialised equipment becoming mainstream in terms of busbars, air conditioning units and CDUs, and these things are going to be changing at a very rapid pace, versus the typical refresh cycle of, say, servers, which were previously what changed most often.
Chris Clarkson: I think there’s an interesting economic equation to be done between retrofitting existing data centres and continuing to evolve them, versus the economics of literally building brand new…you might be better off bulldozing than trying to retrofit what is already there. Schneider this morning talked about 600 kilowatts, and I was chatting to Peter beforehand about how we went from two kilowatt racks to five kilowatt racks. We’re currently talking about 132 kilowatt racks. That’s close to a 10x increase to get to a megawatt rack, and that’s two years away.
Karl Kloppenborg: With direct-to-chip liquid cooling, you’re looking at megawatt racks. Now, what’s going to really change in the data centre landscape is your tech people, who typically were administering servers, networks and whatnot, now getting specialised qualifications for high voltage electricity, HVAC, and gas and propellant certifications.
Peter Blunt: I think what will differentiate data centre operators is those that can actually provide that additional expertise to clients. What we tend to find is that the clients are on a journey into the unknown as well. We have clients coming to us embarking on liquid cooling and they don’t have the knowledge and the skill sets in that space. They’re turning to us to provide that expertise and that surety that we actually know how to install and operate that equipment.
W.Media: Are there any other key factors, aside from those you mentioned, that you believe play a role in the life of a data centre?
Chris Clarkson: Let’s think about an existing 20MW facility. Now, we might currently think of 20MW as large and it might have 100 racks in it.
Karl Kloppenborg: …It’ll be 20 racks at a megawatt.
Chris Clarkson: Yeah. Nvidia’s Jensen Huang said your ability to source power dictates your ability to generate revenue. I was chatting to somebody out in the exhibition area earlier; they were saying, “we are building a solar farm and putting a data centre adjacent to it.” So the ability to generate power, or access power, is one of the key factors.
Karl Kloppenborg: If you want to know where the densification and power journey of data centres is going to go, follow the national transmission lines, because generation capacity is going to be most sought after there. It’s probably better to demolish and build from scratch, or in this case, follow the transmission line and build directly adjacent to it, combined with renewables…we’re going to see data centres become much more rural than they are at the moment.
Peter Blunt: Is the next logical step for data centres, effectively, to be off grid? With the concerns about the load they’re putting on the grid, are we actually better off building facilities that run entirely on renewables, entirely separate from the grid?
Karl Kloppenborg: Well, what about Westinghouse small modular reactors? At what point are we going to start doing that?
Simon McFadden: One of the things I’ve been reflecting on lately is that there’s a difference between a data centre being constructed, fitted out and leased, and the actual power demand being used in the data centre. We’re seeing, in some cases, a big gap between how much is leased and how much is actually being used. So it’ll be interesting to see, as technology refresh cycles happen, whether some of the data centres that are leased but not fully fitted out and used will actually have to be refreshed again before they’re ever fully used. The other thing about power: one of the things I’m seeing in the market is that there are so many renewable project developers looking for off-take in the hundreds of megawatts. And if you look at the demand we’re seeing in the market, most of it is still in cities, close to populations, in availability zones for the hyperscalers. It’ll be really interesting to see…do we see large-scale data centres move out of the cities, close to large-scale renewable projects?
Karl Kloppenborg: As I said before, follow the transmission grid. Reset Data is focused on building AI factories. AI factories are not latency sensitive. They require massive amounts of power. And while we talk about the densification that’s going on with these one megawatt racks, this isn’t the case for standard computing, which is on a different trajectory. There’s going to be plenty of standard tin and iron, and data centres that are still going to be very hungry for space as well as power. You’re going to see a specialisation and a siloing. You’re going to see three kinds of data centre: your generalist data centres, your quantum data centres and your AI data centres, and they’re all going to be geared to supply a very specific market requirement.
W.Media: Would the panel agree that what we’ve traditionally referred to as a ‘data centre’ is going to become more segmented, more differentiated, to the point of becoming a number of different animals?
Peter Blunt: I think the industry is now large enough that we can start to see that segmentation. Previously, the industry tended to be one size fits all. One of the really big challenges we face is that as you build additional redundancy and resiliency into a design, it’s quite often at the expense of the efficiency of the site. Then you start to segment the industry…perhaps an AI learning factory can actually run with a lower level of resiliency than what a bank would require, for instance.
Karl Kloppenborg: Your generalist data centre is always going to be looking for five nines and above, and AI factories will be able to pause and resume and work in a very different way from a traditional data centre.
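As a rough yardstick, “five nines” works out to only a few minutes of downtime a year. Here is a minimal arithmetic sketch of the downtime budget at each availability tier (illustrative only, not from the panel):

```python
# Illustrative only: downtime allowed per year at a given number of "nines".
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines           # e.g. five nines -> 0.99999
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability:.3%}): "
          f"~{downtime_min:.1f} minutes of downtime per year")
```

At five nines the whole annual budget is roughly five minutes, which is why a facility that can simply pause and resume is such a different engineering problem.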
Chris Clarkson: I was most recently working for one of the chip makers, and what OpenAI has been discovering when building clusters of literally tens to hundreds of thousands of GPUs is that the GPU failure rate is quite high. So when you’re doing training, it’s very much like traditional high performance computing and the whole concept of checkpointing comes into play, because they just assume failure. That tends to reinforce the concept that there will be a segmentation of the market. Because think of the difference. If you’re a state government running a health system, your clinical administration system cannot go down. If you’re building a large language model and it takes two and a half weeks instead of two weeks, does it matter? No. So it does raise the question as to whether an AI factory might be a different genre of data centre to what we traditionally think of as a data centre…accounting isn’t about to go away; a clinical administration system for a hospital isn’t about to go away…all of those traditional IT workloads will still exist and will need a traditional data centre.
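The checkpointing pattern Clarkson refers to is essentially periodic saving of training state so that a failed run can resume rather than restart. A minimal, hypothetical sketch, assuming PyTorch; the file name and training loop are placeholders, not anything the panel or OpenAI described:

```python
import os
import torch

CHECKPOINT = "checkpoint.pt"  # hypothetical path

def train(model, optimizer, data_loader, epochs):
    start_epoch = 0
    # Resume from the last checkpoint if an earlier run was cut short,
    # for example by a GPU failure partway through training.
    if os.path.exists(CHECKPOINT):
        state = torch.load(CHECKPOINT)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_epoch = state["epoch"] + 1

    for epoch in range(start_epoch, epochs):
        for inputs, targets in data_loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(model(inputs), targets)
            loss.backward()
            optimizer.step()
        # Persist progress so a failure costs at most one epoch of work.
        torch.save({"model": model.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "epoch": epoch}, CHECKPOINT)
```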
Karl Kloppenborg: Not only are they going to still exist, they’re going to prosper. There’s going to be much more of it still. A GPU does not work well as a CPU, and your relational databases don’t work well on a GPU. So it’s not going away anytime soon. There’s no threat to that.
W.Media: AI has been mentioned very much as a source of demand, and it has also been mentioned here as a means of managing and operating data centres. Where do you see that going?
Simon McFadden: The SLAs that providers have for uptime in data centres are pretty much 100%…and it’s possible that using AI on cooling systems can enable a more optimised data centre, but achieving the SLA is the most important thing in the short term. So before bringing AI into the operation of a data centre, there needs to be a very sensible risk tolerance approach to it.
Chris Clarkson: One of the very early use cases that was often touted for AI was predictive maintenance. I remember going to a presentation from the CTO of a major resources company. They had 400,000 sensors on an oil production platform, an astronomical number. That’s a huge amount of data. You feed that huge amount of data into an AI and you can do all sorts of wonderful predictive maintenance exercises with the raw data coming off the platform, and that’s no different from a data centre. You can use AI to help manage a traditional data centre or an AI factory. Predictive maintenance has long been one of the shining lights of AI use cases.
Karl Kloppenborg: AI and ML have become very interchangeable at this point. Data centres have already, for years, implemented software packages and sensor packs that allow ML to do alerting, failure prediction and other things, with a digital twin as a way of observing the data centre in the NOC. So there are applications where you start to delve into digital twins with Nvidia Omniverse and their AI capabilities. But the ML side of things has been around for a while. There’s one operator, I’m not going to name the vendor, but they interfaced with all of the fans in their data hall to move airflow based on certain parameters, and found massive efficiency gains. And they did this at scale. They also did this with their very first versions of AI, moving on from ML and actually implementing AI with it.
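The vendor system Kloppenborg mentions isn’t public, but the general airflow idea can be sketched with a deliberately simple, rule-based control loop; real systems use trained ML models rather than fixed thresholds, and every name and setpoint below is hypothetical.

```python
from statistics import mean

def adjust_fans(sensor_temps, fan_speeds, target_c=27.0, step=5):
    """Nudge each fan's speed (%) from the average inlet temperature
    reported by the sensors it serves. Purely illustrative."""
    new_speeds = {}
    for fan_id, speed in fan_speeds.items():
        temps = sensor_temps.get(fan_id, [])
        if not temps:
            new_speeds[fan_id] = speed          # no telemetry: hold steady
            continue
        avg = mean(temps)
        if avg > target_c + 1:                  # running hot: push more air
            new_speeds[fan_id] = min(100, speed + step)
        elif avg < target_c - 1:                # running cool: save energy
            new_speeds[fan_id] = max(20, speed - step)
        else:
            new_speeds[fan_id] = speed
    return new_speeds

# Example: two fans, each with a couple of inlet readings in deg C.
print(adjust_fans({"fan_a": [28.5, 29.1], "fan_b": [24.8, 25.2]},
                  {"fan_a": 60, "fan_b": 60}))
# -> {'fan_a': 65, 'fan_b': 55}
```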
Peter Blunt: And I think that’s one of the things we’ve seen just recently: the affordability of sensors is what’s really enabling that to happen.
Chris Clarkson: When you’ve invested hundreds of millions of dollars in an oil platform, an AUD 20 sensor is not really a big deal, right? But in a data centre, where you’re possibly only in the tens of millions, it’s a different equation if you’re going to deploy a large number of sensors. And, to absolutely reinforce your point, the way IoT has generally driven down the cost of sensors makes doing this in a data centre setting so much easier.
W.Media: How do you think sustainability will apply to these future-ready data centres, clouds and networks? How is sustainability going to be relevant, or not?
Simon McFadden: I think it will be relevant. A large amount of demand for data centres at the moment is coming from the big hyperscalers, which have very strong sustainability agendas and targets. Data centres use a lot of energy, so there’s a general aspiration for most data centre operators to have renewable energy as part of the mix. That renewable energy isn’t time matched, though. It’s not a case of “one kilowatt hour generated is one kilowatt hour used”; it’s often generated and used at different times. But we’re seeing a number of large hyperscalers investing in renewable energy projects in an additive way, to create more renewable generation. And we’re also seeing, probably at a smaller scale, a lot more talk about the circular economy. And of course, we need to mention water. As we use more and more liquid cooling, more water is being used in data centres, sometimes at very high volumes. So getting water that’s available, not displacing water that’s needed for drinking…using water that is treated but not to potable standard, these are all things we’re seeing at the moment.
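The time-matching point can be made concrete with a small hypothetical example: a site can buy as many renewable kilowatt-hours over a year as it consumes, and so look 100% renewable on paper, while still drawing on the grid every night. All numbers below are invented.

```python
# Hypothetical hourly profiles for one day (kWh): a flat IT load versus
# solar generation concentrated in daylight hours.
load  = [10] * 24
solar = [0] * 7 + [6, 14, 26, 36, 38, 38, 36, 26, 14, 6] + [0] * 7

annual_match = sum(solar) / sum(load)
hourly_match = sum(min(l, s) for l, s in zip(load, solar)) / sum(load)

print(f"Annual-matched coverage: {annual_match:.0%}")   # 100%: fully renewable on paper
print(f"Hourly-matched coverage: {hourly_match:.0%}")   # ~38%: only daylight hours covered
```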
Peter Blunt: The public are now becoming more aware and more understanding of the sustainability issues. And one of the things I strongly disagree with is organisations that go out and just sign a PPA and therefore claim that they’re net zero or that they’re running on renewable energy. There’s a very big difference between energy that’s generated six hours a day in the middle of the Northern Territory and claiming that you’re using it in Sydney’s CBD. That really doesn’t stack up. I think we’re going to see more projects with direct on-site generation, and complete generation systems that are actually delivering that power 24/7, not just for the six hours a day that the sun shines.
Karl Kloppenborg: A race to the bottom on PUE is what data centres are focusing on; in Australia, water is the main concern. About 1% of global power now goes into the data centre space, and people are starting to pick up on it. Shipping accounts for 8% of global energy requirements, and people are already well and truly aware of that. People are going to start to scrutinise the role of the data centre in general society more. Most people think of the internet as this ephemeral thing that just gets delivered from their ISP. They don’t really understand the massive amount of infrastructure behind it. And so as it gets more into the spotlight, we as operators are going to need to consider how we show that we are doing the best we can to be as efficient as possible. I think we’re going to see a hybrid of utilities, a kind of utilities hub, where you have generation, you have sewage treatment or water treatment facilities, and data centres taking load, combining that into one kind of sustainability campus.
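For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is total facility energy divided by the energy delivered to the IT load, so lower is better and 1.0 is the theoretical floor. A quick sketch with invented numbers:

```python
def pue(total_facility_kwh, it_load_kwh):
    """Power Usage Effectiveness: a value of 1.0 would mean every
    kilowatt-hour entering the facility reaches the IT equipment."""
    return total_facility_kwh / it_load_kwh

# Hypothetical month: 1,300 MWh into the facility, 1,000 MWh to the racks.
print(f"PUE: {pue(1_300_000, 1_000_000):.2f}")   # -> PUE: 1.30
```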
W.Media: Are data centres becoming more ‘open’, rather than being buildings set out in secure locations, and will edge services accelerate this?
Karl Kloppenborg: 20-30 years ago, your data centre was hidden away. I suggest you go and see NextDC’s latest facility, M3, to see how close the data centre now is to the home.
Chris Clarkson: Go to Adelaide A1. It’s on an old volleyball court in the Adelaide central business district!
Simon McFadden: Think about digital infrastructure in its broader sense, so not just large data centres but everything from IoT sensors, wireless networks, fibre networks, all the AI we’re talking about, and data centres and edge data centres, and cast forward 10-20 years…I really think there’ll be a lot more compute closer to the humans and devices that are using it. Longer term, there will be very large data centres, but there will also be a whole cascading spectrum of sizes and operations that are much, much closer…I do think there’ll be AI operations running locally, next to people, in devices, in robots, etc. So it’s pretty exciting.
Chris Clarkson: If you think about the AI use case specifically, we all hear about training large language models, and we all get eye-watering figures about how much it costs in terms of power and the number of GPUs you use. We are going to reach an inflection point, because we’re building out at the moment, and the killer use case is not going to be training. It’s going to be inferencing. It’s going to be running the AI, which can be at the edge, and that has an entirely different power profile to training.
Karl Kloppenborg: Your training is about 10% of the AI journey; 90% of it is in the inferencing. We speak about AI factories. These are training facilities. The inferencing, I think, is still going to be in your NextDCs, your Equinix, your Polaris and those data centres, because one only needs very limited connectivity and availability, while the other needs to be hyper-connected to be useful, close to the data and close to the user.
Peter Blunt: We’ve already seen some announcements…I think it was Microsoft which put out an announcement that they’re actually slowing down the build-out of their data centres for training. That realisation is already kicking in…that there’s only a limited amount of training that’s going to be done. I also wonder if the inferencing equipment will end up doing more of the training as time goes on.
Karl Kloppenborg: I think as the chipsets develop, you’ll see the previous generation of chipsets take up the inferencing arm of it, and that’s where you’ll see the longevity in the GPU.
[Edited by Simon Dux]
Sydney Cloud & Datacenter Convention 2025
Join us at the Sydney Cloud & Datacenter Convention 2025, “Cloud & Datacenter in Transition”, taking place on 21 August at the Sydney International Convention Centre. The 2025 Convention will focus on how digital infrastructure is adapting to the increasing volume and the changing profile and requirements of demand. As digital transformation accelerates in Australia, so does the country’s dependence on Sydney as its key digital hub. Sydney accounts for around 65% of Australia’s operational IT capacity, and its rate of growth in 2024 was the highest among the key hubs in Australia, according to the Property Council.
Visit: https://clouddatacenter.events/events/sydney-cloud-datacenter-convention-2025/