Why Australia’s data centre market is being reshaped from the ground up

February 9, 2026 at 8:26 AM GMT+8

For much of the past decade, Australia’s data centre market was defined by choice. Choice of location, choice of suppliers, and choice of pricing models, particularly for hyperscalers and cloud service providers. That balance has now shifted decisively, and speed has become the dominant competitive force.

According to Donny Gunadi, DatacenterHawk's senior insight analyst for APAC, the last five years have fundamentally altered how capacity is planned, financed and delivered in Australia. “Five years ago, the industry was still talking primarily about cloud and hyperscale demand, but the market structure was very different,” he says.

“In most regions, including Australia, the cloud service providers had the upper hand. There were multiple ready facilities in the same availability zones competing for the same opportunities, and pricing per kilowatt was still a meaningful lever,” he adds.

Back then, demand was typically absorbed by existing data halls in established facilities. Deals were comparatively straightforward, with operators building speculatively and tenants selecting from what was already available. “What has changed is the sheer strength and steepness of demand,” he says.

“We are now talking about growth measured in hundreds of megawatts, at a time when rack densities have increased dramatically. At the same time, it still takes 18 to 36 months to bring a new facility into service. That mismatch has completely changed priorities,” adds Gunadi.
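To give a sense of the scale mismatch Gunadi describes, the back-of-envelope sketch below works through the arithmetic. The figures are illustrative assumptions rather than numbers from the article: a 300 MW requirement standing in for “hundreds of megawatts”, 8 kW for a legacy enterprise rack and 80 kW for a high-density AI rack.

```python
# Illustrative back-of-envelope only: all figures below are assumptions,
# not values cited in the article.
campus_demand_mw = 300   # assumed stand-in for "hundreds of megawatts" of new demand
legacy_rack_kw = 8       # assumed density of a legacy enterprise rack
ai_rack_kw = 80          # assumed density of a liquid-cooled AI rack

def racks_needed(total_mw: float, kw_per_rack: float) -> int:
    """Number of racks required to absorb a given IT load."""
    return round(total_mw * 1000 / kw_per_rack)

print(racks_needed(campus_demand_mw, legacy_rack_kw))  # ~37,500 legacy racks
print(racks_needed(campus_demand_mw, ai_rack_kw))      # ~3,750 high-density racks

# Even at ten times the density, a requirement of this size dwarfs the spare
# space in an existing data hall, so it cannot simply be absorbed by current
# facilities and must instead wait out an 18-to-36-month build cycle.
```

Under those assumptions, the demand side has grown far faster than any plausible gain from densification alone, which is why the delivery timeline, not the price per kilowatt, has become the binding constraint.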

Why speed now outweighs price and location

In today’s market, time-to-service has eclipsed both price and location. “With supply constrained and delivery timelines so long, any new demand prioritises partners who can deliver capacity fastest,” says Gunadi. “Pricing and location have become secondary. The real differentiators now are a combination of land, power, water and, ultimately, how quickly those resources can be turned into live capacity.”

That pressure has driven a rapid evolution in how data centre projects are structured. Traditional leasing models, where operators delivered fully fitted halls at an agreed price per kilowatt, are increasingly unworkable at scale. “Nobody is building speculatively anymore at these capacity levels,” he says. “The capital exposure is simply too high without commercial commitments in place.”

Models shaped by constraint, not preference

Instead, the market has shifted towards a range of engagement models designed to compress delivery timelines. Built-to-suit developments, powered shell arrangements, and joint venture or special purpose vehicle structures have all emerged as responses to the same constraint: speed. “What we are seeing is a transformation in deal structures,” says Gunadi. “In some cases, tenants take a powered facility and work with another partner to bring in the mechanical and electrical infrastructure needed to make it operational. The initial investment is lower, but the critical foundations – land, power, water – are already secured.”

Partnerships between landowners, developers and operators have become more common as a result. “Land and building owners are increasingly working with experienced data centre operators to achieve the same objective,” he says. “The goal is always the shortest possible time to service. These collaborations are not about innovation for its own sake; they are a response to very real bottlenecks.”

While these models offer clear advantages, Gunadi is careful not to present them as a universal solution. “There is no one-size-fits-all approach,” he says. “Solving complex problems at this scale requires breaking them into logical pieces and assigning ownership clearly. It’s divide and conquer. When that clarity exists, delivery timelines can genuinely be shortened. When it doesn’t, perceived speed advantages can evaporate very quickly.”

Speed comes with structural risk

Gunadi says that from an operator, customer or investor perspective, the benefits of these models are obvious. Risk can be shared more effectively across parties, subject matter expertise can be segregated, and lead times can be reduced. “Ideally, you have each party focusing on what they are genuinely good at,” he says. “If the joint venture or SPV is well defined, contract negotiation can actually be simpler, not more complex.”

The risks, however, are equally real. “These structures involve more parties and more contracts, and that inevitably introduces complexity,” he says. “If there is a mismatch in the partnership, or if responsibilities are not watertight, disputes can delay delivery and clients are the ones who suffer. Everyone wants to get into data centres right now, but not everyone has the experience. Clients tend to gravitate towards partners with proven track records and/or existing global relationships.”

This realism extends to his view on what has driven the shift towards faster engagement models in the first place. While customer demand is clearly a factor, Gunadi argues the change has been shaped just as much by external constraints. “The industry understands the bottlenecks now,” he says. “This is not purely about customer preference. It is about how the ecosystem has responded creatively to power constraints, planning approvals and construction capacity limitations.”

The hard limits of legacy infrastructure

Those constraints are also forcing a reckoning with Australia’s existing data centre stock. As AI workloads push rack densities, cooling requirements and power loads far beyond historical norms, not every facility can be upgraded to keep pace. “A lot has changed in the past five years alone,” he says. “Many so-called grandfather data centres were designed for a very different era. They were built for certain heat loads and floor loadings, and those assumptions no longer hold in an AI environment.”

While technical upgrades are sometimes possible, they are not always economically rational. “In some cases, it makes more sense to demolish and rebuild than to retrofit,” he says. “What we often see is a player entering the market by acquiring legacy facilities and then taking one of two paths. Either they upgrade the site to support AI workloads to a limited extent, while relying on adjacent land for future expansion, or they keep the legacy facility focused on enterprise and lower-density racks and build AI-ready capacity next door.”

The most common limitations in older facilities are structural rather than cosmetic. “Floor loading and cooling capacity are the big ones,” he says. “You can sometimes work around power constraints, but if the building cannot physically support the weight or thermal output of modern equipment, your options are limited.”

Land is abundant; power and coordination are not

If operators are being forced to rethink their portfolios, landowners face an equally steep learning curve. Gunadi says many still underestimate what it really means for a site to be “data centre ready”. “A data centre is not just a physical building built on a piece of land,” he says. “You need power, and not just power in principle. You need contractual certainty around megawatt capacity and an expansion path. You need water for cooling, and you need fibre connectivity, which is often treated as an afterthought.”

In Australia, power availability has become the primary limiting factor, eclipsing land size altogether. “Demand has surged, and it takes a long time to make power ready,” says Gunadi. “Sydney has been the epicentre of data centre development in Australia and is now experiencing resource constraints, which is why projects are shifting to Melbourne and other states, including regional Australia. Unlike markets such as Japan or Singapore, land itself is not the constraint here.”

For landowners serious about attracting operators or investors, early due diligence is essential. “These boxes need to be ticked before conversations even start,” he says. “Engaging consultants who specialise in grid capacity, connection timeframes and upgrade risk is critical. Without that work, confidence in the site is low, no matter how attractive it looks on paper.”

Water access and cooling strategy are also becoming gating factors as rack densities rise. “The higher the density, the more critical liquid cooling becomes, whether that is direct-to-chip or immersion,” he says. “Many facilities are now being designed with a mix of air and liquid cooling to meet different market needs.”
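As a rough illustration of the density-driven cooling mix Gunadi describes, the hypothetical sketch below maps rack power to a cooling approach. The kW thresholds are assumptions for illustration only, not figures from the article; real designs are set by detailed thermal engineering.

```python
# Hypothetical sketch of reasoning about a mixed air/liquid cooling strategy.
# The kW/rack thresholds are assumed for illustration, not cited in the article.
def cooling_approach(rack_kw: float) -> str:
    """Pick an indicative cooling strategy for a rack based on power density."""
    if rack_kw <= 20:
        return "air cooling"            # conventional hot/cold-aisle containment
    if rack_kw <= 80:
        return "direct-to-chip liquid"  # cold plates on the hottest components
    return "immersion cooling"          # full liquid immersion for extreme densities

for density in (10, 40, 120):
    print(f"{density} kW/rack -> {cooling_approach(density)}")
```

The point of the sketch is simply that a single facility increasingly has to serve several of these bands at once, which is why operators are designing for a blend of air and liquid cooling rather than one or the other.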

Connectivity, particularly for AI inferencing workloads, cannot be overlooked. “Fibre diversity and proximity to digital infrastructure are extremely important,” he adds. “Resiliency is non-negotiable. Single points of failure have to be eliminated.”

Taken together, Gunadi believes Australia is at a crossroads. “The challenges Australia faces are not unique,” he says. “Globally, the industry is waiting to see large-scale AI adoption drive higher GPU utilisation. What we are seeing now is a market that is segmenting.”

For Australia to remain competitive, he argues, a more coordinated approach is essential. “Australia has the demand, the capital and the technical expertise,” Gunadi says. “What it needs now is coordination. Without that, speed will continue to dominate, and only those who can navigate the constraints will win.”