At the Melbourne Cloud & Datacenter Convention 2026, Mark Deguara, general manager of data centres in Schneider Electric’s secure power division, set out a clear message for operators: the shift to 800V DC is not simply about enabling higher rack densities – it represents a fundamental rethink of data centre power architecture.
“The one megawatt rack – everyone’s heard about it,” Deguara said. “People are talking about it. But the real question is: what does 800 volts DC actually mean for the data centre?” He added that while the industry’s attention has focused on the rise of high-density AI racks, the implications extend far beyond the rack itself – into electrical distribution, grid interaction, and operational readiness.
Driven by physics, not preference
The move towards 800V DC is not a design trend, but a consequence of rapidly increasing rack densities driven by AI workloads. “We’re already seeing 150kW racks deployed today,” Deguara said. “Then 250kW. And by next year, we’re talking about 600kW and one megawatt racks.”
Beyond roughly 400kW per rack, traditional AC and low-voltage DC architectures begin to hit practical limits – a point reinforced in Schneider Electric’s White Paper 213, which identifies this threshold as the tipping point for new power architectures. Deguara explained the physics: “If we double the voltage from 400 to 800 volts, we halve the current. If we halve the current, we reduce the copper by a quarter.”
This has two immediate benefits: less cabling congestion into the rack, and more usable space for GPUs and compute. “The two main reasons to get to 800 volts DC are to remove power conversion out of the rack and reduce the amount of copper going to that rack,” he said. “Which means you can put more GPUs in – and ultimately drive more tokens from the same footprint.”
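The arithmetic behind that claim is straightforward to check. The sketch below (in Python, using illustrative figures for rack power, cable run, and loss budget that are assumptions, not Schneider design values) computes the feeder current and the copper cross-section needed to hold resistive losses constant at 400V and at 800V.

```python
# Back-of-the-envelope scaling for a 400V -> 800V feeder (illustrative figures only).

RACK_POWER_W = 600_000          # assumed 600 kW rack feed
CABLE_LENGTH_M = 30             # assumed one-way run from power block to rack
COPPER_RESISTIVITY = 1.68e-8    # ohm-metres, copper at roughly 20 degrees C
MAX_LOSS_W = 3_000              # assumed resistive-loss budget for the run


def feeder_requirements(voltage_v: float) -> tuple[float, float]:
    """Return (current in A, copper cross-section in mm^2) for the fixed loss budget."""
    current = RACK_POWER_W / voltage_v              # P = V * I
    max_resistance = MAX_LOSS_W / current ** 2      # loss = I^2 * R
    # R = rho * L / A, using the out-and-back conductor length
    area_m2 = COPPER_RESISTIVITY * (2 * CABLE_LENGTH_M) / max_resistance
    return current, area_m2 * 1e6                   # m^2 -> mm^2


for volts in (400, 800):
    amps, mm2 = feeder_requirements(volts)
    print(f"{volts} V: {amps:.0f} A, ~{mm2:.0f} mm^2 of copper per feeder")
```

Doubling the voltage halves the current, and because resistive loss scales with the square of current, the cross-section needed for the same loss falls to roughly a quarter of the original – the copper saving Deguara was pointing to.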
Crucially, this shift is not optional. As outlined in Schneider’s research, both AC and 48V DC architectures face structural limitations at higher densities. AC-based designs suffer from congestion caused by PDUs, cabling, and cooling infrastructure inside the rack, while 48V DC systems are constrained by voltage limits that restrict the amount of power that can be delivered efficiently. At the same time, power shelves and batteries begin to compete directly with IT equipment for space, further limiting scalability.
From sidecars to system architecture
In the near term, Deguara expects the industry to adopt what Schneider describes as “sidecar” architectures – external power units delivering 800V DC at the rack level. “What we’ll see first is an 800 volt DC sidecar next to a 600 kilowatt or one megawatt rack,” he said. “That allows you to keep your traditional AC infrastructure upstream.”
This approach mirrors the “rack-level conversion” model identified as the most practical near-term solution in Schneider’s research, offering a lower-risk path that avoids redesigning the entire facility.
However, Deguara stressed that this is only a transitional phase. “When we start to look at gigawatt-scale data centres, white space becomes extremely valuable,” he said. “You can’t have a sidecar next to every rack – you’d double your footprint.”
The next evolution will involve moving power conversion out of the white space and into the grey space, potentially at multi-megawatt block levels. “We’ll start to see decentralised architectures – 3MW, 5MW, even 10MW power blocks,” he said. “And eventually, things like solid-state transformers and integrated rectification at scale.”
The implication is clear: 800V DC is not a single design choice, but a progression of architectures that will evolve alongside AI demand. At the same time, early deployments at the rack level offer an important operational advantage – limiting the impact of faults to a single rack rather than exposing entire halls or facilities to wider failure domains.
Grid interaction becomes critical
One of the most significant shifts highlighted by Deguara is the growing interaction between data centres and the grid, particularly at hyperscale and AI factory levels. “If I’ve got a one gigawatt data centre and my GPUs stop running, the grid sees one gigawatt of load drop instantly,” he said. “That can destabilise the grid.”
This is not theoretical. As data centres scale from tens of megawatts to hundreds, and increasingly, gigawatt campuses, their behaviour begins to directly influence grid stability. “What we’re seeing now is grid operators introducing new requirements,” Deguara said. “Because large-scale data centres are no longer passive loads – they actively impact the grid.”
This has several implications: fault ride-through requirements to prevent sudden disconnection; ramp rate controls to manage load changes; and energy storage integration to buffer fluctuations. He added that even AI workloads themselves introduce new challenges. “AI smoothing is becoming a real issue,” Deguara said. “You can see fluctuations from 120% down to 20% of load over short periods.”
Schneider’s analysis reinforces this, noting that AI workloads can produce dynamic load profiles ranging from approximately 40% to 150% of nominal load, driven by synchronised processing cycles across GPUs.
These rapid fluctuations require a coordinated response. “You need a two-tier energy storage approach,” he said. “One to handle the high-frequency spikes, and another for the deeper cycles.”
Without this, the consequences extend beyond the data centre itself. At large scale, sudden load changes can introduce instability into the grid, reinforcing the need for tighter integration between data centre design and energy systems.
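One way to picture the two-tier approach is as a frequency split of the load profile: a slow moving average that the grid connection or deeper storage follows, and a fast residual that a high-power buffer absorbs. The sketch below is a minimal illustration using a synthetic load trace and an assumed 60-second averaging window; it is not a description of Schneider’s actual control scheme.

```python
# Minimal illustration of a two-tier storage split on a synthetic AI load trace.
# Load shape, window length, and storage roles are assumptions for illustration.
import math
import random

NOMINAL_MW = 100.0
WINDOW_S = 60          # averaging window for the slow tier (one sample per second)


def load_mw(t_s: float) -> float:
    """Synthetic load swinging roughly between 40% and 150% of nominal."""
    slow = 0.95 + 0.55 * math.sin(2 * math.pi * t_s / 600)   # minutes-scale training cycle
    fast = 0.25 * random.uniform(-1, 1)                      # second-scale GPU-sync jitter
    return NOMINAL_MW * max(0.2, slow + fast)


history: list[float] = []
for t in range(1800):                       # 30 minutes of one-second samples
    load = load_mw(t)
    history.append(load)
    window = history[-WINDOW_S:]
    slow_tier = sum(window) / len(window)   # deeper storage / grid connection follows this
    fast_tier = load - slow_tier            # residual handled by the high-power buffer
    if t % 300 == 0:
        print(f"t={t:4d}s  load={load:6.1f} MW  slow={slow_tier:6.1f} MW  fast={fast_tier:+6.1f} MW")
```

The slow tier sees a smoothed ramp that a grid operator can tolerate, while the fast tier soaks up the second-to-second swings – the “high-frequency spikes” versus “deeper cycles” distinction Deguara drew.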
Not one architecture but a system rethink
Deguara emphasised that 800V DC cannot be treated as a standalone technology decision. “It’s not one size fits all,” he said. “Every data centre will deploy this differently – hyperscalers, colocation providers, neo-cloud players – they’ll all take different approaches.”
Schneider’s framework reinforces this, highlighting that 800V DC is not a single architecture but a range of options shaped by factors including conversion location, grounding strategy, and energy storage placement. Crucially, most facilities will remain hybrid environments.
“You’re still going to have AC loads – chillers, mechanical systems,” Deguara said. “So you’re going to have a mix of AC and DC in the same data centre.” He added that this creates additional complexity in system design, protection, and operations.
Industry readiness remains a challenge
Despite the momentum behind 800V DC, Deguara was clear that the industry is not yet fully prepared for large-scale deployment. “DC distribution is available today,” he said. “But is it available at mass scale like AC? No.”
Key challenges include: limited standardisation; supply chain immaturity; and lack of operational expertise. “One of the biggest challenges is protection,” he said. “DC doesn’t have a zero crossing like AC, so interruption is more complex.”
“There’s also a skills gap,” Deguara added. “Are people trained to operate 800 volt DC systems? That’s something the industry needs to address.” Schneider’s research echoes this concern, noting that workforce readiness, supply chain maturity, and operational procedures are as critical as the underlying technology itself.
A shift already underway
While many of these challenges remain unresolved, Deguara’s message was that the transition is already underway – driven by the demands of AI rather than a deliberate industry choice. “This isn’t about whether we adopt 800 volts DC,” he said. “It’s about how we adopt it.”
For operators and engineers, the focus must now shift from individual components to system-level design – integrating power, cooling, energy storage, and grid interaction into a cohesive architecture.
“It’s a holistic discussion,” Deguara said. “Where are we today, and where are we going?” As AI workloads continue to push infrastructure beyond traditional limits, the question is no longer whether the technology exists, he adds, but whether the industry can deploy it at scale.