Australia-based AI infrastructure developer Firmus has announced plans to build a new software layer designed to coordinate AI workloads with electricity grid conditions, as the company deepens its collaboration with Nvidia and prepares to scale its Project Southgate platform. The company said the software will integrate with Nvidia’s DSX Blueprint architecture to enable what it describes as “grid-integrated AI factories,” where compute performance is dynamically aligned with energy availability and grid constraints.
The initiative forms part of Firmus’s broader “Model-to-Grid” platform, which aims to link AI workloads, cooling systems, and power infrastructure in real time. The company said the approach builds on research dating back to 2020 and incorporates patented orchestration technologies designed to respond to the electrical and thermal characteristics of AI compute. “The AI factory of the future must be as intelligent about energy as it is about compute,” said Daniel Kearney, chief technology officer at Firmus.
“As platforms evolve, power and thermal characteristics change fundamentally. Our architecture is designed to understand those shifts and apply that intelligence … to operate infrastructure at scale while working seamlessly with grid systems,” he said.
The software layer is intended to support a new class of AI infrastructure in which large-scale GPU clusters act as active participants in energy systems rather than passive consumers of electricity.
Marc Hamilton, vice president of solutions architecture and engineering at Nvidia, said tighter integration between compute and power systems will become critical as AI infrastructure scales. “AI workloads are scaling toward gigawatts, and their relationship with power delivery systems is one of the defining engineering challenges of our era,” he said.
Project Southgate as deployment platform
Firmus said the architecture will first be deployed across its Project Southgate programme in Australia, which is being developed as a multi-site network of large-scale AI facilities. The company said operations will be coordinated with the Australian Energy Market Operator (AEMO) and the National Electricity Market (NEM), with initial deployments focused on regional areas where projects can support renewable energy zones and local economic development.
The move reflects a broader shift in AI infrastructure design, as developers seek to address constraints around power availability, grid stability, and the increasing energy intensity of AI workloads. By aligning compute demand with grid conditions, Firmus is positioning its platform as a way to improve energy efficiency and utilisation while reducing stress on electricity networks.
Linking energy and compute
The Model-to-Grid approach treats AI infrastructure as a continuously optimised system, according to Firmus, using real-time data on workload behaviour and grid dynamics to coordinate compute, cooling, and power systems. The company said this could enable higher GPU utilisation rates and lower energy consumption per unit of AI output, while allowing facilities to operate in closer alignment with energy policy and grid stability requirements.
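To make the idea concrete, a control loop of this kind might map real-time grid telemetry to per-GPU power caps. The sketch below is purely illustrative: the signal fields, thresholds, and wattage figures are assumptions for the example, not details of Firmus's patented orchestration technology or Nvidia's DSX Blueprint.

```python
from dataclasses import dataclass

@dataclass
class GridSignal:
    # Hypothetical real-time grid telemetry: spot price ($/MWh) and
    # the renewable share of current generation (0.0-1.0).
    price: float
    renewable_share: float

def power_cap_watts(signal: GridSignal, max_watts: float = 700.0,
                    min_watts: float = 300.0) -> float:
    """Map grid conditions to a per-GPU power cap.

    Cheap, renewable-heavy intervals run GPUs at full power; expensive,
    constrained intervals throttle toward a floor that keeps jobs alive.
    All thresholds here are illustrative, not an actual operator policy.
    """
    if signal.price < 50 and signal.renewable_share > 0.6:
        return max_watts   # abundant clean energy: run flat out
    if signal.price > 300:
        return min_watts   # grid stress: shed load down to the floor
    # In between, interpolate linearly on price.
    frac = (300 - signal.price) / 250
    return min_watts + frac * (max_watts - min_watts)

# A moderately priced interval lands between the two caps.
cap = power_cap_watts(GridSignal(price=175, renewable_share=0.4))
print(cap)  # 500.0
```

In a real deployment the cap would feed into cluster-level schedulers and cooling controls rather than a single function, but the shape of the loop, grid signal in, compute and thermal set-points out, is the same.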
The concept of “grid-aware” data centres is gaining traction across the industry, particularly as hyperscale and AI-driven facilities push into the hundreds of megawatts and beyond. Firmus’s approach builds on its broader strategy of tightly integrating infrastructure design, from compute hardware to energy systems, in order to optimise performance at scale.
US expansion underway
The announcement comes as Firmus begins building out its presence in North America, appointing former Amazon Web Services and Telstra executive Trent Viengsone as vice president for the region. Viengsone will lead customer engagement in the US market, with a focus on connecting North American AI companies to capacity within Project Southgate.
Firmus said the expansion is intended to support growing global demand for GPU compute and position Australia as a destination for large-scale AI workloads. The company has framed its strategy around enabling Australia to participate in what it describes as a global “AI token” market, linking international demand for compute with domestic infrastructure powered by local energy resources.
Toward software-defined AI infrastructure
The collaboration with Nvidia highlights the increasing role of software in managing the interaction between compute infrastructure and energy systems, according to the company. As AI workloads continue to scale, the ability to dynamically orchestrate power, cooling, and compute resources is emerging as a key area of differentiation for infrastructure providers.
Firmus said its work with Nvidia aims to establish an operational software standard for DSX Blueprint-based AI factories, with potential applications beyond Australia as similar energy and infrastructure constraints emerge in other markets. The development also underscores the growing convergence between the data centre and energy sectors, as operators look to build infrastructure that can respond to both compute demand and grid conditions in real time.