Armada, a San Francisco-based provider of modular data centers and distributor of AI platforms, and Nscale, a European AI infrastructure builder, have signed a Letter of Intent (LoI) to deliver both large-scale and edge AI infrastructure for enterprise and public sector customers worldwide.
According to a press release, the collaboration will bring modular data center infrastructure, GPU compute capacity, application software, and customer support to sites worldwide. By leveraging land and power access at these locations, the companies plan to deliver AI infrastructure faster than traditional full data center builds, enabling enterprises and governments to maintain secure, compliant compute environments even where infrastructure does not currently exist.
Nscale operates supercomputer clusters globally, providing a full-stack platform across power, data centers, compute, and software. Armada delivers real-time intelligence through its Galleon modular data centers and the Armada Edge Platform (AEP). The partnership aims to combine these capabilities to offer sovereign AI solutions at scale and on the edge.
Josh Payne, Founder and CEO, Nscale, said, “There is increasing demand from enterprises and governments for operational AI, and meeting that need requires infrastructure that is scalable, distributed, and ultimately sovereign: a flexible foundation for deploying advanced AI workloads wherever they need to operate, without compromising performance, security, or control.”
Dan Wright, Co-Founder and CEO, Armada, said, “As AI adoption accelerates, organizations need infrastructure that can reach beyond centralized clusters, on Earth and even beyond. Partnering with Nscale allows us to extend our modular AI infrastructure into new global markets, supporting customers who require sovereign, high-performance compute.”
The joint solution uses a hub-and-spoke model: Nscale’s large-scale data centers provide foundational capacity and cost efficiency, while Armada’s turnkey deployments, including its megawatt-scale Leviathan Galleon, extend those capabilities to edge locations.
The companies plan to establish a repeatable global model for AI infrastructure deployment, combining large-scale cloud services, modular compute, and distributed operations to accelerate AI adoption while maintaining security and compliance.