Join 300+ industry leaders at Southeast Asia's premier HPC event
The HPC Summit Southeast Asia 2026 is the definitive convergence point for the pioneers of high-density computing. We bridge the gap between cutting-edge AI workloads and the massive data center infrastructure required to power them.
As the first regional summit dedicated to the AI-infrastructure intersection, we dive deep into the GPU-powered ecosystems, the liquid-cooling revolution, and the lightning-fast interconnects setting the new standard for the industry.
Comprehensive sessions designed for industry leaders.
Opening remarks from W.Media introducing the HPC Summit Southeast Asia 2026 – setting the tone for a day of collaboration among open-standards communities.
Explores the new v2 specification, emphasizing modularity, rapid deployment, and sustainability in existing 19-inch rack environments. Speaker: Maury G, Global GTM Strategic Advisory, Pivotale Ai
A visionary session highlighting how open compute standards and cross-border collaboration are driving the next wave of AI and supercomputing infrastructure across Asia.
Highlights key innovations: 48V busbars, liquid-cooling integration, and hyperscale deployment readiness for GPU clusters.
Outlines how China's rack standard supports its AI Compute Network initiative, creating interoperability and supply-chain alignment across hyperscalers.
In the exhibition area
Industry experts and consortium representatives discuss the possibility of a universal design language for racks, power, and cooling systems supporting global AI and HPC growth.
OEMs and power specialists explain how 48V systems are becoming the new baseline for AI clusters, enabling higher efficiency and interoperability between open rack formats.
A hyperscaler or integrator shares practical lessons from deploying an open compute-based HPC facility built on liquid cooling and modular power infrastructure.
In the exhibition area
Discussion on how renewable integration, heat reuse, and green financing intersect with HPC's high-density requirements.
Vendors and operators examine how open standards are enabling interoperable liquid-cooling ecosystems at scale.
Policy leaders and research institutions explore how open architectures can underpin national HPC programs across Southeast Asia.
A technical session covering the emerging role of high-bandwidth, low-latency interconnects (InfiniBand, Ethernet, CXL) in open HPC system design.
Experts from hyperscalers, OEMs, and engineering firms envision how open, modular, and liquid-cooled data centers will evolve to serve AI workloads. Speaker: Joel Morris, Inference Compute Planning Lead, OpenAI.
Industry leaders discuss a roadmap for global interoperability and regional working groups for AI infrastructure harmonization.