The HPC Summit is a high-level technical and strategic forum colocated with the Data Center Investment Summit. It brings together global and regional leaders in High-Performance Computing (HPC), AI Infrastructure, and Open Hardware Standards.
Opening remarks from W.Media introducing the HPC Summit Southeast Asia 2026, setting the tone for a day of open collaboration among open-standards communities.
A visionary session highlighting how open compute standards and cross-border collaboration are driving the next wave of AI and supercomputing infrastructure across Asia.
Covers innovations including 48V busbars, liquid-cooling integration, and hyperscale deployment readiness for GPU clusters.
Explores the new v2 specification, emphasizing modularity, rapid deployment, and sustainability in existing 19-inch rack environments.
Outlines how China’s rack standard supports its AI Compute Network initiative, creating interoperability and supply-chain alignment across hyperscalers.
Industry experts and consortium representatives discuss the possibility of a universal design language for racks, power, and cooling systems supporting global AI and HPC growth.
OEMs and power specialists explain how 48V systems are becoming the new baseline for AI clusters, enabling higher efficiency and interoperability between open rack formats.
A hyperscaler or integrator shares practical insights from deploying an open compute-based HPC facility using liquid cooling and modular power infrastructure.
A discussion of how renewable integration, heat reuse, and green financing intersect with HPC’s high-density requirements.
Vendors and operators examine how open standards are enabling interoperable liquid-cooling ecosystems at scale.
Leading ODMs such as Wiwynn, Foxconn, and Quanta discuss manufacturing agility and ecosystem collaboration for open rack systems.
Policy leaders and research institutions explore how open architectures can underpin national HPC programs across Southeast Asia.
A technical session covering the emerging role of high-bandwidth, low-latency interconnects (InfiniBand, Ethernet, CXL) in open HPC system design.
Experts from hyperscalers, OEMs, and engineering firms envision how open, modular, and liquid-cooled data centers will evolve to serve AI workloads.
Industry leaders discuss a roadmap for global interoperability and regional working groups for AI infrastructure harmonization.