Dedicated HPC NEA summit integrating AI workloads & DC infrastructure.
Secure your spot at the premier HPC event in Northeast Asia
HPC Summit NEA 2026
Early bird rate per delegate
CTOs, CIOs, IT Directors, System Integrators, Infrastructure Architects.
The premier destination for the architects of Northeast Asia's AI-driven computing future.
The HPC Summit Northeast Asia 2026 is the definitive convergence point for the pioneers of high-density computing. We bridge the gap between cutting-edge AI workloads and the massive data center infrastructure required to power them.
As the first regional summit dedicated to the AI-Infrastructure Intersection, we dive deep into the GPU-powered ecosystems, liquid-cooling revolutions, and lightning-fast interconnects that are setting the new standard for the industry.
Master the complexities of distributed AI training and sovereign high-density compute clusters.
Beyond Air-Cooling: Unlocking the efficiency of 100kW+ racks and liquid-cooled environments.
Interconnected Intelligence: Exploring the next frontier of optical networking and ultra-low latency fabrics.
Where HPC meets Quantum, Edge, and the future of Generative AI infrastructure.
Partner with Northeast Asia's premier HPC summit and put your brand in front of 300+ key decision makers.
Limited platinum and gold slots remaining for 2026.
Comprehensive sessions designed for industry leaders.
Opening remarks from W.Media and event leadership, setting the stage for the HPC Summit Northeast Asia 2026.
Northeast Asia plays a critical role in the global AI ecosystem, from silicon fabrication and memory production to power electronics and rack manufacturing. This keynote examines why the region has become indispensable to AI infrastructure development and how its engineering, supply-chain depth, and standards leadership influence how AI systems are built and deployed worldwide.
As AI models grow larger and more complex, physical constraints at the silicon and memory level are becoming the primary bottlenecks to further scaling. This keynote explores power density, memory bandwidth, thermal limits, and interconnect challenges, highlighting how chip design and packaging decisions ripple through system architecture, cooling strategies, and data centre design.
Open rack standards promise flexibility and interoperability, but translating specifications into manufacturable, reliable products presents real engineering challenges. This session dives into how ODMs and manufacturers design open racks at scale, balancing standardisation, supply-chain resilience, cost efficiency, and global deployment requirements.
This session compares regional rack approaches, including China's Scorpio project, Open Compute designs, and traditional enterprise racks common in Japan. It examines where these philosophies align, where they diverge, and how regional priorities shape rack design, interoperability, and long-term system evolution.
In the exhibition area
While global interoperability is often seen as the goal, regional requirements, regulatory environments, and supply-chain realities complicate standardisation. This panel brings together engineers, standards bodies, and system builders to debate whether a single global standard is realistic or even desirable, and where controlled divergence may be necessary.
AI systems are redefining power delivery requirements at every level, from the board to the rack. This session explores emerging power architectures, including high-voltage distribution, conversion efficiency, reliability under extreme density, and how power design decisions directly impact system stability, cooling requirements, and operational cost.
Rather than starting with a facility-first mindset, this case study demonstrates how leading organisations design AI infrastructure as an integrated system, where compute, networking, power, and cooling are engineered together. The session highlights why traditional data centre design approaches fall short for modern HPC and AI workloads.
In the exhibition area
As AI drives unprecedented infrastructure investment, this fireside chat examines whether data centres remain a compelling asset class. The discussion connects engineering realities -- power constraints, hardware lifecycles, and cooling complexity -- with capital expectations, exploring how technical decisions influence risk, returns, and long-term valuation in Northeast Asia.
Cooling has moved from a facilities concern to a core system-level design constraint. This panel explores how thermal limits at the chip, rack, and system level influence architecture choices, reliability, servicing, and manufacturability, and how open approaches are enabling new cooling strategies for AI-scale infrastructure.
Governments and research institutions across Northeast Asia are investing heavily in national compute capabilities. This session examines how different countries approach HPC system design, ownership, and long-term operation, and how open architectures support sovereign AI ambitions while enabling international collaboration.
Compute performance increasingly depends on how efficiently systems communicate. This technical session explores the evolving role of high-bandwidth, low-latency interconnects -- including InfiniBand, Ethernet, and CXL -- and how networking choices shape system scalability, efficiency, and architectural flexibility.
Behind every AI factory are engineering concerns that rarely appear in investment models: component failure modes, power instability, thermal margins, supply-chain risk, and upgrade cycles. This panel surfaces the practical challenges engineers face when designing and operating AI systems at scale, and why these factors matter to long-term success.
As AI infrastructure evolves rapidly, questions of who defines standards, and how, are becoming increasingly complex. This closing panel brings together stakeholders from standards bodies, manufacturing, silicon, and policy to discuss how HPC standards are shaped today, and what collaborative models will be needed to guide the next phase of global AI infrastructure.
Leading organizations driving the future of HPC & AI infrastructure.
Browse and search the full list of participating organizations.
Thank you to our supporting partner