Hong Kong is moving into a new phase of relevance in the digital infrastructure landscape — not as a hyperscale volume market, but as a strategic, high‑value platform for AI, capital, and cross‑border connectivity.
This event examines how Hong Kong can finance and deliver AI infrastructure at speed and scale, while addressing the engineering challenges of supporting GPU‑intensive workloads. Industry leaders and experts will explore next‑generation data center design — including high‑density architecture, advanced cooling, and modular infrastructure — alongside strategies for upgrading Hong Kong’s existing data center facilities to support AI.
The program also highlights the evolving AI compute ecosystem in Hong Kong, from the growing role of telecom operators to emerging platforms such as NeoCloud and new enterprise deployment models.
Designed for investors, operators, and technology leaders, this event offers practical insights, strategic perspectives, and high‑value networking with the companies shaping the future of AI infrastructure in Hong Kong. Don’t miss this chance to connect with the decision‑makers shaping Northeast Asia’s digital future.
Arrival, registration and networking breakfast
Welcome by W.Media
Hall Chairman Opening Address
Opening Keynote: Hong Kong Reframed: Addressing Global Investment Perspectives in the AI Infrastructure Era
Financing the AI Build-Out: Delivering Capacity at Speed and Scale
Capital is flowing into AI infrastructure, but execution remains the challenge. Financing projects at scale while managing execution risk is becoming the defining constraint.
The panel will examine how regulatory complexity, power availability and financing structures define real-world delivery of new data center and AI capacity.
Clifford Chance
Pivotale AI
Designing AI Campuses in the Greater Bay Area
Regulatory Developments Across GBA and Northeast Asia
What are the recent policy and regulatory developments shaping digital infrastructure investment? We’ll cover key issues affecting data center and AI infrastructure projects, including zoning, tax and investment incentives, and data residency.
Evolving AI Silicon And Its Impact on Data Center Design in the Greater Bay Area
Networking Coffee Break
Retrofitting Without Regret: Engineering and Capital Strategies to Future-Proof Data Centers
- The most common design missteps in power distribution, cooling integration and structural planning when upgrading legacy facilities for higher-density workloads
- How hybrid cooling, selective hall conversion and power reallocation strategies can unlock additional capacity in vertical, space-constrained environments
- The financial trade-offs between incremental upgrades and a major infrastructure overhaul, including return on invested capital and asset valuation impact
- Practical considerations around tenant migration, deployment timelines and operational risk when modernizing live facilities
Hybrid Cooling Strategies for Vertical Data Centers in Hong Kong
How are hybrid cooling strategies, combining advanced air systems, rear-door heat exchangers and direct-to-chip liquid cooling, enabling selective high-density deployment in multi-storey urban facilities? And what are the engineering considerations in retrofitting existing buildings, from power and thermal coordination to structural limitations and capital efficiency?
Designing Modular AI Infrastructure: From Deployment to Service Delivery
This keynote explores how modular data center design—leveraging prefabricated power systems, liquid cooling, and factory-integrated AI racks—can dramatically accelerate infrastructure deployment across the Greater Bay Area.
More importantly, it addresses how software-defined platforms—including IaaS orchestration, workload scheduling, and multi-tenant management—convert raw GPU capacity into reliable, on-demand AI services.
By combining hardware modularization with software-driven operations, this session will examine how operators can achieve faster time-to-service, improved capital efficiency, and sustained performance in next-generation AI infrastructure.
Networking Lunch in the Exhibition Area
Can Hong Kong be the International Bridge to Global Connectivity and AI Growth?
- How telecom operators across Hong Kong, Shenzhen and Guangzhou are evolving their networks and data center infrastructure to support accelerating AI demand
- The commercial strategies telcos are adopting to participate in the AI compute value chain beyond traditional connectivity services
- Opportunities for collaboration between telcos, colocation operators, NeoCloud platforms and enterprise customers within the GBA ecosystem
- How the region’s telecom infrastructure can sustain AI growth while balancing resilience, regulatory considerations and capital efficiency
Case Study: Building an AI-Ready Telco Core: Compute and Connectivity Strategy
This case study examines how a leading telecom operator is transforming its carrier-grade infrastructure to support AI-driven enterprise and platform workloads across the Greater Bay Area. It explores the integration of high-density compute within telecom data center environments, including enhancements to power architecture, cooling systems and network fabric design.
The session will demonstrate how deep fibre assets, cross-border connectivity and distributed edge locations create a differentiated foundation for scalable AI deployment. It will also highlight how telecom operators can evolve beyond traditional connectivity providers to become strategic enablers of AI infrastructure in a performance-driven and increasingly interconnected compute ecosystem.
Enterprise AI Deployment Strategies: Evaluating Colocation, NeoCloud and On-Premises Models
This panel addresses:
- The key strategic factors enterprises must assess when determining where AI workloads should be deployed
- The operational and commercial trade-offs between colocation facilities, NeoCloud platforms and private on-premises infrastructure
- The impact of hardware procurement cycles, scalability needs and upgrade flexibility on long-term AI infrastructure planning
- How enterprises can build adaptable AI environments that balance performance, cost efficiency and operational control in a rapidly evolving compute landscape
NeoCloud Explained: GPU-Native Platforms, the Future of AI Compute and the New Business Model Behind It
NeoCloud has emerged as a new category of GPU-native platforms purpose-built to deliver high-density AI compute at scale. Positioned between traditional hyperscale cloud and colocation infrastructure, NeoCloud providers offer performance-optimized GPU clusters through flexible, consumption-driven models. This keynote explains what NeoCloud is, how it is architected, and why it represents a structural shift in the economics of AI compute. As enterprises across the Greater Bay Area accelerate AI adoption, NeoCloud is redefining how organizations access scalable compute without bearing the full capital intensity of owning and operating GPU infrastructure.
Greater Bay Area 2030: Capital, Silicon and the Future of AI Infrastructure
This closing panel brings together industry leaders and investors to distill the key insights from the day and assess what lies ahead for GBA’s data center market. What are the demand drivers, risks and returns, and which segments of the GBA ecosystem are best positioned for growth over the next 12 to 24 months?
Sundown Networking Drinks
Interested in Sponsoring?
Showcase your brand to senior decision-makers across data center investment, infrastructure and the digital economy.