Malaysia’s transformation into an AI nation gets a shot in the arm as the country is set to host Southeast Asia’s largest AI data center campus in Johor by the end of this year. Currently being built by Singapore-based data center platform Racks Central, the 510 MW campus boasts 100 per cent high-end GPU offerings, with Nvidia Blackwell, Vera Rubin and Rubin Ultra being deployed at successive phases of the RM26.6 billion (US$6.73 billion) project.
w.media’s SEA editor, Jan Yong, sat down with Bobby Wee, CEO of Racks Central, in Kuala Lumpur last week for a chat.
Q: Tell us a bit more about this massive project that’s putting Johor on the map as an AI hub in Southeast Asia.
Wee: We are rushing to build the first AI factory campus in Southeast Asia. We believe that if we execute it right, the rest will be a natural upscaling.
Altogether there will be four facilities – RCJM1, RCJM2A & 2B, and RCJM3. The biggest will be RCJM2A & 2B; together, they account for 300 MW of the campus’s total 510 MW.
The difference with an AI data center is that it can’t expand contiguously: if a building is offering Blackwell, it can’t upgrade to Vera Rubin within that same building.
But an AI campus can. For example, the first building in our Johor campus is meant only for Blackwell, while the second and third buildings are meant for Vera Rubin and Rubin Ultra respectively. This arrangement gives the customer a seamless upward progression in chipset models: they can start with Blackwell, then progress to Vera Rubin and later Rubin Ultra. They enjoy convenience, lower cost and more predictability, which allows them to expand easily.
The first phase will be ready for service by the end of this year, and a lot of heavy lifting is underway right now. We are targeting delivery of the first supercluster to the customer by the end of Q4 this year.
We don’t stop there; every year, we will be delivering multiple superclusters in Johor. So from 2026 through 2028, we are planning multiple tranches of high-density compute, not just Blackwell but also Vera Rubin and Rubin Ultra. The assets are already in place, and we are now doing the design work for Phase 2, to be completed in 2027.
Q: Will you be transitioning to 100 per cent renewable energy?
Wee: Yes, coming from Singapore, where sustainability is prioritised, we naturally import these ideas into our design. Singapore’s DC-CFA2, for instance, mandates 50 per cent renewable energy usage.
We have signed several Memorandums of Understanding to procure renewable energy, primarily solar, as it is still the most practical in terms of supply, price and proven availability. But we are open to other sources, not just hydro but also biomass.
Q: I understand that for Singapore, you’re also planning for another data center?
Wee: Yes, we have submitted our application under the Call for Application (DC-CFA2) for an AI inference gateway data center. If successful, we want to use that asset as the gateway for inferencing with the rest of our assets in SIJORI.
Q: How do you plan to deal with the water situation at your Johor campus?
Wee: Our data center is situated very near the Iskandar Reservoir, so our access to water is readily addressable. We have already signed with Johor Special Water (JSW), so the water supply for RCJM1 and RCJM2A is already catered for.
Q: There are now several other new GPUs in the market. How will this impact the sole use of Nvidia chips in your data center?
Wee: The key thing is not just the chipset but also the GPU programming system. CUDA, a software platform created by Nvidia, has become like an operating system for GPU programming. There’s CUDA for every sector – for example, medical, transportation, manufacturing, publishing, advertising and finance. It becomes very convenient to adopt CUDA since Nvidia’s platform has become widely entrenched in the AI ecosystem. Also, I think the rest of the chipsets are nowhere near Nvidia’s stature.
Q: Do you have any plans to use modular and prefab systems for your data centers?
Wee: No, we will not do modular, but we will definitely design a hybrid model. Speed is essential, but our data centers are designed to be future-proof for newer sets of chips. So the core and shell will be traditionally built.
Q: Your AI campus is located right inside a huge industrial area, right?
Wee: Yes, and we are in Johor for the long haul, so we are investing our time and participating in the local communities.
Q: How has the Middle East conflict affected data centers in Southeast Asia?
Wee: Despite some negative impact, the war has surprisingly resulted in some positives for this region. Firstly, there is an increase in capacity uptake because of the shortfall in Gulf Cooperation Council (GCC) countries, and secondly, there is an increase in liquidity activity due to the interruption of liquidity in the GCC.
There is a sense of urgency whereby investors now want to close deals as soon as possible, unlike previously, when they would take time to discuss terms. The funds have a deadline for disbursement: if the money is not disbursed in the GCC, they have to offload it somewhere else during the fund’s lifespan, otherwise the interest opportunity cost is affected. The speed of offload is a great benefit to us.
And they don’t invest in just a single country but across a regional platform. In Southeast Asia, they are coming for Singapore, Malaysia and Indonesia.
What we see is that capacity meant for the Middle East is now moving to India, Northern Europe and Southeast Asia.
Notes:
# 1: NVIDIA Vera Rubin is a next-generation AI computing platform scheduled to launch in 2H 2026 as the successor to Blackwell, designed for Agentic AI with 10x higher performance-per-watt. It features the custom Rubin GPU, Vera CPU, HBM4 memory, and 6th-gen NVLink, delivering 3.6 exaflops of compute and 35x more throughput per megawatt.
# 2: NVIDIA’s Rubin Ultra is a next-generation AI GPU platform slated for release in the second half of 2027, featuring 1 TB of HBM4E memory and aiming for massive performance gains over Blackwell. Designed for rack-scale AI, it is expected to support advanced agentic AI, long-context reasoning, and large-scale training with significantly higher efficiency.

