Cloud & Datacenters Magazine vol. #10 | Data Center Design Gets Adaptive



Data Center Design Gets Adaptive

On the cover: Adapting data center designs | The quiet prefabrication revolution | Meta’s tent data centers | China’s building boom amid empty halls | Special: Northeast Asia

In this issue:

The prefabrication revolution in data centers
Designing adaptive data centers for India’s heartland
The tent data centers powering the AI race
Factory-built, site-assembled. The new speed of data center deployment
‘China has caught up with AI’
Building data centers when silicon moves faster than steel
Gearing up to fete digital infrastructure’s aces
How smart contracts can protect data center development from market disruption
Building tomorrow’s digital infrastructure today
Join us in Mumbai for CDC and South Asia Awards
China’s data center paradox: Building boom amid empty halls
Beneath the waves: China’s bold bet on underwater data centers
Why Southeast Asia’s data centers need earthquake protection now
Where AI workloads meet sustainable design
Here’s what transpired at KRCDC 2025!

From the Editor’s Desk

H.G. Wells’ old warning applies just as well to our data centers: “Adapt or perish, now as ever, is nature’s inexorable imperative.”

But what does “adapt” even mean? Is it just responding to supply chain disruptions due to geopolitical tensions? In this issue, we look at how operators across the world are innovating to meet unprecedented challenges. Also included is a special Northeast Asia supplement that takes a closer look at the region.

We hope you enjoy reading this issue as much as we enjoyed putting it together.

Deborah Grey

w.media

Meet the team

Deborah Grey
Jan Young, SEA Editor
Simon Dux
Paul Mah

China to set up cloud marketplace to resell excess compute power

The Chinese government has recently decided to set up a national cloud and compute marketplace to resell underutilized capacity as well as implement selective approvals of artificial intelligence data centers.

In the past three years, thousands of data centers have mushroomed in the country, financed by local governments following the launch in 2022 of an ambitious infrastructure project called “Eastern Data, Western Computing”. Data centers would be built in the western regions, where energy is cheaper, to serve the huge computing demands of the eastern megacities.

While the state planner, the National Development and Reform Commission (NDRC), conducts a nationwide assessment of the sector, the Ministry of Industry and Information Technology (MIIT) is working with China’s three state telecom companies on how to create a marketplace platform to sell the excess computing power. They are looking at how to connect the data centers in a network to create the state-run cloud-based platform.

“Everything will be handed over to our cloud to perform unified organisation, orchestration, and scheduling capabilities,” Chen Yili, deputy chief engineer at the China Academy of Information and Communications Technology, a think tank affiliated to the industry ministry, reportedly said at a conference in Beijing in June. China is targeting a 2028 rollout of a standardised interconnection of public computing power nationwide, according to Chen, who did not specify details.

However, some analysts were sceptical, given that the plan involves huge technological challenges. Both industry sources and Chinese policy advisers have acknowledged that the technology to transfer computing power from data centers to users in real time is still underdeveloped.

The ‘Eastern Data, Western Computing’ project targets a maximum latency of 20 milliseconds by 2025, which is necessary for real-time applications such as high-frequency trading and financial services, but this has not yet been achieved, especially by data centers in the western regions. A unified cloud service is also hampered by data centers using different chips, such as Nvidia’s GPUs and Huawei’s Ascend, which makes it difficult to integrate their differing hardware and software architectures.

Chen, however, was optimistic that this is achievable, saying, “Users do not need to worry about what chips are at the bottom layer; they just need to specify their requirements, such as the amount of computing power and network capacity needed.”

China’s ambition for AI supremacy is what kickstarted the ‘Eastern Data, Western Computing’ project. At least 7,000 data centers had been registered as of August, according to government data. Last year alone saw a roughly 10-fold increase in state investment, with 24.7 billion yuan (US$ 3.4 billion) spent, compared to just over 2.4 billion yuan in 2023. Up to August this year, about 12.4 billion yuan (half of 2024’s total) had been invested, mostly in Xinjiang.

Even so, the rush of building across the country has created an unprecedented oversupply, evidenced by over 100 project cancellations in the past 18 months. These were mainly driven by growing fears among the local governments that had financed the construction that they might not see any profit.

“The idea of building data centers in remote western provinces lacks economic justification in the first place,” said Charlie Chai, an analyst with 86Research, adding that lower operating costs had to be weighed against the degradation in performance and accessibility.

The NDRC has already set in motion rules to regulate the industry. For example, new data centers have, since March 20, been required to comply with additional conditions, such as providing a power purchase agreement and meeting a minimum utilisation ratio. Local governments are also banned from participating in small-sized data center projects.

Shanghai, China

SK Telecom launches Sovereign AI infrastructure in South Korea

SK Telecom has announced the launch of its new sovereign AI infrastructure, providing GPU-as-a-Service (GPUaaS) based on the latest NVIDIA Blackwell GPUs. It is expected to contribute significantly to nationwide AI infrastructure expansion and the growth of the South Korean AI industry.

The newly launched platform features one of Korea’s largest GPU clusters, consisting of over 1,000 NVIDIA Blackwell GPUs integrated into a single cluster. In particular, this Haein Cluster has secured a role in the Ministry of Science and ICT’s program for enhancing the foundation of AI computing resource utilization, as part of the “Proprietary AI Foundation Model” project. It will actively contribute to the development of national AI foundation models with global competitiveness.

The AI Foundation Model program aims to strengthen Korea’s AI competitiveness on the global stage and advance the national AI ecosystem. Through this program, SK Telecom plans to develop its Gasan AI data center (AIDC) into a core infrastructure hub for the growth of Korea’s AI industry.

AirTrunk secures US$ 1.75 billion loan, Singapore’s largest ever for a data center

Blackstone-owned data center firm AirTrunk has secured a landmark S$ 2.25 billion (US$ 1.75 billion), Singapore’s largest ever loan and green loan for a data center, to develop a new 70-megawatt (MW) hyperscale data centre, AirTrunk SGP2, in Singapore.

The financing structure entails a green loan with an option to convert into a sustainability linked loan (SLL) at a later stage.

Leading the consortium financing the loan are Crédit Agricole CIB, DBS Bank and ING Bank in partnership with 23 other local and international financial institutions.

“This landmark transaction strengthens AirTrunk’s leadership in sustainable finance and reflects strong market confidence in AirTrunk’s growth and sustainability strategy. The financing structure highlights the strength, depth, and international scale of Singapore’s financial ecosystem,” said AirTrunk Founder & Chief Executive Officer, Robin Khuda.

Located in Loyang, Singapore, the data centre has been designed to achieve a BCA Green Mark Platinum rating and a Power Usage Effectiveness (PUE) of 1.20, which is amongst the lowest data centre PUEs in Singapore. The campus will also feature green concrete and green steel throughout. AirTrunk will partner with technology companies to develop the data center, which will provide cloud and artificial intelligence compute capacity for Singapore and the Southeast Asian region.

The loan aligns with the Technical Screening Criteria of the Singapore-Asia Taxonomy for Sustainable Finance and AirTrunk’s Green Financing Framework, and reflects growing momentum in the shift toward responsible investment, with Singapore at the forefront as a leading global green finance hub.

“AirTrunk’s SGP2 facility sets a new benchmark for responsible infrastructure development in Asia. Its innovative green loan structure, with the option to convert into a sustainability linked loan (SLL) at a later stage, reflects a holistic approach to long-term impact,” said Jasmine Zhang, Crédit Agricole CIB’s Head of Telecom Finance for Asia-Pacific.

AirTrunk SGP2

PDG secures ~US$ 160 million Green Loan for Mumbai DC campus

Princeton Digital Group (PDG), a prominent data center provider with a presence across APAC, has secured ~US$ 160 million in green loans for its flagship MU1 data center campus in Navi Mumbai, India. This brings PDG’s cumulative green loan commitments to US$ 728 million, another step towards delivering on its commitment to sustainable digital infrastructure development across the region.

Located within a 50-acre IT/ITES park in Airoli, Navi Mumbai, MU1 aims to meet the high-density demands of cloud and AI workloads. The first two buildings, with 50 MW of capacity, are operational, and the first phase of the 100 MW expansion is currently under construction. Readers would recall that in late 2024, PDG had announced the expansion of MU1 to a 150 MW campus and a 72 MW data center in Chennai, driving a new US$ 1 billion investment program in India.

MU1 is Mumbai’s first data center to achieve Indian Green Building Council (IGBC) Platinum certification. It is powered by renewable energy, and the company says that the campus offers customers the flexibility of hybrid cooling, strong network connectivity, and hyperscale-grade infrastructure — strategically located next to the Kalwa substation.

Vietnam introduces sweeping reforms for data center industry

Vietnam has introduced comprehensive reforms in the telecommunications sector, which include simplified processes for data centers and cloud providers, bringing regulatory clarity to the industry.

The decree, which took effect on 1 July 2025, aligns with the country’s national digital strategy, which aims to position Vietnam as a regional digital hub. The government hopes that the new rules will encourage the growth of sovereign and commercial data centers in the country, as well as make cross-border operations more secure, including through data localization and data governance requirements.

Major revamps include provincial-level licensing for faster local deployment and a fully digitalised procedural framework.

Provincial People’s Committees can now handle licensing and registration for certain telecom services, including fixed-line, data centers and cloud services, enabling faster turnaround and streamlined procedures for foreign and domestic investors. Firms offering both data center and cloud computing services are now required to submit only a single registration form instead of separate filings.

This reduces regulatory overlap and promotes administrative efficiency. On a macro scale, it helps to integrate hybrid infrastructure models and encourages seamless investment in edge and hyperscale data center facilities.

All applications will now be processed via the National Public Service Portal, or through standard postal or on-site options, ensuring transparency, minimal manual handling and lower operational costs for enterprises. Companies that intend to cease their services or change license terms must now give advance notice and fulfil certain formal procedures. This serves to protect customers and ensure service continuity.

AWS signs PPA with Gentari for 80MW wind power project in Tamil Nadu

Amazon Web Services (AWS) has signed a Power Purchase Agreement (PPA) with clean energy provider Gentari Sdn Bhd for an 80MW wind energy project in the southern Indian state of Tamil Nadu. The project is expected to commence operations in mid-2027, and generate approximately 300,000 megawatt-hours (MWh) of renewable energy annually.

This PPA follows a collaboration agreement signed between the two in 2023 for cost-effective, utility-scale renewable energy projects, and advances Gentari’s vision of putting clean energy into action, while supporting AWS’s goal of reaching net-zero carbon by 2040.

The project will be part of the larger Karur Wind Development located in Tamil Nadu, which is also home to similar projects by Everrenew Energy and Tata Power. India is one of the major data center markets in APAC, with Mumbai and Chennai representing two of the biggest digital infrastructure hubs in the country. Given the huge energy consumption by data centers, and the overall growth in awareness surrounding sustainability, India has ambitious plans to develop solar and wind power.
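For readers who like to check the math, here is a quick, illustrative calculation (our arithmetic, not a figure published by AWS or Gentari) of the capacity factor implied by the announced numbers:

# Rough sanity check of the announced figures (illustrative only).
CAPACITY_MW = 80              # nameplate capacity of the wind project
ANNUAL_OUTPUT_MWH = 300_000   # expected annual generation, per the announcement
HOURS_PER_YEAR = 8_760

max_possible_mwh = CAPACITY_MW * HOURS_PER_YEAR        # ~700,800 MWh if running flat out
capacity_factor = ANNUAL_OUTPUT_MWH / max_possible_mwh

print(f"Implied capacity factor: {capacity_factor:.0%}")  # roughly 43%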

Vietnam

Center3 earmarks US$ 10 billion for 1GW expansion by 2030

Center3, a provider of carrier-neutral data centers and international connectivity solutions in MENA, has announced that it is planning an expansion of its data center infrastructure by 1 GW by 2030. Center3 has earmarked US$ 10 billion for this purpose. The expansion is necessitated by surging regional demand for AI, cloud, and hyperscaler services.

As part of this growth, Center3 is developing new high-density, hyperscaler-ready data centers, with a goal to reach 300 MW of total installed capacity by 2027. These facilities are located across Saudi Arabia, Bahrain, and other key international locations, providing a foundation for AI workloads, hyperscaler cloud expansion, and mission-critical enterprise and governmental operations.

Center3 further said that these data centers will be optimized to support demanding AI and high-performance computing (HPC) workloads, to address evolving AI inference and training requirements. Beyond capacity expansion, Center3 is committed to sustainability, integrating renewable energy, energy-efficient cooling, and responsible resource management into its data center operations. This development is central to reinforcing Saudi Arabia’s position as a strategic digital hub, aligning with Vision 2030’s digital transformation goals and the aim to localize digital content and services in the region.

SoftBank to build two landing stations for E2A subsea cable in Japan

SoftBank, a Japanese multinational investment holding company, will establish two new subsea cable landing stations for the East Asia to North America (E2A) submarine cable system in Japan.

The landing stations will be built in Tomakomai City of Hokkaido Prefecture and Itoshima City of Fukuoka Prefecture. SoftBank revealed that it was selected by the Ministry of Internal Affairs and Communications of the Government of Japan to build the two landing stations as part of an initiative to strengthen the nation’s digital infrastructure by multiplying international submarine cable routes.

E2A is a subsea cable system that will link major digital hubs in Asia and North America, with landings in Maruyama (Chiba, Japan), Toucheng (Taiwan), Busan (South Korea) and Morro Bay (California, USA). Readers would recall that on March 21 this year, Japan’s SoftBank had signed a contract to join a consortium with South Korea’s SK Broadband, Taiwan’s Chunghwa Telecom, and Verizon Business Global LLC to build the E2A cable system.

Shedding light on the choice of locations, it said that the two facilities will help diversify geographical risk. “Hokkaido (the northernmost main island) and Kyushu (the southernmost main island) – particularly Fukuoka Prefecture – are expected to see an increased concentration of digital infrastructure as they serve as core regions capable of substituting and complementing infrastructure already established in the central Tokyo and Osaka metropolitan areas,” it said in a statement.

It further explained, “Tomakomai City and Itoshima City are located in optimal locations for routes that connect Asia and North America. Their sufficient distance from existing landing stations—including the SoftBank Maruyama Landing Station located in Minamiboso City, Chiba Prefecture near Tokyo—also makes them optimal for geographical risk diversification.”

Image courtesy: SoftBank


The prefabrication revolution in data centers

Not the biggest, but the fastest. How prefabrication is reshaping the data center race.

Data centers were once seen as nothing more than “four walls and a shell” – borrowing from commercial real estate jargon to describe a weather-tight exterior envelope that keeps the elements out. Building this shell is the easy part. The real challenge lies in the laborious work of fitting out: a sequential process where mechanical, electrical, and plumbing contractors take turns installing their systems.

But as AI transforms the digital landscape and pressure mounts to drastically reduce build times, traditional norms are quickly eroding. The spotlight has shifted from the structure itself to the increasingly complex piping and cabling systems within. How can data center designers adapt a rigid, slow-to-build paradigm for a future where speed is paramount?

The urgency of speed

The data center industry looked to the construction industry for inspiration to reduce build times. For a start, precast concrete panels and modular building components could be manufactured off-site while foundations are being prepared, dramatically shortening the shell phase. This parallel approach has already proven itself in warehouses and commercial buildings, where entire wall sections arrive on trucks ready to be craned into position. Data center operators did this for many years, though the fit-out phase was still manually done and slow.

An attempt to further speed things up came in the form of building data centers into ubiquitous shipping containers, a concept most visibly championed by IO Data Centers. Rather than constructing traditional white space for a data hall, IO developed purpose-built steel modules similar in size to 40-foot shipping containers but engineered specifically for IT equipment. Each module came complete with fire suppression, security, integrated cooling and power systems, ready to be positioned on a concrete pad or within a shell building. Plug in power leads and chilled water, and everything is ready to go.

Unfortunately, the container form factor has several limitations. For one, the steel structure’s tight confines constrained airflow paths and reduced surface area, which restricted workload density. Specifically, the limited space complicated the routing of chilled water pipes to individual racks, while narrow aisles made hardware maintenance slow and difficult. After a few years, IO quietly pivoted back to conventional colocation halls, where its solutions fared better. Eventually, the company was carved out and its assets divested to various parties.

All images via Freepik

This is not to say that containerized solutions are dead. Today, various equipment vendors continue to make a full range of containerized solutions for niche industries and specialized use cases. These range from edge data center deployments to installations in remote locations that are difficult for traditional construction crews to access.

Assembly in days, not months

As data centers grew bigger, other innovations took place. Equipment makers started redesigning the complex mechanical, electrical, and plumbing (MEP) systems such that they arrive ready to be wired up instead of requiring months of on-site assembly. Power skids and chilled-water plants arrive on-site finished, turning months of field labor into mere days of crane setting. These prefabricated modules also solved the coordination nightmare of having multiple contractors working in the same limited space.

Elsewhere, hyperscalers started to demand standardized designs that could be replicated globally. And they started seeing data centres as products that conform to required specifications, not messy construction projects. This gave rise to reference designs, which are essentially blueprints that can be deployed anywhere with minimal customization.

The final leap follows naturally: Why not prefabricate entire data center modules at the factory? These could be tested before being packaged and shipped complete to their destination. Because these modules are built larger than traditional shipping containers, they avoid the space limitations that plagued earlier containerized solutions. With customers looking to build entire data centers rather than individual components, vendor lock-in becomes less of a concern. Most importantly, everything arrives pre-tested and validated against a reference design, effectively shifting quality control from the chaotic construction site to the controlled factory environment.

Site preparation is crucial here. Foundations must be set properly, utility connections built, and access roads prepared for oversized loads. The modules themselves, some weighing hundreds of tons, must be shipped in and connected in the right sequence. Finally, commissioning teams still need to verify that integrated systems work together as designed, testing failover scenarios and load balancing across the facility.

When approvals unlock speed

Despite the advanced prefabrication techniques today, building a modern data center is a lot more than shipping in prefabricated components and plugging them together like Lego blocks. This is where fast-track access to resources such as power and water can make a big difference.

In countries such as Malaysia, the authorities have established dedicated data center parks with pre-approved power allocations and ready utility connections, allowing operators to bypass years of permitting and infrastructure development. Indeed, Chinese data center operator BrightRay, using prefabricated parts and the fast-track approvals of Johor’s Sedenak Tech Park, managed to build its new data center in just eight months.

For a growing number of data centres, prefabrication has shifted from clever workaround to default strategy. When this production-line approach is paired with policy enablers, multiyear builds collapse to mere months, giving operators the speed edge the AI era demands. In the race to deploy AI infrastructure, the winners won’t be those who build the biggest data centers – but those who build them the fastest.

Designing adaptive data centers for India’s heartland

As digital services surge in India’s heartland, smaller towns have emerged as the new frontier for data center development.

There’s more to India’s data center market than the glitzy metros of Mumbai, Chennai, Hyderabad, Delhi NCR, and Bangalore. Digital services are being consumed in Meerut, Cuttack, Haridwar, Darjeeling, and Bhopal too. This necessitates the deployment of reliable digital infrastructure in the heart of India.

But what will a data center look like in these parts of the country, especially given concerns surrounding scalability and sustainability?

India’s tech-savvy heartland

Even today, nearly three-quarters of India’s population lives in non-metropolitan areas such as tier II and III cities, semi-urban and rural areas. In fact, by some estimates, rural India could be home to two-thirds of Indians. But even these citizens are consuming data on par with their metropolitan counterparts.

With the ubiquity of affordable smartphones, Indians are using digital services at record rates. Today, even villagers are using online services for everything from checking weather forecasts, to maritime and agricultural advice, buying and selling farm produce, tele-medicine, e-governance, and net banking. And then there are the many social media apps that have given voice to the small-town influencer.

This is why data centers are needed even in the heartland. Does this mean we will have to design data centers differently? Do we need a comprehensive deployment plan? The demand will only grow, so these facilities need to be flexible and adaptive.

But, what exactly is an adaptive data center in the context of an ever-changing digital landscape in a country like India?

“An adaptive data center is not merely a facility that can scale; it is an intelligent ecosystem that can reconfigure itself dynamically to meet changing demands while maintaining optimal efficiency, reliability, and sustainability,” says Surajit Chatterjee, Managing Director & Head, Data Centres - India, CapitaLand Investment. “This concept goes beyond traditional notions of flexibility to embrace continuous evolution as a core design principle.”

Some upcoming non-metro data center markets

At present, several large and small data center companies have invested in facilities in places like Patna in Bihar, Indore in Madhya Pradesh, Jaipur in Rajasthan and Lucknow in Uttar Pradesh. It is noteworthy that these states until recently bore the ignominy of being labeled as BIMARU (an acronym that sounds like the Hindi word for sick or unwell) states given decades of economic stagnation. But with increased globalisation, change becomes inevitable.

Now, RackBank has made Indore its home base, and is also investing in an AI data center park in Raipur, Chhattisgarh. This project is being built at an initial investment of Rs 1,000 crores, and will come up on 13.5 acres of land. Construction will span four phases, with an initial capacity of 80MW in Phase 1, scalable to 160MW by Phase 4.

RackBank launches AI data center park in Nava Raipur, Chhattisgarh | Image courtesy: RackBank Datacenters Pvt. Ltd.

Meanwhile, Yotta Data Services already has Yotta G1, a 2MW cloud data center, up and running in GIFT City, an upcoming global digital hub in Gujarat. Another major player, CtrlS Datacenters, has invested in edge facilities in Patna and Lucknow.

Data centers are also being built in Mysore, Coimbatore, Bhubaneshwar and Vizag. Some are planned as hyperscale facilities, some are edge data centers, some are even being built in shipping containers. Small modular reactors are being considered as power sources, indicating the growing consciousness surrounding decarbonization.

The heartland gets a billion-dollar glow up

Readers would recall that earlier this year, Techno Digital Infra Pvt. Ltd., the wholly owned digital infrastructure arm of Techno Electric & Engineering Company Ltd. (TEECL), had announced plans to develop hyperscale and edge data centers totaling 250MW across India, backed by an ambitious investment plan worth US$ 1 billion. The company has also entered into a partnership with RailTel Corporation of India, a public enterprise under the Ministry of Railways, to build 102 edge data centers across the country.

This project aims to bring low-latency computing closer to users in Tier II and Tier III cities.

“For me, an adaptive data center is more than just physical infrastructure, it’s a living engine of digital transformation that evolves continuously to meet changing demands,” says Amit Agrawal, President, Techno Digital. “Realizing this vision requires close partnership with local communities, investment in skilled talent, and a commitment to energy efficiency and operational resilience. Our hope is to build centers that are not only scalable and sustainable but also drive regional growth and unlock new opportunities for millions.”

Data centers as public utilities

Readers would recall that in 2016, the UN Human Rights Council released a non-binding resolution condemning the intentional disruption of the internet by governments. Many countries in the EU have passed legislation granting their citizens the right to access the internet, and some have even recognised it as a basic human right.

In India, while the Right to Internet has not been officially declared a fundamental right, the importance of internet access has been recognised by courts across the nation, especially in the context of the fundamental right to freedom of speech and expression, as well as the right to access government services, welfare schemes and benefits.

In light of this, one wonders if in the near future data centers would become as common as utility poles and post boxes of yesteryears. Data center industry veteran Sandeep Dandekar certainly thinks so.

“Through the last fifteen-odd years, we moved from a few bulky telephone exchanges and wired landlines to millions of mobiles connected everywhere through 100s of towers. Data centers will most certainly follow the same trajectory,” feels Dandekar. “From today’s massive, centralized facilities, we are moving toward a ubiquitous fabric of nano, micro, and metro-scale nodes — as common as mobile towers, yet infinitely smarter. They won’t just store and serve; they will sense, decide, and power human life in real time,” he explains.

“Just imagine - could a driverless car run safely through an Indian town, say 1,000 kilometers away from a metro city, without such a deeply penetrated digital backbone?” he asks, giving an example of a technology that is actively developing across the world. “The data center of the future is no longer just a huge building in a large city; it will be the civic utility pulsing through the hub and spoke connecting every city, town, and highway - as vital as air and electricity.”

So even if you were traipsing between metros in the above-mentioned driverless car, you would pass through the heartland, and this is where we need to build a robust and reliable ecosystem of adaptable digital infrastructure.

The future doesn’t lie in the pages of fiction. The future is here. It is unfolding as we speak - not only in the cyberpunk cities of China, the technology hubs of the Middle East, or the cobblestone streets of Europe, but also in the town squares across India’s heartland, a part of the world that can no longer be left underserved.

Representational image of railway tracks in India | Image by Dr. Chinchu C. via Wikimedia Commons
Amit Agrawal, President, Techno Digital
Sandeep Dandekar, Data Center Expert

The tent data centers powering the AI race

Meta is betting that fabric structures and fresh air cooling can win the AI race through sheer speed. But can they stand the test of time?

Data centers are typically associated with fixed, unremarkable structures on the outside, but times are changing. The AI arms race is intensifying, with Sam Altman saying OpenAI plans to spend trillions of dollars on data center buildout to stay ahead of the race. So how do the other big tech companies in the running plan to beat that?

One novel strategy is to deploy simple structures that do not require months of civil engineering work to build. These structures contain high-tech computing infrastructure and enable faster scaling for AI demands. That’s what Meta’s CEO Mark Zuckerberg is thinking – and it seems to work. These easy-to-build structures utilise aluminium frames (aerospace grade, apparently) covered with robust fabric and look like tents, hence are called “tent” data centers.

But don’t let these simple structures deceive you into thinking that they are just temporary one-off structures. An experiment in 2008 by two Microsoft design pioneers proved that racks of servers inside a tent structure can last for several months with no server failure or downtime. So, contrary to popular belief, tent-like structures are tougher than they look.

While Meta intends these camp-like structures only as a stopgap while its multiple multi-gigawatt data centers are completed, they are proof that such facilities can be deployed at scale by anyone hoping to build capacity as fast as possible. Built within weeks (three weeks is the fastest estimate), they can be immediately put to work as GPU (Graphics Processing Unit) hubs, with billions of dollars worth of compute inside, no less.

These ultra-light tents come with many perks – they’re hurricane-proof (as claimed by Zuckerberg), cheap, utilize basic HVAC (Heating, Ventilation, and Air Conditioning) optimized for free air cooling, and can easily be dismantled or reconfigured. Incidentally, how often do you encounter a hurricane?

They also require far less permitting and regulatory compliance, a great advantage considering that regular data centers can take months to get their permits approved.

Hence, despite all their disadvantages – such as susceptibility to heat in summer, and the lack of backup generators or robust power infrastructure, which reduces resiliency – tent data centers do work, and very well too, with their prefabricated power and cooling modules. Both their estimated liquid cooling efficiency and uptime are tagged at about 95 per cent.
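To put that uptime estimate in perspective, here is a simple, illustrative calculation (our arithmetic, using standard Uptime Institute availability figures as the comparison points) of what different availability levels mean in hours of downtime per year:

# Illustrative comparison of annual downtime at different availability levels.
HOURS_PER_YEAR = 8_760

availability_levels = {
    "Tent DC (approx. 95% estimate)": 0.95,
    "Tier III (99.982%)": 0.99982,
    "Tier IV (99.995%)": 0.99995,
}

for label, availability in availability_levels.items():
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{label}: about {downtime_hours:.1f} hours of downtime per year")

# 95% availability allows roughly 438 hours of downtime a year, versus about
# 1.6 hours for a Tier III design and under half an hour for Tier IV.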

In Meta’s case, power comes from the tech giant’s on-site substations (estimated 400 MW), although some tents might get it from the grid depending on how Meta manages its power distribution. Enabling massive AI compute capacity (an estimated US$ 2-3 billion under each tent, with 20,000-plus GPUs per site) to go live in weeks would be a huge competitive advantage – speed without sacrificing too much efficiency and resiliency.

With some observers thinking that AI compute hardware will only last three to four years anyway before technical obsolescence renders it useless, this could be one of the most practical solutions coming out of the industry, if the disadvantages can be resolved or minimized.

Proponents of tent data centers enthuse that today’s “crazy” will become tomorrow’s standard, and that to win the AI race, you have to trade redundancy for speed.


“On the other hand, this strategy is not suitable for all, and this may well be a proof of concept to show the ‘art of the possible’,” opines James Rix, JLL’s Head of Data Centre and Industrial, Malaysia and Indonesia.

Be that as it may, innovation is the lifeblood of the digital infrastructure industry, and with AI accelerating demand, we could very well see massive improvements over the current limitations of this new type of GPU housing. It could well herald a new era of rapid modular deployment where speed to market is paramount - because delays can cost billions.

Tent vs Hyperscale Data Center

Tent
Temporary for the AI arms race; huge potential for widespread use
Hurricane-proof and weather-proof, but susceptible to extreme heat

Hyperscale
Long-term structure for AI & cloud
Hardened construction that can last for decades; Tier III & IV standard
Diesel generators, batteries, substations, redundancy built in
Reinforced walls, multi-layer security, 24/7 guards

Meta Tent DC
Photos courtesy: SemiAnalysis

Factory-built, site-assembled. The new speed of data center deployment

Here’s how BrightRay went from bare ground to operational data center in eight months.

In just 22 days, three floors of a 16.2MW data center were craned into place in Johor, Malaysia. This was the most dramatic phase in the construction of MY-01, a project by BrightRay that went from bare ground to a fully operational facility in a record-breaking eight months. This feat of speed comes amid an unprecedented data center boom. In 2020, Johor had no operational data centers. Today, more than 487MW of IT load is online, with another 5.8GW planned, according to DC Byte. While many new facilities are going from groundbreaking to operational in barely more than a year, BrightRay’s eight-month timeline sets a new benchmark for what’s possible when you rethink construction from the ground up.

Breaking the speed barrier

How did BrightRay pull this off? By building Malaysia’s first fully prefabricated modular data center. MY-01, the first of three data centers planned for the BrightRay campus in Sedenak Tech Park, launched in May this year. The three-story facility delivers 16.2MW of IT load with a PUE below 1.4, designed for traditional air-cooled workloads of up to 12kW per rack. When fully built out, the campus will provide 85.8MW of capacity across three facilities, with the next two buildings catering to liquid-cooled workloads.
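As a rough, back-of-the-envelope illustration (our arithmetic, not BrightRay’s published figures) of what a PUE below 1.4 implies at MY-01’s 16.2MW of IT load:

# Back-of-the-envelope estimate of facility power from IT load and PUE (illustrative).
IT_LOAD_MW = 16.2   # stated IT capacity of MY-01
PUE = 1.4           # upper bound quoted for the facility

total_facility_power_mw = IT_LOAD_MW * PUE           # PUE = total facility power / IT power
overhead_mw = total_facility_power_mw - IT_LOAD_MW   # cooling, power conversion losses, etc.

print(f"Total facility power: about {total_facility_power_mw:.1f} MW")  # ~22.7 MW
print(f"Non-IT overhead:      about {overhead_mw:.1f} MW")              # ~6.5 MW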

MY-01 itself employs proven technologies: fan wall cooling systems for efficient air management, VRLA batteries for uninterruptible power supply, and thermal storage tanks for cooling optimization. These conventional choices complement the unconventional construction method. The prefabricated approach allowed BrightRay to compress what typically takes 18-24 months into just eight months, from foundation to operational facility, setting a new benchmark for data center deployment speed in Malaysia.

What stood out during a recent site visit was how ordinary MY-01 appears. While hints of its prefabricated origins exist, they’re subtle. Branch corridors feel slightly narrower than in traditionally built facilities, creating a marginally more compact sense of space. The overall design and layout are also notably square. But these are minor trade-offs for the dramatic reduction in construction time.

The factory advantage

BrightRay’s speed advantage comes from more than a decade of experience, beginning with its acquisition of IBM’s data center division in China. Today the company is China’s largest data center EPC contractor and has brought that prefabrication expertise to Malaysia. Most prefabricated data centers rely on container-based modules with their inherent height limitations. BrightRay chose another route: prefabricating the entire structure, including walls and shell, which are then shipped and assembled on site. This explains MY-01’s unusually generous ceilings. Each floor ranges from six to eight meters high, compared to the cramped interiors typical of container designs.

Photos courtesy: BrightRay

What sets BrightRay apart is the extent of factory production. About 90 percent of components are built and tested under controlled conditions. Workers fabricate the steel structure, install pipelines and cable trays, mount critical equipment, and complete module testing before anything ships to Malaysia. On-site work is limited to foundations, final connections, and waterproofing.

The factory approach offers unexpected benefits beyond speed. Workers operate in controlled conditions rather than on exposed construction sites. Local air pollution drops significantly since most fabrication happens elsewhere. Fewer on-site workers means less disruption to surrounding areas. The prefabricated structure can even be disassembled and relocated after its useful life, though whether anyone would actually do this remains theoretical.

Of course, what’s unknown is whether shipping practically the entire data center piecemeal costs more than building it on location. Moreover, this presupposes an existing factory to build and fit out the various components.

Putting it all together

Regardless, this process marks a clear departure from conventional construction, where nearly everything is built on site and exposed to weather disruptions and coordination delays. At MY-01, large prefabricated sections are craned into place, compressing timelines and delivering a facility far faster than traditional methods.

The timeline tells the story. Foundation work took 70 days, followed by just 22 days to assemble three floors of prefabricated modules. Another month was spent on waterproofing and external finishing. In all, the main structure of MY-01 was completed in about three months from piling to enclosure. Additional steps remained, such as pouring concrete in specific areas, integrating systems, and installing fire-rated components, but the bulk of the build was already in place.

Local sourcing played a supporting role. The diesel backup generators sit on a multi-story steel platform built from materials supplied in Malaysia, while most of the structure itself was shipped in prefabricated form from China.

Whether this represents the future of data center construction or one company’s push for speed, the efficiency gains are striking. Against the backdrop of traditional builds that often stretch 18 to 24 months, MY-01’s rapid completion makes conventional approaches look increasingly outdated.

The new construction playbook

BrightRay claims its strategy can deliver a 30 to 60-megawatt data center in as little as six months under the right conditions. For hyperscale customers racing to deploy AI infrastructure, this speed matters. Waiting more than a year for a new facility means falling behind in markets that shift by the quarter.

The prefabricated model still allows for customization, with designs adapted to client requirements much like traditional builds. But there’s a less obvious advantage: flexibility. Because the structure is manufactured in modules, an entire facility could theoretically be dismantled, moved to a new site, or even sold for scrap value. This possibility sets it apart from permanent concrete buildings and hints at new ways to think about data center assets.

BrightRay’s MY-01 demonstrates how prefabrication can transform data center construction, delivering speed, flexibility, and new possibilities. Whether this becomes the industry norm or remains a specialized model, the project underscores a simple truth: in a world where compute capacity is the new currency, the ability to deploy data centers in months rather than years could reshape the competitive landscape.

How smart contracts can protect data center development from market disruption

As market volatility meets long-term commitments, sophisticated contract design becomes crucial for protecting data center investments.

When markets change rapidly, contracts can feel disconnected from reality. In data center development, this disconnect is particularly dangerous: the market evolves at breakneck speed due to AI demand, supply chain constraints, and shifting regulations, yet contracts often span decades.

This tension increasingly leads to disputes as parties seek to vary or exit agreements that no longer make commercial sense. Understanding the key pressure points, and building contracts that anticipate them, has become essential.

Three disruptions reshaping development

Today’s data center development faces pressure from multiple directions, but three issues consistently trigger contractual conflicts.

First, the power market has become increasingly volatile. AI’s massive energy demands, limited grid capacity, and growing competition have created a standoff. Notably, utility companies trying to justify investment in desperately needed power plants and grids are seeking long-term offtake agreements. However, developers and owners remain hesitant to commit to long-term arrangements due to uncertainty over the trajectory of the data center market. This delays critical capacity additions, means that new projects stall, and puts businesses at all levels at risk of breach of contract for failure to deliver.

In addition, hardware shortages have reached crisis levels. Heavy-duty power transformers now require two-to-four-year lead times, while backup generators carry two-to-three-year waiting lists. Rapid technological shifts, particularly the transition to liquid cooling, risk making today’s orders obsolete on arrival. This has resulted in a number of legal issues, as buyers and sellers look to terminate contracts that are no longer advantageous, and hardware suppliers withdraw from distribution agreements to sell directly, taking advantage of the seller-friendly market.

Finally, community resistance has evolved into sophisticated legal challenges. Local groups now raise complex procedural objections alongside environmental concerns. In one recent case, a judge voided a massive data center corridor’s rezoning because the county had failed to properly advertise public hearings and had disregarded community input. Reports indicate US$ 64 billion in US data center projects have been blocked or delayed by local opposition.

*This exclusive article was co-authored by:

Building resilience through contract design

While these challenges seem daunting, proactive legal steps can maximize resilience against these issues.

Managing variations has become critical in an environment of inflation and limited supply. In such an environment, suppliers often face unforeseen increases in raw material, energy, or labor costs, leading to disputes where suppliers demand contract amendments or seek to terminate on technical grounds.

The solution is to ensure that any variations occur on an agreed basis by including clauses that require variations to be agreed in writing and signed. This minimizes the chance that any variation occurs as a result of a representation not intended to have legal force. Furthermore, where price may vary (e.g. with raw materials), consider using multi-directional price variation clauses to enable price to be tied to objective public cost indices, to create a more balanced risk distribution and to avoid costly legal battles.
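To make the mechanism concrete, here is a hypothetical sketch of how an index-linked, multi-directional price adjustment might work; the function name, index values and prices are illustrative assumptions, not drawn from any actual contract:

# Hypothetical index-linked price adjustment, as a multi-directional variation
# clause might specify: the price moves up or down with an agreed public cost index.
def adjusted_price(base_price: float, base_index: float, current_index: float) -> float:
    """Scale the contract price by the movement of the reference cost index."""
    return base_price * (current_index / base_index)

# Example: equipment priced at US$10 million when the reference steel index stood at 120.
print(adjusted_price(10_000_000, base_index=120.0, current_index=138.0))  # 11500000.0 (index up 15%)
print(adjusted_price(10_000_000, base_index=120.0, current_index=108.0))  # 9000000.0 (index down 10%)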

Structuring around impossibility is equally important when external factors impact contracts. Parties may argue that obligations are no longer feasible, relying on force majeure or material adverse change clauses, or legal concepts such as frustration. COVID-19 and the Russia-Ukraine war were followed by a wave of claims under such provisions.

The solution is to clearly define both what is deemed force majeure and what happens if force majeure occurs. When deciding on these points, it is important to be as thorough as possible. Force majeure provisions are narrowly interpreted, so if a certain type of event is not covered (e.g. sanctions) then it will not fall within the provision, and if the agreed result of force majeure is ill-defined, parties may be both unable to act and unable to terminate. This is particularly important because boilerplate force majeure provisions are not well tailored to the needs of data centers.

Delivery and performance obligations require similar attention in a dynamic market where suppliers may juggle their obligations to maximize profits, causing delays elsewhere. The solution is to define precise delivery schedules with clear, enforceable penalties for non-compliance.

Penalties, often in the form of liquidated damages, should provide a pre-agreed remedy for delays and breaches without the need for costly litigation. Contracts should also define a party’s obligations to use “reasonable”, or even “best”, commercial endeavors to carry out its obligations, e.g. to ensure that suppliers make a legitimate, and not a token, attempt to source alternative goods in the event that their default supply isn’t available.

The path forward

Data centers straddle long-term infrastructure and the fast-paced AI economy, making contracts liable to become broken or unbalanced long before they expire. Buyers and sellers must therefore be proactive: contracts should be fortified with detailed risk allocation mechanisms, allowing for redundancy, flexibility and penalties. By embedding these protections, stakeholders can avoid lawsuits and preserve relationships throughout their supply chains, to the benefit of all involved. In a market where delays cost millions and US$ 64 billion in projects hangs in the balance, sophisticated contract design has evolved from a legal nicety to a business necessity.

Building tomorrow’s digital infrastructure today

The digital landscape is undergoing a seismic transformation. From AI-driven applications demanding unprecedented computational power to sustainability imperatives reshaping operational strategies, data centers face complex challenges that require fundamentally new approaches.

As someone who has witnessed the evolution of India’s digital infrastructure from the front lines, I believe we are at an inflection point where adaptability is not just an advantage, it is essential for survival.

The imperative for adaptation

The traditional data center model, designed for predictable workloads and linear growth, is rapidly becoming obsolete. Today’s data centers must contend with AI applications that can consume 30 to 40 times more power than standard enterprise workloads, edge computing requirements that demand ultra-low latency, and sustainability mandates that require dramatic reductions in carbon emissions. This convergence of forces demands a new paradigm: the adaptive data center.

An adaptive data center is not merely a facility that can scale; it is an intelligent ecosystem that can reconfigure itself dynamically to meet changing demands while maintaining optimal efficiency, reliability, and sustainability. This concept goes beyond traditional notions of flexibility to embrace continuous evolution as a core design principle.

The four pillars of adaptive design

1. Robust Infrastructure Architecture

The foundation of any adaptive data center lies in its ability to support variable and evolving requirements without compromising performance. This begins with purposeful design that enables rapid deployment and flexible configurations. Modern data centers can reduce deployment timelines significantly, providing the agility needed to respond to market demands.

More importantly, adaptive infrastructure must support variable power densities seamlessly. Advanced facilities now incorporate N+N redundant power systems with dual power distribution options, enabling the accommodation of mixed workloads from traditional enterprise applications to high-density AI clusters. The ability to provide up to 2,000 kg per square metre floor loading capacity ensures that even the heaviest computational equipment can be supported without compromise.

2. Intelligent Operational Systems

True adaptability requires intelligence embedded throughout the infrastructure stack. Modern data centers leverage AI and ML not just for customer applications, but for optimizing their own operations. Advanced cooling systems with N+N redundant configurations maintain optimal conditions across variable density deployments, while intelligent energy management systems optimize power consumption in real-time.

This intelligence extends to thermal management, where sophisticated cooling technologies, including options for both air-cooling and liquid cooling solutions, ensure that facilities can adapt to the thermal requirements of diverse workloads while maintaining energy efficiency.

3. Sustainability-First Operations

Sustainability is no longer a secondary consideration; it has become a primary driver of data center design and operation. Adaptive data centers integrate sustainability into their core architecture rather than treating it as an afterthought. This includes renewable energy integration through solar panel installations, water economization systems, and high-efficiency chillers that minimize environmental impact.

The most forward-thinking facilities are pursuing LEED Gold certification and incorporating intelligent energy and infrastructure management systems that continuously optimize resource consumption. These capabilities are essential in India’s evolving regulatory environment, where sustainability credentials are increasingly important for both compliance and competitive advantage.

4. Comprehensive Connectivity Solutions

The future of digital infrastructure requires seamless integration across multiple connectivity options. Adaptive data centers must provide carrier-neutral environments with diversified fiber entry points and dedicated data shafts. This approach enables customers to optimize connectivity based on latency, cost, and performance requirements without being locked into specific provider relationships.

In India’s geographically diverse market, this capability is particularly valuable. Strategic proximity to multiple cable landing stations ensures low-latency connectivity, while multiple connectivity options provide the redundancy needed for mission-critical applications.

Key technologies driving adaptation

These adaptive principles are enabled by several key technologies working in concert. Advanced cooling systems, offering both hot/cold aisle containment and liquid cooling, enable higher-density deployments while improving energy efficiency. Gas-insulated switchgear substations provide reliable power delivery at transmission levels, ensuring consistent performance even during peak demand periods.

Perhaps most importantly, the integration of comprehensive security systems with a minimum of seven-layer security protocols and 24/7 monitoring capabilities provides the foundation for secure, adaptive operations that can evolve with changing threat landscapes.

The Indian opportunity

India is uniquely positioned to lead the adaptive data center revolution. The country’s rapid digital transformation, supportive government policies, and growing demand for diverse computational services create an ideal environment for adaptive infrastructure deployment.

The global data center market is expected to grow significantly from 2025 to 2032, exhibiting a CAGR of 11.7% during the forecast period. The data center capacity of India was estimated at 1.1 GW in 2024, and demand is expected to reach around 6 GW by 2033. Much of this demand is expected to be met through large-format, hyperscale-ready infrastructure in core markets, alongside edge-ready capacity in Tier 2 and Tier 3 cities for latency-sensitive workloads, with projections of a 25-30% CAGR for the sector.
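For readers who want to see how such projections translate into a compound annual growth rate, here is a quick, illustrative calculation using the capacity figures above (capacity only; the 25-30% CAGR cited for the sector is a broader market projection):

# Illustrative CAGR calculation from the capacity figures quoted above.
start_capacity_gw = 1.1   # estimated Indian data center capacity, 2024
end_capacity_gw = 6.0     # projected demand, 2033
years = 2033 - 2024

cagr = (end_capacity_gw / start_capacity_gw) ** (1 / years) - 1
print(f"Implied capacity CAGR, 2024-2033: {cagr:.1%}")  # roughly 21% a year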

There is an unprecedented opportunity to build adaptability into the foundation of India’s digital infrastructure. The regulatory environment is also favorable, with data center policy frameworks at both center and state levels encouraging sustainable and efficient operations. This alignment between policy objectives and operational requirements creates a powerful incentive for adaptive design.

Building for tomorrow

Realizing this adaptive vision requires addressing several implementation challenges. The complexity of managing dynamic infrastructure requires new operational models and skill sets. Traditional metrics like uptime and capacity utilization must be supplemented with adaptability and efficiency measures. Investment models must account for the higher initial costs of adaptive infrastructure while recognizing the long-term operational advantages.

Perhaps most significantly, the industry must focus on solutions that provide genuine flexibility without compromising reliability. The most successful adaptive data centers will be those that combine robust engineering with operational excellence.

The data centers we build today will serve India’s digital economy for decades to come. By embracing adaptive design principles now, we can ensure that these facilities will remain relevant and efficient as technology continues to evolve. This is not just about future-ready infrastructure; it is about creating the foundation for India’s continued leadership in the global digital economy.

The transition to adaptive data centers represents both a significant challenge and an extraordinary opportunity. Organizations that embrace this transformation will find themselves with competitive advantages that compound over time: lower operational costs, improved sustainability performance, greater customer satisfaction, and the ability to capitalize on emerging opportunities as they arise.

As we stand at this critical juncture, the question is not whether adaptive data centers will become the norm, but how quickly we can make this transformation. The organizations and countries that act decisively will define the next era of digital infrastructure, while those that wait will find themselves struggling to catch up.

At CapitaLand Data Centre (CLDC), we are already implementing many of these adaptive principles across our facilities in Mumbai, Hyderabad, Chennai and Bangalore. Our comprehensive approach, from N+N redundant power systems and advanced cooling solutions to carrier-neutral connectivity and comprehensive security protocols, exemplifies the kind of robust, flexible infrastructure that India’s digital transformation demands. Through strategic investments in sustainability initiatives including LEED Gold certification targets, intelligent energy management, and renewable energy integration, we are not just building data centers; we are creating the adaptive digital infrastructure foundation that will power India’s next chapter of technological leadership.

The future of data centers is adaptive, intelligent, and sustainable. The time to build that future is now.

2026 Events Calendar

July – Greater Bay Area Cloud and Datacenter Convention, Hong Kong

July – Interconnect World Forum, Hong Kong

July – NEA Awards, Hong Kong

20 August – Vietnam Cloud and Datacenter Convention, Ho Chi Minh City

24 August – BALI Week: w.media FOCUS Connect Bali

11 September – Korea Cloud and Datacenter Convention

22 October – Malaysia Cloud and Datacenter Convention, Kuala Lumpur

How Leader Energy is helping data centers operators achieve true sustainability

Southeast Asia’s renewable energy landscape is changing fast, and data center operators are scrambling to secure green power for their facilities

Within the span of a few years, renewable energy has shifted from a nice-to-have to a basic requirement for attracting hyperscaler tenants and meeting sustainability targets.

In Malaysia, this shift has taken on new momentum with the introduction of the Corporate Renewable Energy Supply Scheme (CRESS) last year. The program enables data centers to offset 100% of their monthly consumption through direct procurement of renewables, a significant evolution from virtual power purchase agreements. Crucially, this regulatory framework represents a fundamental shift in how renewable energy is deployed and managed across Malaysia’s grid.

The renewable revolution reshaping Malaysia’s grid

At its core, renewable energy has fundamentally changed how we think about power generation. Pricing, predictability, and grid management – all have been transformed. Compounding this change, the economics have shifted dramatically through rapid technological advancement and plummeting energy storage costs.

These changes naturally bring new complexities. Unlike traditional thermal plants that deliver consistent baseload power with 80% capacity factors, renewable sources are inherently intermittent. Solar, for instance, operates at just 18-20% capacity factor, generating power only 4-6 hours daily when the sun shines. Wind and hydro face similar variability.
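To put those capacity factors in energy terms, here is a rough illustration; the 100 MW plant size is assumed purely for illustration, while the capacity factors are the ones quoted above:

    # Illustrative only: what the quoted capacity factors mean for annual energy
    # from a nominal 100 MW plant (plant size assumed for illustration).
    HOURS_PER_YEAR = 8760
    plant_mw = 100

    for label, capacity_factor in [("thermal baseload", 0.80), ("solar PV", 0.19)]:
        annual_gwh = plant_mw * capacity_factor * HOURS_PER_YEAR / 1000
        print(f"{label}: ~{annual_gwh:.0f} GWh/year at {capacity_factor:.0%} capacity factor")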

Malaysia’s response shows how nations can navigate this shift successfully. With renewable penetration already surpassing 20%, targeting 40% soon, and aiming for 100% by 2050, the country is building the infrastructure to manage intermittency. To smooth volatility and prevent frequency disruptions during the transition period, operators must maintain spinning reserves from conventional sources.

The game-changer is battery storage. With CAPEX dropping from US$400 per kilowatt-hour to around US$100 today, these systems could finally unlock renewable energy’s full potential. Technologies like lithium-ion, lithium iron phosphate, and sodium-sulfur batteries enable efficient charge-discharge cycles that transform intermittent generation into reliable power. Malaysia’s first 1,600MWh battery project, now attracting bids from Leader Energy and others, signals the country’s commitment to storage infrastructure.

LSE, Kedah, Malaysia

Solar’s evolution from experiment to infrastructure

Solar energy sits at the heart of Malaysia’s renewable transformation. Since the Renewable Energy Act introduced feed-in tariffs in 2011, the sector has matured from experimental installations to utility-scale deployments. The 2016 launch of the large-scale solar program established clear parameters: projects above 30MW must connect directly to the national grid, setting the stage for serious infrastructure development.

Leader Energy was among the first to embrace this shift. The company’s two pioneering projects under the program, Leader Solar Energy (LSE) and LSE II, built under Large Scale Solar (LSS) 1 and LSS 2 at 38 MWp and 29.4 MWp respectively, were both completed on schedule and within budget. They established the operational benchmarks that define successful large-scale solar deployment in Malaysia today.

The pace of technological change in photovoltaics continues to accelerate. Panel capacity has jumped from 100 watts in the early 2000s to 750 watts currently, with similar advances across inverters, transformers, and cables arriving every six months. This constant innovation drives down levelized cost of energy (LCOE), fundamentally reshaping the economic proposition of solar.

Of course, solar generates power only during four net sun hours daily – a constraint that battery storage directly addresses. Leader Energy is Malaysia’s first IPP to integrate grid-connected battery storage at its Bukit Selambau LSE II facility, extending output to six hours of renewable energy. As these systems evolve from grid-following to grid-forming capabilities, they will transform solar from an intermittent source into a reliable baseload contributor.

Meeting data center demands through CRESS

With solar-plus-battery systems now capable of delivering firm energy, Leader Energy is leveraging its pioneering renewable infrastructure to help data centers achieve true sustainability through CRESS. Modern data centers operate under a simple reality: hyperscaler tenants require verified renewable energy as a baseline requirement for any serious engagement.

Malaysia’s approach to meeting this demand has evolved through several iterations. Early adopters relied on renewable energy certificates through TNB’s Green Electricity Tariff program. Virtual power purchase agreements through the Corporate Green Power Programme (CGPP) scheme followed, enabling synthetic transactions through offsets. These mechanisms served their purpose but had clear limitations.

CRESS represents a more direct solution. The scheme allows data centers to procure power from offsite solar-plus-battery installations, wheeled through the national grid with standard TNB transmission charges. Here, Leader Energy’s experience becomes crucial. Having deployed Malaysia’s first grid-connected battery storage at Bukit Selambau, the company understands how to transform intermittent solar into firm energy. As an example, a 20MW solar-plus-battery plant can generate 120MWh daily, fully offsetting a 5MW data center’s continuous consumption through CRESS’s monthly calculations.
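The arithmetic behind that example is straightforward. A minimal sketch; the six effective sun hours are taken from the battery-extended output mentioned earlier, and everything else follows from the figures above:

    # Sizing logic behind the 20MW-solar / 5MW-data-center example above.
    dc_load_mw = 5
    dc_daily_mwh = dc_load_mw * 24                 # 120 MWh of round-the-clock demand

    solar_mw = 20
    effective_sun_hours = 6                        # solar output time-shifted by the battery
    daily_generation_mwh = solar_mw * effective_sun_hours   # 120 MWh

    print(f"Data center demand : {dc_daily_mwh} MWh/day")
    print(f"Solar+battery yield: {daily_generation_mwh} MWh/day")
    print("Fully offset" if daily_generation_mwh >= dc_daily_mwh else "Shortfall")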

This approach combines direct solar generation with battery discharge to create firm energy. Each megawatt hour comes bundled with internationally recognized RECs, providing the verified green power hyperscalers demand. Through CRESS, data centers can achieve genuine renewable procurement.

Powering tomorrow’s energy infrastructure

Data center operators searching for CRESS partners will find that Leader Energy already has the requisite foundation in place. The company anticipated the scheme years before launch and moved to secure prime land with verified grid connections. This foresight, combined with successful projects, means it is ready to serve data centers today.

With three decades of energy expertise, AA+ credit ratings and proven project delivery – from Malaysia’s pioneering large-scale solar to the country’s first grid-battery integration – Leader Energy demonstrates the operational capabilities that make true sustainability possible.

The company offers flexible PPA terms with end-to-end project support from feasibility studies through operations and maintenance. Looking ahead, Leader Energy is developing AI-driven energy optimization and district cooling systems, expanding renewable integration beyond traditional solar-plus-storage.

With CRESS-compliant infrastructure in place and grid connections secured, Leader Energy is already helping data centers move beyond certificates to achieve true sustainability today.

Q1: The 600kW Rack. Q2: Cooling Trade-offs. Q3: Net-Zero Everything. Q4: Blended Infrastructure. Advertising now open: Why Advertise?

East Asia

China’s data center paradox

Underwater data centers in China

Building earthquake resistant data centers

The state of play in China’s data center market

Despite the unsettling situation in China, the country looks set to see more stabilization in its data center landscape, which has so far suffered from a mismatch between demand and supply.

When Bain Capital-backed Chindata announced in August that it planned to invest RMB 24 billion (US$3.3 billion) to build three hyperscale AI data centers with 1.2GW total IT capacity in Ningxia province, north-central China, it raised quite a few eyebrows. This followed recent news that the country was planning to build massive data center campuses or “megaplexes” in Xinjiang province in western China.

A group of Bloomberg journalists, skeptical of the claims, visited the region and saw plenty of construction under way, but questioned whether the operators would be able to obtain the 115,000 advanced Nvidia chips they were seeking. Without these chips, the data centers would probably end up idle, like many others before them. Chindata’s entry into the region with a massive investment has changed that perception somewhat.

Oversupply meets underdemand

China’s data center market is known to be saturated with empty data centers due to weak demand, regional overbuilding and speculative investment. The building boom began in 2022 when the government launched an ambitious infrastructure project called “Eastern Data, Western Computing”. Data centers would be built in western regions such as Xinjiang, Qinghai and Inner Mongolia, where energy and labor costs are cheaper, in order to serve the huge computing demands of the eastern megacities.

At least 7,000 data centers had been registered as of August, according to government data. State investment rose roughly tenfold last year to 24.7 billion yuan (US$3.4 billion), from more than 2.4 billion yuan in 2023. Up to August this year, about 12.4 billion yuan, half of 2024’s total, had been invested.

New projects continue to be built in western regions especially in Xinjiang, driven by low electricity costs, government incentives, national network node designations and expectations that government and state-owned firms would fill up the space, observes a China-based analyst. “However, market fragmentation, lack of professional operators, and immature capital exit mechanisms— despite emerging REITs and insurance investments—hinder sustainable growth,” adds the analyst who declined to be named.

Furthermore, the idea of building data centers in remote western provinces lacks economic justification. “The lower operating costs had to be viewed against degradation in performance and accessibility,” Charlie Chai, an analyst with 86Research, told Reuters.

But there is more to the situation than meets the eye, according to Professor PS Lee, Head of Mechanical Engineering at the National University of Singapore (NUS), who said, “The oversupply is concentrated in legacy, low-density, air-cooled capacity (often in western and central regions) that’s cheap to power but poor for latency-sensitive or AI-training workloads. At the same time, there is a shortage of AI data centers capable of large-scale training and fast inference.”

“In terms of the specs, the underutilized halls typically house less than 10 kW per rack, are air-cooled and often come with modest backbone connectivity. New AI campuses target 80–150 kW per rack today, moving to 200 kW+ with direct-to-chip or rear-door liquid cooling, larger MV blocks, heat reuse, and high-performance fabrics,” he adds.

Even so, the rush to build across the country has created an unprecedented oversupply, evidenced by over 100 project cancellations in the past 18 months, driven mainly by growing fears among the local governments that financed the buildings that they might never see a profit.

This explains why the Chinese government has recently decided to set up a national cloud and compute marketplace to resell underutilized capacity, as well as implement selective approvals of AI data centers. Data centers approved since March 20 have to comply with additional conditions, such as providing a power purchase agreement and meeting a minimum utilisation ratio. Local governments are also banned from participating in small-sized data center projects. The government hopes this will help resolve the mismatch.

Private sector bets big despite challenges

Despite the unsettling and very fast evolving situation in China, the private sector still has full confidence in the future of China’s AI and the infrastructure backbone supporting it.

For example, Alibaba has committed to spend 380 billion yuan (US$53 billion) over three years to develop AI infrastructure while an additional 50 billion yuan in subsidies will be used to stimulate domestic demand.

“These investments are evidence of our optimism in the strong potential in the Chinese market which is the largest in the world. China’s LLM is developing very rapidly and we foresee the demand for AI leading to huge innovations in AI applications,” Toby Xu, Alibaba’s CFO, told Xinhua, China’s official news agency.

NUS’ Lee sees the investment as a bold, market-making move that can pay off depending on its execution. “It aligns Alibaba’s Qwen stack with commerce and logistics, and should lock in platform advantages. Given the national misallocation hangover, capital expenditure must be targeted or the investment risks adding ‘good megawatts in the wrong place’.”

On the AI front, China initially lagged behind the US, especially after the 2022 emergence of ChatGPT, which reset global AI paradigms. Chinese firms had to rapidly align with large language model (LLM) frameworks but companies like DeepSeek and Alibaba’s Qwen have since narrowed the gap with models that surpass American models in various metrics on global rankings. DeepSeek is now at its third iteration and very much improved, while Alibaba’s Qwen is not far behind, both being placed at the top on reasoning, coding, and math in crowdsourced rankings. The staggering cost-efficiency of the models is due to “clever training recipes”, in Lee’s opinion.

The hardware hurdle

But despite the strong software and talent advantages, China faces a critical hardware bottleneck: no domestic entity has assembled a GPU cluster exceeding 100,000 units, compared to US projects like Stargate, which boasts 400,000. This severely limits large-scale model training.

This also explains why China reportedly needs more than 115,000 Nvidia chips for its massive AI data center campuses that are currently being constructed in the western regions. But with the Chinese government recently discouraging its companies from using Nvidia chips, it’s very likely that these data centers would use more local AI chips until the situation stabilizes.

A China-based analyst says that while domestic players like Huawei are making significant progress, challenges remain in production capacity and full-stack ecosystem maturity. The shift towards domestic chips is framed not as a forced decoupling but as a natural commercial and strategic preference, provided local alternatives are competitive in performance, cost, and scalability.

NUS professor Lee reckons that domestic chips such as Huawei’s Ascend are good enough for many training and inference tasks and hence would likely anchor sovereign workloads, while heterogeneous fleets and a CUDA-lite pathway would dominate in the commercial sector. Gray-market supply would decline due to tighter enforcement but won’t completely disappear in the near future. According to Lee, there is apparently a repair ecosystem in Shenzhen that refurbishes up to hundreds of smuggled AI GPUs per month at CN¥10,000–20,000 (US$1,400–2,800) per unit.

Contrary to what the market thinks about the impact of US chip export restrictions on China, the NUS academic feels that they are actually a positive thing for China in the long term, as they encourage self-reliance, an independent AI ecosystem and supply-chain sovereignty. In the near term, some pain can be expected and progress will slow as developers are weaned off CUDA and Nvidia. But the pragmatic ones will adopt a dual-track approach, using Ascend for sensitive work and Nvidia where allowed.

What lies ahead

The NUS professor believes that AI utilization will gradually increase amid higher occupancy of AI-grade data centers, while generic builds will slow down due to a more selective approval process. There will be selective west–east interconnect investments, as well as clearer design convergence, meaning liquid-first cooling, heat reuse by design, bigger MV blocks, and fabric-aware campus planning. The national marketplace will begin to clear some stranded capacity, although latency and fabric limits remain.

On the other hand, the China-based analyst feels that the sector will see continuing intense competition with limited recovery prospects in the next six to 12 months due to ongoing project deliveries and sluggish demand. But in the long term, with policy corrections underway, including construction bans in non-core regions, the industry will be on a path towards stabilization, albeit slowly.

Beneath the waves: China’s bold bet on underwater data centers

Underwater data centers come with many advantages but can they replace land-based data centers?

In June, China launched its first offshore wind-powered underwater data center (UDC) off the coast of Shanghai. With Hainan having already witnessed the successful commercial deployment of a UDC, what’s new about this project?

The difference this time is that this project, known as the Shanghai Lingang UDC project, is an upgraded 2.0 version powered by an offshore wind farm. According to the developer, Shanghai Hicloud Technology Co., Ltd (Hailanyun), this makes the facility “greener and more commercially competitive.” Hailanyun is a leading Chinese UDC specialist that also built the Hainan UDC.

A UDC is a data center that keeps all the servers and other related facilities in a sealed pressure-resistant chamber, which is then lowered underwater onto the seabed or on a platform close to shore. Power supply and internet connections are relayed through submarine composite cables.

Su Yang, general manager of Hicloud, told Xinhua, China’s official news agency, that the Lingang project design draws on their experience with the Hainan project which, to date, had “zero server failure and no on-site maintenance [was] needed”.

The Hainan UDC has been hailed as a success in China – it claims to have current computing capacity equivalent to “30,000 high-end gaming PCs working simultaneously, completing in one second what would take a standard computer a year to accomplish”. An additional module allows it to handle 7,000 DeepSeek queries per second, apparently.

The history of underwater data centers has leant towards a rather positive outlook despite some criticisms. Microsoft, credited with pioneering Project Natick, the first underwater data center in the world, has effectively confirmed that it is abandoning the project. It has, however, somewhat cryptically said the concept “worked well” and is “logistically, environmentally, and economically practical.”

There are several other similar installations planned by American startups, namely Subsea Cloud and NetworkOcean, but nothing concrete has come of those yet. Though some observers have expressed scepticism about the project, Hicloud is unfazed and is planning an initial investment of 1.6 billion yuan (about US$222.7 million) in the two-phase project.

Proof-of-concept

The Lingang UDC’s first phase, a 2.3 MW demonstration facility to be operational in September, is more of a proof-of-concept project serving as a real-time laboratory for monitoring the various impacts of using off-shore wind energy to power a UDC. If successful, it will scale to the second phase with capacity of 24 MW, and a power usage effectiveness (PUE) below 1.15 with over 97 per cent of its power generated from offshore wind farms.
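To put that PUE target in perspective, here is a simple illustration; the 20 MW IT load and the 1.5 comparison value are assumed for illustration, and PUE is total facility power divided by IT power:

    # Illustrative PUE comparison (IT load and the land-based comparison value
    # are assumed; PUE = total facility power / IT equipment power).
    it_load_mw = 20

    for label, pue in [("Lingang UDC target", 1.15), ("assumed land-based site", 1.5)]:
        total_mw = it_load_mw * pue
        overhead_mw = total_mw - it_load_mw
        print(f"{label}: PUE {pue} -> {total_mw:.1f} MW total, {overhead_mw:.1f} MW of overhead")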

Underwater DC


According to Hicloud, the system, consisting of 198 server racks, can deliver enough computing power to train a large language model in just one day.

Anchored 10 km off Shanghai, in the Lin-gang Special Area of the China (Shanghai) Pilot Free Trade Zone, the facility leverages the cooling effect of seawater to cool its servers. With cooling practically free, this shaves 30-40 per cent off its electricity bill compared to land-based counterparts; cooling typically accounts for 30-40 per cent of total data center power consumption.

Additionally, the facility draws almost 97 per cent of its power from offshore wind, a major saving in grid electricity. It operates a closed-loop water circulation system in which seawater is channelled through radiator-equipped racks, so it requires neither freshwater resources nor traditional chillers. This further cuts power consumption as well as carbon emissions.

Another advantage is that the sealed vessel can easily be filled with nitrogen gas after purging the oxygen. In a zero-oxygen atmosphere, equipment lasts longer, so server failure is minimized or practically eliminated. Microsoft demonstrated this with Project Natick, although that was only a two-year, single-module experiment.

As noted by an observer, most IT equipment today has a lifespan of only three to five years due to technical obsolescence, so it matters little whether the hardware could physically last 10 or 20 years. Moreover, the higher operations, maintenance, and deployment costs at sea would offset some of the savings.

If a server does fail, a floating barge with an appropriate crane system is needed to lift the capsule out for repairs. In some instances, it might even require marine salvage teams and specialized vessels.

Adverse impacts

More critically, critics have pointed out that heat emissions from UDCs could cause lasting damage to marine ecosystems, bleaching coral reefs and killing off some species of marine life, among other effects. Moreover, the long-term impact is still unknown. Some environmentalists are adamant that if more data centers are placed underwater, the cumulative impact, no matter how small, would contribute to global warming.

“Even if the vast ocean can dissipate this heat efficiently, and even if the temperature change is minimal and localised, it would still have an impact because the heat has to go somewhere,” notes a green activist.

“It could even bring invasive species and affect ecological balance,” says James Rix, JLL’s Head of Data Centre and Industrial, Malaysia and Indonesia. Rix also ponders whether it’s the right thing to do “even if it works”, due to the adverse impact on the environment.

Another issue is the potential outages should a natural disaster occur. “When it comes to disaster recovery, there is no emergency power generation for underwater facilities, such as an underwater generator,” he reflects.

Hicloud has countered with trial data showing that the heat emitted has never exceeded a one-degree Celsius increase, the threshold above which it is believed there will be some impact on the surrounding marine life. Other issues raised include noise pollution, electromagnetic interference, biofouling and the effects of corrosion.

Swim or sink?

If successful, the implications are massive as it could become a model for the next generation of data centers with sustainability and high-performance computing deeply embedded. It has the potential to solve the most pressing issues currently facing land-based counterparts: scarcity of land, power and water, plus the problem of massive heat generated by high-density servers.

Mainstream adoption could even give a boost to current coastal data center hubs constrained by land shortage, such as Japan, Hong Kong and Singapore. This could place China at the forefront of green computing infrastructure, although countries like South Korea and Japan have also announced plans to explore UDCs.

The fact is, while early results look promising, long-term performance across different locations still needs more evidence. And there is the serious environmental impact to consider. A more likely scenario, even if the Lingang UDC proves successful, would be UDCs operating alongside land-based data centers. Challenges remain: marine regulations, environmental laws, long-term maintenance costs, security, and scalability. Independent oversight will be essential whether or not UDCs become mainstream.

Divers around an underwater capsule
Deploying DC underwater

Unearthing the need for quake-resistant designs

Johor’s recent earthquake might unearth a real need for low-risk seismic regions to incorporate quake-resistance into their data center design.

By Jan Yong

A recent 4.1-magnitude quake with its epicentre in Segamat, a small town in Johor, could portend the need to prepare for the next one. Who knows where the next epicentre will be, when it will strike, or how intense it will be? Experts have already warned that an earthquake of magnitude 5.0 or higher could happen at any time in West Malaysia, a level that would cause serious damage to property and even casualties. Sedenak in Kulai, which lies 130 kilometers south of the Segamat epicentre, is the fastest-growing region for data center buildouts in Southeast Asia. An epicentre close enough would have catastrophic consequences.

As PS Lee, Professor and Head of Mechanical Engineering, National University of Singapore, said: “In areas like Singapore and Malaysia that have little seismic activities, data centers should still incorporate safety measures. This is because tremors that originate from earthquakes in Sumatra can often be felt in both countries.”

Following the Segamat quake, it is becoming imperative for mission-critical buildings like data centers, which house millions or even billions of dollars worth of compute equipment, to consider protecting their buildings and assets from earthquake damage, regardless of whether their jurisdictions mandate it.

The bare minimum

The bare minimum would involve the following, according to a booklet published by the Institute for Catastrophic Loss Reduction (ICLR) in May 2024. Most of the recommendations are fairly basic, such as ensuring most equipment is seismically rated and anchored, as well as braced to prevent sideways swaying during an earthquake.

Relevant items would include computer racks; all mechanical, electrical, and plumbing (MEP) equipment; raised floors; suspended pipes and HVAC equipment. In particular, battery racks should be strongly anchored, braced against sidesway in both directions, with restraint around all sides and foam spacers between batteries.

During and after an earthquake, there will be extended electricity failure, so emergency generators and an uninterruptible power supply (UPS) should be seismically installed with two weeks of refuelling planned. As water supply will also be interrupted, consider closed-loop cooling rather than evaporative cooling.

For even better preparedness, Lee advocates several additional measures such as specifying seismically qualified kit for critical plant; preparing post-event inspection and restart checklists and Earthquake Early Warning (EEW)-informed SOPs where accessible; and conducting periodic drills.

Different requirements for high-risk regions

It’s a different story however in earthquake-prone countries like the Philippines and Indonesia. There, earthquake-resistant design should be mandatory, especially near mapped faults, advises Lee. A practical stack, according to him, would include siting the data center at a safe place, away from rupture, liquefaction, and tsunami exposure; elevating critical floors; and diversifying power and fiber corridors.

For the structure, base isolation for critical halls should be employed. Base isolation is a technique that separates a structure from its foundation using flexible bearings, such as rubber and steel pads, to absorb earthquake shock and reduce swaying. In addition, use buckling-restrained braces (BRBs) or self-centering systems to control drift. Rack isolation should be applied for mission-critical rows, while for non-structural items, ensure full anchorage and bracing, large-stroke loops, and seismic qualification for plant, as well as protecting day-tanks and bulk storage against slosh.

To ensure continuation of network operations, a data center should employ multi-region architecture, EEW-triggered orchestration, and rehearsed failovers. These are the basics that could determine whether a data center emerges from an earthquake with little damage or is rendered completely unusable.

An earthquake fissure
Battered by quake

Can AI help?

With artificial intelligence now being applied widely across many industrial spaces, can AI or digital twins help predict when and where the next earthquake will occur, and its intensity? Yes, according to the NUS professor, but only as decision support.

Real-time sensor networks feed data into digital models of the building - digital twins - that reflect actual structural behaviour. These models help assess equipment-level movement, simulate scenarios before construction, and help make decisions based on how quickly operations can resume after an event.

In addition, AI could help choose and tune the design while still meeting code requirements and keeping peer review in the loop. Underlying that, AI could be instructed to prioritise capex around downtime avoidance and SLA risk reduction. In short, AI and digital twins could help significantly during the design stage by simulating all the options in order to help make the best decision.
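The article does not describe a specific tool, but a minimal sketch of that kind of capex prioritisation might rank candidate seismic upgrades by avoided downtime loss per dollar spent. Every figure below, including the upgrade options, probabilities and costs, is hypothetical:

    # Hypothetical capex prioritisation by avoided downtime: rank seismic upgrade
    # options by expected annual loss avoided per dollar of capex, where expected
    # loss = annual probability of damaging shaking x downtime cost if it occurs.
    annual_quake_probability = 0.02      # assumed chance per year of damaging shaking
    downtime_cost_per_hour = 100_000     # assumed revenue and SLA exposure, US$

    options = [
        # (upgrade, capex in US$, downtime hours avoided per event)
        ("Anchorage and bracing of MEP plant", 400_000, 24),
        ("Rack-level isolation for critical rows", 1_500_000, 72),
        ("Base isolation of the main hall", 12_000_000, 240),
    ]

    def benefit_per_dollar(option):
        _, capex, hours_avoided = option
        avoided_annual_loss = annual_quake_probability * hours_avoided * downtime_cost_per_hour
        return avoided_annual_loss / capex

    for name, capex, hours in sorted(options, key=benefit_per_dollar, reverse=True):
        print(f"{name}: {benefit_per_dollar((name, capex, hours)):.3f} avoided-loss $ per capex $ per year")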

Beyond basics

Beyond the basic methods such as base isolation, tie-downs, seismic anchorage, reinforced walls, flexible materials, and floating piles, NUS’ Lee came up with a list of some other methods that could be applied, sometimes concurrently.

In low-damage structural systems, one can apply self-centering rocking frames or walls. Typically post-tensioned, these allow structures to rock during an earthquake and then return to their original position through the use of high-strength tendons. Energy-dissipating fuses are incorporated – these help reduce residual drift, supporting faster re-occupancy post-earthquake. Supporting these are buckling-restrained braces, which help to dissipate energy under both tension and compression, effectively controlling lateral drift.

To reduce building shaking during earthquakes, tuned viscous or mass dampers and viscoelastic couplers are typically used. These systems work quietly within the structure to minimize shaking and are often paired with base isolation or added stiffness to further reduce movement during seismic events. Where equipment operation is critical, isolation strategies such as rack-level isolators – like ball-and-cone or rolling pendulum units – are applied for the most sensitive server racks. Another option is to install isolated raised-floor slabs that protect entire server areas from floor movement. These can be done at a lower cost than isolating the whole building.

At ground level, addressing soil instability is crucial, especially protecting against the risk of liquefaction. Techniques such as deep soil mixing, stone columns, and compaction grouting help stabilize weak soils. Foundation systems like pile-rafts are designed to spread loads while reinforcing retaining walls and utility pathways. When seismic isolation is employed, expansion joints or moats are built with enough clearance to accommodate large movements, while utilities such as pipes, electrical bus ducts, and fiber cables are fitted with flexible connections to avoid breaking at the isolation boundary.

By linking into earthquake early warning (EEW) systems, buildings can respond automatically seconds before the earthquake movement arrives. Actions like shutting off fuel and water lines, starting backup generators, moving elevators to the nearest floor, or shifting workloads in IT systems could make the difference between survival and failure.

Lee adds that the winning stack for high-hazard and near-fault metro locations would be to incorporate base isolation with a low-damage lateral system plus non-structural measures. The data center also has the option of applying rack isolation for the most critical bays.

Examples of how successful methods saved the day

The methods employed as described earlier have been proven to work over the years, NUS’ Lee says. Some examples include the following:

• Base-isolated facilities in Japan: Friction-pendulum or laminated-rubber isolation, often with supplemental oil dampers, have kept white spaces intact through major earthquakes (including the 2011 M9.1), with building displacements staying within moat allowances and no significant IT damage reported.

• Retrofit cases in California: Base-isolated, friction-pendulum retrofits of mission-critical buildings have seen benign in-building accelerations during real events and preserved operability.

• Rack-level isolation deployments (Japan/US): Ball-and-cone platforms protecting long rows of cabinets showed markedly reduced rack accelerations and prevented toppling or failure in documented earthquakes.

• Advanced fab/DC campuses in Taiwan: Post-1999 programmes combined equipment anchorage, foundation and ground improvements, and operational protocols; later M7+ events produced limited equipment damage and rapid restart.

Where AI workloads meet sustainable design

Here’s what a data center built for AI and sustainability looks like.

What if you could design a modern data center from scratch? In an era of explosive infrastructure growth and diverse workloads spanning enterprises, cloud platforms, and AI applications, what would that facility look like?

Founded in 2021, Empyrion Digital has expanded rapidly with the announcement of new developments across South Korea, Japan, Taiwan, and Thailand. The company’s newly launched KR1 Gangnam Data Center in South Korea demonstrates how modern facilities can meet evolving standards for scalability, performance, and sustainability.

Inside South Korea’s newest AI-ready facility

Empyrion Digital’s greenfield KR1 Gangnam Data Center in South Korea officially opened in July this year. The 29.4MW facility was purpose-built to support hyperscalers and enterprises requiring low-latency, high-density infrastructure for AI and cloud computing.

KR1 marks the first data center built in Gangnam – South Korea’s economic hub – in over a decade. Several factors created this gap: an informal moratorium targeting data centers and competition from other industries for limited power from a grid operating near capacity. Gangnam’s prohibitive land costs also played a part in discouraging data centers there.

The nine-story facility offers 30,714 square meters of colocation space. Floors are designed with 15 kN/m² loading capacity to support liquid cooling equipment necessary for the latest GPU systems for AI workloads. Crucially, it was designed to withstand magnitude 7.0 earthquakes, critical given South Korea’s dozens of annual tremors and occasional moderate quakes.

KR1’s AI readiness shows in its eight-meter ceiling heights, providing ample space for liquid cooling pipework. According to Empyrion Digital, selected data halls are already fitted with direct-to-chip (DTC) cooling piping, while others are ready for conversion as needed. Standard IT power runs 10kW per rack, with higher densities available on request.

Modular infrastructure allows integration with rear-door heat exchangers and direct-to-chip liquid cooling systems. Taken together, KR1 is a modern data center that is strategically located for hyperscalers, content delivery networks, and enterprise businesses.

Sustainable by design

KR1’s environmental approach extends beyond standard efficiency metrics. Technical specifications aside, the facility incorporates various green features throughout. This includes the use of eco-friendly construction materials, rainwater management systems, and rooftop solar panels.

Within data halls, fan wall systems, which are recognized for being energy efficient, handle air cooling requirements. Most notably, the facility uses StatePoint Liquid Cooling (SPLC) technology for its cooling infrastructure. The SPLC was specifically developed for sustainable, high-performance data center cooling.

SPLC delivers measurable improvements in thermal performance, water efficiency, and energy consumption. The system combines a liquid-to-air membrane exchanger with a closed-loop design. Water evaporates through a membrane separation layer to produce cooling without excessive mechanical intervention. This approach minimizes both energy requirements and water consumption compared to traditional cooling methods.

The Gangnam facility demonstrates what a modern data center designed from scratch looks like today. Flexibility, sustainability, and AI readiness aren’t add-on features but fundamental design principles. Empyrion Digital’s KR1 shows that meeting infrastructure demands while maintaining environmental responsibility requires integrating both priorities from the ground up, not treating them as competing goals.

Photo: Empyrion Digital

Is Your Business Ready to Take the Spotlight? Get in touch with us today at media@w.media

Scan to watch our exclusive interviews with Industry leaders.

With a readership of top decision-makers across APAC, we’ll ensure your message reaches the right audience. Our Offerings Include:

• Magazine – Online & Offline

• Interviews - In person & Virtual

• Editorial content

• Newsletter Features

• Digital Ad Campaigns

• Annual APAC Cloud and Datacenter Awards

Our Community at a Glance:

• 80,000+ Subscribers across APAC

• 70,000+ Monthly Website Visits

• 33,000+ Social Media Followers

• 16,000+ Industry Decision-Makers Engaged

Here are three ways you can get involved:

1. Nomination Submission (open until July 31st)

You can nominate individuals, teams and initiatives pushing boundaries under three main themes – Projects, Planet, People – and 16 categories. Multiple entries per organization are welcome.

Scan to submit (free entry) and give your work the recognition it deserves!

2. Sponsorship Opportunities

We are welcoming sponsors with limited slots available!

Don’t miss this opportunity to be in the spotlight throughout our award promotion and at our highly anticipated Gala Ceremony, where 200+ C-level executives from top companies will join.

3. Early Bird: To guarantee a seat at our black-tie ceremony event, grab a ticket at a discounted price now!

Get more information via: https://w.media/awards/#tickets Contact us: awards@w.media for any further inquiries

Here’s what transpired at KRCDC 2025!

If you missed w.media’s Korea Cloud & Datacenter Convention (KRCDC) 2025, here’s the lowdown on all that transpired.

Over 1,000 digital infrastructure and technology professionals, including C-suite executives, business leaders, key buyers, architects, engineers and consultants, attended KRCDC this year.

The atmosphere was abuzz with excitement as w.media brought together some of the biggest names in South Korea’s digital infrastructure industry to discuss the most pressing issues and concerns facing the sector. The day-long event was held at COEX, Seoul on September 19, 2025. Check out our stellar line-up of speakers!

Topics of our power-packed panel discussions ranged from data center investment trends across Northeast Asia and multi-tenant data center design to modular data centers, energy efficiency and sustainability challenges.

“It was a privilege to join industry leaders at KRCDC 2025 for a timely discussion on Korea’s digital future. Our panel highlighted how the AI-era shift from air-cooling to advanced liquid and hybrid systems is redefining efficiency and sustainability. I argued that real progress means building flexible, future-ready facilities and adopting sustainability measures that go beyond PUE alone,” said Ben Bourdeau, Global Chief Technology Officer, AGP Data Centres. “Korea’s land constraints and emerging renewables market will push toward creative reuse of legacy assets. That challenge could drive the kind of practical innovation needed as next-generation energy solutions mature.”

Another attendee Mozan Totani, SVP, ADA Infrastructure, said, “Through the conference and the panel discussions, I was pleased to see that people in the data center industry are extremely passionate about improving energy efficiency and minimizing environmental impact, even as AI is set to bring significant productivity gains that will improve our lives. We also discussed how we can work together to support sustainability. It was truly a great conference.”

KRCDC also saw a series of technology presentations on subjects ranging from AI data center infrastructure, to design and build innovations, to liquid cooling for AI, to flow solutions for data centers, and much more.

Here are a few pictures from KRCDC 2025.

‘China has caught up in AI’

The US AI Action Plan unveiled by President Donald Trump in July basically opens up exports of US-made AI chips to all US allies in a bid for global AI dominance.

The US strategy is simple: disperse US-made advanced AI chips and software to US allies as fast as possible so that China will not be able to catch up.

But guess what? China has apparently caught up, to some extent.

In her weekly newsletter in July, Saanya Ojha, Partner at Bain Capital Ventures, observed that China is leading in open source AI models. “This month, China shipped the two best open-source LLMs released to date,” she said.

She pointed to Moonshot’s Kimi 2, with 400B parameters and a 2M-token context, as well as the smaller and faster Qwen3 by Alibaba. Ojha added, “They’re better than anything the West has open-sourced. And China’s AI strategy is diverging fast.”

According to her, China isn’t fine-tuning Western models or limiting itself to chatbots. It is building its own models from scratch, and Chinese companies are embedding AI into the superapps that hundreds of millions of people already use every day. China is building apps that work and deploying them at scale despite limited access to advanced chips and heavy regulation, with intense state-enterprise coordination.

“While the West chases AGI, China is quietly operationalizing AI across logistics, finance, education, and government. If you’re only tracking the western labs, you’re missing half the map. And the half you’re missing is moving fast,” she wrote.

Nvidia CEO Jensen Huang earlier said that AI scientists in China are world-class and that, overall, China is doing “fantastic” in the AI market, with models from China-based companies such as DeepSeek and Manus emerging as powerful challengers to systems designed in the US.

Trump must have been apprised of this, hence the dramatic change in tactics. If restrictions don’t work, then how about deluging the world with US-made AI chips and software? If most of the world is tethered to US-made AI ecosystems, the US wins.

The AI race isn’t over yet, evidently.

According to Ojha, three players are sprinting to build the U.S. backbone of AI, namely:

• OpenAI is orchestrating a hyperscaler stack through Oracle and CoreWeave, trying to control the AI supply chain without owning it. It’s scaling through partnerships, not property.

• xAI is building everything in-house – chips, data centers, power – on a mountain of debt, betting that speed and vertical control outweigh risk.

• Meta is quietly out-building both, committing over US$100 billion in capex and constructing the 5 GW Hyperion campus. Its models run inside search, feed, Ray-Bans, and WhatsApp – embedding intelligence at the edge.

And that is not to mention Meta’s multimillion-dollar Superintelligence Labs team, which has the ability to build a model like ChatGPT from scratch; the team’s chief scientist, Shengjia Zhao, was one of the co-creators of ChatGPT.

Ultimately, perhaps it doesn’t matter who wins the AI race as long as the whole world receives the benefits of AI. As it is, roughly a third of the world’s population is still not connected to the internet.

Building data centers when silicon moves faster than steel

Silicon refreshes every 12 months. Data centers are built to last 20 years. This fundamental mismatch threatens to leave operators with billions in stranded assets as AI acceleration rewrites the rules of digital infrastructure.

Tony Grayson knows how to navigate mission-critical environments where failure isn’t an option. The former nuclear submarine commander who went on to lead hyperscale data center projects at AWS, Meta and Oracle, now serves as President and General Manager at Northstar Federal & Northstar Enterprise & Defense.

His verdict on the industry’s biggest challenge is blunt: traditional data center design is already obsolete. The solution, Grayson argues, requires a fundamental rethink of how data centers are conceived and built – driven by the rise of distributed reinforcement learning (RL).

Modularity as the new default

“From my experience at Northstar and scaling EdgePoint Systems, agility must be embedded at every layer to keep pace with AI’s rapid change,” Grayson said. For him, modular data centers (MDCs) are not a niche solution but the new default. Compared to hyperscale builds, which can take 18-24 months and cost US$12 to US$15 million per megawatt, MDCs typically deploy in just 3-9 months at around US$7 to US$9 million per megawatt in the US and Australia. “This avoids overprovisioning and stranded capacity from hardware refresh cycles,” he explained.

Prefabrication is a key enabler: MDCs can bypass 6-18 month permitting delays, cut on-site construction timelines by 50-70%, and in many cases avoid full environmental reviews. This opens opportunities for brownfield retrofits or edge deployments near substations, sidestepping grid connection queues such as PJM’s multi-year backlogs in the US. Grayson likens the design philosophy to Lego: “Standardize components for mass customisation while ensuring maintainability.”

The flexibility extends beyond construction. NorthStar’s MDCs support racks from 30 to 132 kilowatts and employ advanced liquid cooling, enabling new silicon generations to be swapped in and out with minimal disruption. According to Grayson, this modular refresh cycle is essential in a world where “silicon moves faster than steel and concrete.”
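Those cost and schedule ranges translate into a simple comparison for a given deployment. A rough sketch using the midpoints of the figures Grayson quotes; the 10 MW deployment size is assumed for illustration:

    # Simple comparison using the midpoints of the ranges quoted above
    # (the 10 MW deployment size is assumed for illustration).
    deployment_mw = 10

    builds = {
        # name: (US$ million per MW, months to deploy), midpoints of quoted ranges
        "Traditional hyperscale build": (13.5, 21),   # US$12-15M/MW, 18-24 months
        "Modular data center (MDC)":    (8.0, 6),     # US$7-9M/MW, 3-9 months
    }

    for name, (cost_per_mw, months) in builds.items():
        capex = cost_per_mw * deployment_mw
        print(f"{name}: ~US${capex:.0f}M capex, ~{months} months to deploy")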

The silicon paradox

That phrase captures the paradox at the heart of AI infrastructure. With GPUs and accelerators refreshing every 12-24 months, traditional data center lifecycles – once measured in decades – now risk leaving operators with millions of dollars in stranded assets. The challenge is particularly acute as power densities rise: Hopper and Grace Blackwell systems are pushing racks from 800 kilowatts toward 1.5 megawatts, and those racks can weigh twice as much as today’s.

The way forward, Grayson argued, is to decouple infrastructure from any single generation of silicon. “We plan in 18–24 month horizons but model over a five-year lifecycle, factoring in the 20–30% opex savings modularity delivers,” he said. Techniques such as Monte Carlo simulations for silicon price volatility help navigate the uncertainty.
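The article does not detail the model behind those simulations, but a minimal sketch of a Monte Carlo treatment of accelerator price volatility over a five-year horizon might look like the following; every parameter here is assumed for illustration and is not Grayson’s actual model:

    # Minimal Monte Carlo sketch (not the model described in the article; all
    # parameters assumed): simulate accelerator price paths over a five-year
    # lifecycle and look at the spread of total refresh spend.
    import random

    random.seed(42)
    years = 5
    start_price_musd = 3.0      # assumed cost of one rack-scale accelerator refresh, US$M
    annual_drift = -0.10        # assumed average 10% yearly price decline
    annual_volatility = 0.25    # assumed year-to-year volatility

    def one_path():
        price, total = start_price_musd, 0.0
        for _ in range(years):
            price *= 1 + random.gauss(annual_drift, annual_volatility)
            total += max(price, 0.0)   # one refresh purchase per year
        return total

    totals = sorted(one_path() for _ in range(10_000))
    p10, p50, p90 = (totals[int(q * len(totals))] for q in (0.10, 0.50, 0.90))
    print(f"5-year refresh spend, US$M: P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")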

Future architectures add another layer of complexity. While Nvidia currently dominates training, AMD’s MI400X is challenging in inference, and custom silicon such as Groq is optimizing further. Compute Express Link (CXL), which pools memory between CPUs and GPUs, promises more than 30% better performance per watt and 20-30% cost savings compared to traditional GPU setups. “The old data center assumption of 20- to 30-year customer lifecycles is dead,” Grayson said. “AI moves too fast, and you have to design for 3- to 5-year obsolescence risks.”

ROI models that matter

While his focus is technical, Grayson also challenges operators to tie every investment back to revenue fundamentals. “Technology alone doesn’t generate revenue; adaptable infrastructure does.” He argues that metrics such as total cost of ownership per inference, power usage effectiveness (with MDCs delivering under 1.2 versus 1.5+ for legacy sites), and stranded capacity risk should sit at the center of investment decisions.

The numbers are striking: stranded capacity from inflexible builds can reach US$100–500 million, while custom silicon like Groq LPUs can deliver up to 50 times more revenue than H100 equivalents – generating US$15,500 per rack per day compared to around US$310. For a 1MW MDC with Nvidia B200 GPUs specifically, phased payback models suggest potential margins of US$3.4 million over five years, with an internal rate of return above 25%.
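The article does not spell out the payback model, but a minimal sketch of the kind of payback and IRR arithmetic being described might look like this; the capex and annual cash-flow figures below are placeholders, not the article’s numbers, and real models would phase both spend and revenue:

    # Hedged sketch of payback analysis (cash flows are assumed placeholders):
    # given an upfront MDC build cost and steady annual net revenue, find the
    # simple payback period and the internal rate of return over five years.
    def irr(cash_flows, lo=-0.9, hi=10.0, tol=1e-6):
        """Internal rate of return found by bisection on the NPV sign change."""
        def npv(rate):
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
        for _ in range(200):
            mid = (lo + hi) / 2
            if npv(mid) > 0:
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return (lo + hi) / 2

    capex_musd = 8.0                      # assumed 1 MW MDC build cost, US$M
    annual_net_musd = 3.0                 # assumed net cash flow per year, US$M
    flows = [-capex_musd] + [annual_net_musd] * 5

    print(f"Simple payback: {capex_musd / annual_net_musd:.1f} years")
    print(f"5-year IRR    : {irr(flows):.1%}")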

To manage uncertainty, Grayson advocates scenario-based ROI analysis. “Always ground everything in revenue fundamentals: what’s your dollar-pertoken or dollar-per-query yield?” he said.

Distributed compute and the Grok shift

If modularity is the hardware response to AI’s acceleration, distributed compute is the architectural shift underpinning the next generation of workloads. Grayson points to Grok 4 as a turning point: “Earlier LLMs focused mostly on pretraining, with only light reinforcement learning from human feedback. Grok 4 used around 100 times more total compute than Grok 2, splitting it equally between pre-training and RL, and delivering state-of-the-art results.”

RL workloads are inherently more parallel and latency-tolerant, allowing them to run on distributed, even heterogeneous hardware. Methods such as GRPO cut inter-node communication costs and reduce the total cost of ownership by up to 20%.

For operators, this points to a hybrid future: hyperscale campuses may still dominate pre-training, but modular edge sites will increasingly host RL and inference, where sovereignty, latency, and cost savings are critical. Research demo INTELLECT-2, a 32B-parameter RL setup, reduced response times by 15% and failed requests by 24% using distributed RL.

Energy futures

Looking further ahead, Grayson sees nuclear power as a transformative force for mission-critical AI deployments. Drawing on his submarine command background and work advising on small modular reactors (SMRs), he believes microreactors could power modular data centers by 2035. “SMRs and microreactors could power MDCs by 2035, offering baseload energy for sovereign, mission-critical deployments,” he said.

Until then, a hybrid energy mix will prevail. Renewables can already supply more than 40% of MDC energy use, with batteries and microgrids stabilizing supply. Closed-loop liquid cooling can recycle 90–95% of water, mitigating AI’s projected consumption of 4.2–6.6 billion cubic meters globally by 2027. Prefabrication and modularity can cut embodied carbon by 20–30% compared to traditional builds. “Sustainability metrics must go beyond PUE,” Grayson argues, “because embodied carbon and water will be the real constraints.”

Designing for agility

For Grayson, the lesson from both hyperscale and military experience is that infrastructure must never become a constraint on innovation. The rapid cycles of silicon, the shift toward distributed reinforcement learning, the rise of custom accelerators, and the uncertainties of energy all point in the same direction: agility is the core value proposition.

“Each technology wave – GPUs, distributed RL, quantum – demands different infrastructure,” he said. “The only way to stay ahead is to build for refresh, not for permanence.”

Rethinking Enterprise Infrastructure for the AI Era

AI promises to revolutionize enterprise operations, from automating complex decision-making to enabling predictive maintenance that prevents costly downtime.

Unsurprisingly, enterprises are racing to deploy AI across their organizations to deliver enhanced customer experience, improve operational efficiency, and gain a competitive advantage.

Yet infrastructure leaders, facing this unprecedented shift, often rely on approaches from traditional IT deployments. While these approaches seem reasonable, understanding AI’s actual infrastructure requirements early can mean the difference between smooth implementation and expensive delays when reality collides with expectations.

When assumptions meet reality

Most organizations naturally apply lessons from decades of IT deployments, viewing AI as an evolution of existing compute needs. Where exactly do these expectations fall short? The disconnect appears immediately in how enterprises calculate power capacity. Traditionally, data centers built for 10kW racks typically run at four or five kW, suggesting ample headroom exists. Indeed, the annual Uptime Institute survey found that the average server rack density in 2024 is just 8kW.

Yet AI clusters operate differently, running continuously at peak capacity rather than spiking occasionally like traditional servers. This fundamental shift transforms infrastructure planning in unexpected ways by eliminating the buffer zones that IT teams relied on for capacity management. Server distribution offers another example. Traditional infrastructure deployments spread servers across racks when capacity issues arise. But GPU clusters function as integrated units. Splitting them is technically possible but can cause unexpected side effects.
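The headroom problem is easy to see with a bit of arithmetic. A hedged illustration: the room size and the 40 kW AI rack draw are assumed, while the 10 kW provisioning and typical 4-5 kW draw come from the paragraph above:

    # Illustrative headroom arithmetic: a room provisioned for 10 kW racks,
    # traditional IT drawing ~5 kW per rack, versus AI racks running
    # continuously near nameplate (the 40 kW AI figure is assumed).
    racks = 100
    provisioned_kw_per_rack = 10

    traditional_draw = 5          # kW per rack, typical enterprise load
    ai_draw = 40                  # kW per rack, assumed dense GPU configuration

    room_capacity_kw = racks * provisioned_kw_per_rack
    print(f"Room capacity        : {room_capacity_kw} kW")
    print(f"Traditional fleet    : {racks * traditional_draw} kW "
          f"({racks * traditional_draw / room_capacity_kw:.0%} utilised)")

    ai_racks_supported = room_capacity_kw // ai_draw
    print(f"AI racks supportable : {ai_racks_supported} of {racks} positions at {ai_draw} kW each")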

Hardware evolution adds another layer. Where CPU roadmaps used to stretch predictably over multiple years, GPU providers are currently refreshing architectures annually. Organizations planning for current specifications often see next-generation hardware shipping before construction completes, throwing the best-laid plans askew. These advances cascade throughout facilities. Floor loading requirements double. Existing transformers prove insufficient. Looking ahead, power distribution could well shift towards DC (direct current) architecture for efficiency gains that now matter at scale. Supporting AI means rethinking everything from cooling approaches to spatial ratios between power rooms and white space.

The price of hesitation

Faced with rapidly evolving GPU technology and conflicting vendor promises, many enterprises freeze. The thinking is to defer decisions to avoid potentially costly mistakes. But this uncertainty carries a steep price as competitors rush out AI deployments that let them pull ahead.

The retrofit trap compounds these losses. Consider one organization that completed a new data center to traditional enterprise specifications, planning to add AI capabilities later. When demand for AI resulted in an earlier-than-expected upgrade, the estimates for a retrofit shocked company executives. Had they built with AI in mind, these expenses would have been lower and the timeline much shorter.

Even waiting for clarity doesn’t reduce risk, due to how quickly GPU technology continues to evolve. But organizations that start now build institutional knowledge through experience. Those that wait must still climb the same learning curve, just at a higher cost and competitive disadvantage. As the entire industry learns together, enterprises that delay find themselves further behind in understanding these cascading infrastructure impacts.

After decades where data center changes were gradual and predictable, IT professionals tend to underestimate the complexity of AI deployments. And the changes are substantial. For instance, liquid cooling functions quite differently from air cooling – where air systems automatically adjust when loads drop, liquid cooling maintains constant flow regardless of demand. To succeed with AI, enterprises must not just pick up new knowledge but also unlearn established practices.

Closing the AI infrastructure gap

The enterprise AI landscape is entering a decisive phase. Telecom operators are building infrastructure to run AI models that monitor thousands of network nodes, enabling self-healing networks that resolve issues before customers notice. Hospitals, traditionally limited to computer rooms, are now exploring data center builds as their AI needs expand beyond basic record keeping. These deployments have moved beyond pilot projects. Organizations that delay AI infrastructure planning risk watching competitors gain transformative capabilities.

The complexity outlined in previous sections raises a fundamental question: should enterprises build this expertise internally or partner with specialists? The infrastructure challenges are real, but they are also solved problems for providers who have already worked through the learning curve. GPU-as-a-service cloud providers offer one path, managing infrastructure complexity while enterprises develop their AI applications – assuming data privacy is not a consideration.

For enterprises that require on-premises infrastructure due to data sensitivity or latency constraints, the market has evolved significantly. Modern infrastructure providers like Vertiv now work directly with GPU manufacturers such as Nvidia to understand the complete ecosystem, not just individual components. This shift towards integrated design eliminates the coordination gaps that created costly retrofits. By staying close to GPU roadmaps and building in adaptability for future hardware generations, providers help enterprises avoid the specification mismatches that plague traditional builds.

Vertiv’s 360AI reference design exemplifies this integrated approach. This reference architecture delivers plug-and-play capability for AI servers, with predetermined space requirements, thermal management, and power distribution. Enterprises receive complete documentation including maintenance protocols and operations manuals. By designing specifically for AI workloads rather than retrofitting general-purpose infrastructure, organizations avoid the costly mistakes detailed earlier while building in flexibility for future hardware generations.

Learn how Vertiv’s 360AI reference design can accelerate your AI infrastructure deployment, or contact our specialists to discuss your specific enterprise AI requirements.

AI Adoption, Powered by Vertiv

From manufacturing and retail to banking, financial services, and insurance, organizations in Asia and elsewhere are turning to the power of AI. Here are some examples of how businesses are using AI and how Vertiv can play a part.

Manufacturing

AI is reshaping manufacturing floors, driving real-time data collection, trend detection, problem-solving, and performance gains. But as AI and automation accelerate, infrastructure must keep pace. Vertiv supports manufacturers with integrated power, thermal, monitoring, and maintenance solutions that ensure resilience and efficiency.

Retail

Global AI software spending in retail is set to rise 15.8% in 2024 to $7.8 billion, reaching $12.5 billion by 2027, according to Gartner. But aging infrastructure – from power and cooling to networks and servers – cannot keep pace with AI workloads. Vertiv helps retailers modernize with end-to-end solutions that unlock AI’s full potential.

Banking, financial services, and insurance

Organizations that can harness AI faster and more effectively are likely to dominate in the coming years. But the complexity is too great for any one company to manage alone. Vertiv supports this journey with expertise in power distribution, cooling, advanced monitoring, and services.

Gearing up to fete digital infrastructure’s aces

The w.media Southeast Asia & Northeast Asia Awards Ceremony and Gala Night promises to be a spectacular night to remember.

The much-anticipated w.media 2025 Southeast Asia & Northeast Asia Awards Ceremony and Gala Night will see many more winners than the previous four iterations. This year, we have introduced seven new awards to take into account the fast-evolving nature of the industry.

With more hyperscalers entering the scene and more innovations emerging, we felt that we needed to give them due recognition. The leading companies, widely acknowledged by the industry, drive change and inspire the rest, collectively raising the bar in this field.

Innovation is the lifeblood of the digital infrastructure industry, and this year in particular has seen the pace of innovation accelerate like never before under the demands of AI. Our new categories reflect that:

Under Projects, we have added new awards for Innovation in Data Center Engineering for GPUaaS, Innovation in Internet Exchange (IX) Deployment and Innovation in Subsea Network Engineering.

For sustainability efforts under Planet, the new awards are Innovation in Energy Optimization and Sustainability in Operations.


In the People category, we would like to honor Hyperscale Infrastructure Leaders and Strategic Network Infrastructure Leaders.

“This year’s w.media SEA & NEA Awards is more competitive than ever before, with an extraordinary 113 nominations in the People category, 45 in Planet, and 51 in Projects. The sheer breadth and quality of entries across all three pillars showcase the region’s unwavering drive to lead, innovate, and set new benchmarks in the digital infrastructure industry,” said Vincent Liew, Co-founder of w.media.

For the Awards’ fifth anniversary, we are championing the 3Ps for the digital infrastructure industry: Planet, Projects, and People.

The full list of awards is as follows:

• Projects: Data Center Design & Build; Digital Technology Inside the Data Center; Innovation in Data Center Engineering for GPUaaS; Innovation in Internet Exchange (IX) Deployment; and Innovation in Subsea Network Engineering

• Planet: Sustainability in Operations; Sustainability in Design & Build; Innovation in Energy Efficiency; Innovation in Energy Optimization; Innovation in Data Center Cooling

• People: Data Center Design and Engineering Team; Data Center Operations Team; Data Center Market Intelligence Team; Hyperscale Infrastructure Leaders; Strategic Network Infrastructure Leaders

There will be a total of 35 awardees picked from the categories above.

This year’s record number of entries has made judging particularly challenging. Our distinguished panel of judges, reputable leaders from around the globe, faced difficult decisions as all submitted nominations demonstrated outstanding merit.

We can’t say how long it took them to decide; what we can say is that they came out perspiring, probably still wondering whether they had made the right choices in the cases that were too close to call. In the end, we believe they did their best, and we will celebrate the achievements of the winners together at a glittering black-tie event to be held at W Hotel, Sentosa Cove.


Come December 5, we will bring together 150-plus of the region’s most outstanding leaders and innovators in the digital infrastructure industry for a night of celebration, connection, and recognition.

Beyond recognizing individual and team accomplishments, the event fosters community through unmatched networking opportunities. By building these connections and collaborating rather than working in silos, we can push boundaries and scale greater heights together - an essential approach in today’s increasingly challenging environment.

Join us in Mumbai for CDC and South Asia Awards

On November 7, w.media is coming back with the fifth edition of Mumbai Cloud & Datacenter Convention (CDC) and South Asia Awards.

The day-long event will be held at Hotel Sahara Star, Mumbai, and will bring together industry experts and thought leaders, including C-level executives; digital infrastructure professionals such as architects, engineers and consultants (AECs); key buyers and decision makers; and data center owners and operators.

The Convention will culminate in a glittering awards ceremony where w.media will honour the excellence and expertise of data center providers and industry professionals from South Asia across three key areas: People, Planet and Projects.

What makes Mumbai special?

As the largest data center market in India, Mumbai and its adjacent areas, such as Navi Mumbai and the Bhiwandi-Taloja and Thane-Belapur industrial belts, account for half of the country’s data center capacity.

A recent report by Cushman & Wakefield has listed Mumbai as a “powerhouse” data center market alongside Tokyo, Beijing, Johor, Sydney and Shanghai. The report, titled Asia Pacific Data Center H1 2025 Update, finds that Mumbai maintains its position as “India’s largest and most dynamic data center market, demonstrating sustained momentum in both operational capacity and development activity.” It further finds that over the past six months, the city added approximately 52MW to its live data center stock. “With around 180MW of the 337MW currently under construction slated for completion within the next six months, Mumbai’s total operational capacity is projected to reach nearly 800MW by the end of 2025 – provided developments proceed as scheduled.”
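For readers who want to see how those figures fit together, here is a quick sanity check in Python using only the numbers quoted from the report; the current live capacity is implied rather than stated, so treat it as a rough derivation.

# Sanity check of the Cushman & Wakefield projection, using only quoted figures.
projected_end_2025_mw = 800       # projected operational capacity by end of 2025
completing_soon_mw = 180          # part of the pipeline slated within six months
under_construction_mw = 337       # total capacity currently under construction

implied_current_live_mw = projected_end_2025_mw - completing_soon_mw
remaining_pipeline_mw = under_construction_mw - completing_soon_mw

print(f"Implied live capacity today: ~{implied_current_live_mw} MW")    # ~620 MW
print(f"Pipeline completing beyond 2025: ~{remaining_pipeline_mw} MW")  # ~157 MW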

Meanwhile, a recent report by Knight Frank has named Mumbai a “Momentum Market”, showcasing how it is not only the largest data center market in India but also still growing, with new facilities being built not only in the main city but also in its suburbs and adjacent areas. Knight Frank’s Global Data Centers Report, published earlier this year, finds that large-scale colocation deals are driving the expansion of Mumbai’s availability zones (AZs).

This is in line with the findings of other prominent industry watchers. For example, CBRE’s Asia Pacific Data Centers Trends & Opportunities Report 2025 found that “expansion by foreign hyperscale cloud companies and investors is fueling supply growth in Mumbai,” and that, as of end-2024, live IT capacity in Mumbai stood at 667MW, with another 635MW under construction. Similarly, JLL’s report titled India Data Center Market Dynamics found a sharp 51 percent year-on-year rise in demand in H2 2024, amounting to about 122MW across India, which was matched by new supply. Mumbai accounted for nearly half of that supply during this period.

Stalwarts comprise Mumbai CDC Advisory Board

We have a veritable constellation of some of the brightest stars of the industry guiding us as our advisory board members. These include Sridhar Pinnapureddy (Founder & Chairman, CtrlS Datacenters & Cloud4C), Sharad Agarwal (CEO, Sify Infinit Spaces Ltd.), Surajit Chatterjee (MD, CapitaLand Investment, Data Centre Group), Vivek Dahiya (VP, Site Selection & Acquisitions, APAC, Vantage Data Centers), Syed Mohamed Beary (Founder & Chairman, Bearys Group), and Sujit Panda (CEO, BDx Datacenters).

Look who’s coming to Mumbai CDC

We have a stellar line-up of speakers who will engage in several power-packed panel discussions and share their ideas on important subjects such as the impact of Artificial Intelligence (AI), GPU-as-a-Service (GPUaaS), liquid cooling, the evolution of data center technology and infrastructure, increased automation, and much more.

Our speakers include Sreejith G (Sr. VP - Operations, ST Telemedia Global Data Centers, India), Sudhir Wattal (VP, Projects & Delivery, ST Telemedia Global Data Centers, India), Narendra Sen (Founder & CEO, RackBank), and Jagat Ram (Head - Datacenter, Operation & Delivery, L&T), as well as industry experts like Prashant Tiwari (Director - India, Sudlows Consulting), NK Singh (CEO, Datacenter Guru), and NK Jain (Founder, NK Jain Consulting Engineers), among others.

“Every year, we aim to deliver something bigger and better at Mumbai CDC. This time, we expect at least 1,500 delegates,” says Naveen Lawrence, Managing Director - South Asia & Middle East, w.media. “But it isn’t just about quantity; our quality lies in our carefully curated content. Our panel discussions, fireside chats and presentations all enable some of the brightest minds in the industry, as well as representatives of relevant regulatory authorities, to share their wisdom. We are creating an experience that enables knowledge sharing and networking.”

This year, the Mumbai Cloud & Datacenter Convention (CDC) will also see the second edition of CenterStage, a special breakaway session held in the Expo Hall. It is a series of technology presentations, fireside chats and discussions exclusively curated and hosted by w.media’s Editor-in-Chief. At CenterStage, industry experts come together as friends and fellow professionals to examine the myriad challenges facing the industry today.

CenterStage takes place in a more relaxed environment to enable the free flow of innovative solutions and out-of-the-box ideas.

We will also feature a modest technology expo showcasing the latest innovations and advancements in the cloud and data center industry.

Adapt or Perish!

As we approach the end of this issue, let’s go back to H.G. Wells’ take on adapting to survive. Is “Adapt or perish” indeed, as he says, “nature’s inexorable imperative”?

To answer this question, perhaps we first need to revisit the idea of “nature” - how it applies to data centers will determine how exactly we need to adapt. Nature is no longer just about green grass and blue skies in feel-good promotional videos showcasing windmills and solar plants powering data centers. It is the nature of the data center itself that has changed.

From being stark, windowless, singular facilities infamous for being “power guzzlers”, data centers are now increasingly being viewed as key enablers of economic development. They are being envisioned as data center campuses, or “AI factories”, spread across several hectares of land instead of stand-alone bleak facilities. Meanwhile, some data centers are operating out of tents, even as others are being deployed under the sea. Some are being built in shipping containers and powered by small modular reactors, while others are being built on barges powered by floating solar cells. While some are located in and around glitzy metropolitan cities and cater to banking and financial services, a new breed of smaller data centers is being built in remote areas, enabling healthcare solutions via IoT devices.

Our data centers must adapt to survive the AI boom, meet ESG goals, process huge workloads, and deliver high-performance computing. They should be able to handle large language models and embrace automation to optimize processes. We are building, in the present, data centers that need to be “future-ready”. This requires some serious crystal-ball gazing. It’s a bit like booking a spot in a fancy-schmancy school for your as-yet-unborn child, all the while hoping he won’t later grow into a rambunctious teenager who will wreck your car on a midnight joyride with classmates who have stolen their fathers’ cigarettes. If you don’t get it right, you will either run up a huge hospital bill or sell your kidney to post your sixteen-year-old’s bail.

The idea of an adaptive data center is a bit like Hermione Granger’s beaded bag from the Harry Potter series. Unlike H.G. Wells’s Time Traveller, who carefully built a Time Machine but didn’t really have a strategy to deal with crisis situations in an unknown and unpredictable future, J.K. Rowling’s Hermione was a brilliant teenage witch with foresight. She had enchanted the bag with an Undetectable Extension Charm, so it could fit everything she could ever need - from helpful spellbooks to clothes, food, and even camping gear! Moreover, her quick thinking helped save Harry and Ron multiple times. Her compassion for elves helped forge a valuable alliance with a non-human race of magical creatures, who eventually played a key role in winning the battle at Hogwarts.

Similarly, when it comes to adaptive data centers, it isn’t just about scale; one must account for function, scope, sustainability and even the actual humans who will interact with the data center over its life cycle. Our digital infrastructure needs to be designed using intelligence and humanity, to truly tap into technology’s “magical” potential. With emerging technologies blurring the line between fact and fiction, we must concede that we are no longer making soup, as we stir together ingredients in the witch’s cauldron; we are cooking up a particularly potent kind of transfiguration spell - one that is enabling digital transformation. (Professor McGonagall would be proud!)


Retro-Fit Ready

Retrofit Your Data Centre with Liquid Cooling Solutions from Tate

The compute demands of AI, GPUs, and accelerated workloads are outpacing your infrastructure cycle. Tate’s liquid cooling manifolds make it easy to upgrade existing air-cooled data centres, allowing you to support increased densities without rethinking your architecture.

• Integrates into the hot aisle containment system

• Global production with local support

• Suitable for both new build and retrofit

AI Solutions, Fast Tracked.

Scan here to learn more
