
DCNN Edition 1 2026



Optimise your data centre for AI.

From infrastructure design to cooling and power, we help you overcome the challenges of AI.

Visit our booth D140 at Data Centre World London on 4-5 March to discover more.

DCNN is the total solution at the heart of the data centre industry

Scan the QR code to explore our full solutions portfolio and technical resources

elevate.excel-networking.com | elevate@excel-networking.com |

com/company/elevate-future-faster

WELCOME TO DCNN EDITION 1 | 2026

Welcome to the first edition of DCNN for 2026! Already, things have got off to a predictably busy and explosive start in the world of data centres. In the UK alone, we have seen a staggering number of construction projects both completed and announced, yet a proposal for a data centre in Edinburgh has recently been rejected. This starkly highlights what has become characteristic of the industry in recent times: a sector in constant flux amid unrelenting, exponential growth.

One piece of exciting news, which we have covered extensively in this edition, is the anticipation surrounding this year’s Data Centre World, taking place in London on 4–5 March.

CONTACT US

GROUP EDITOR: SIMON ROWLEY

T: 01634 673163

E: simon@allthingsmedialtd.com

ASSISTANT EDITOR: JOE PECK

T: 01634 673163

E: joe@allthingsmedialtd.com

ADVERTISEMENT MANAGER: NEIL COSHAN

T: 01634 673163

E: neil@allthingsmedialtd.com

SALES DIRECTOR: KELLY BYNE

T: 01634 673163

E: kelly@allthingsmedialtd.com

At the event, DCNN will have our very own stand (J189), where we invite all our readers to pop by for a chance to speak with the team, share a drink, and chat about all the opportunities in the magazine.

Lastly, from this year onwards, due to popular demand, we are increasing the frequency of DCNN publications. As such, this will be the first of six bi-monthly editions for 2026, meaning you will be able to keep up with all the ongoing developments in our sector even more often.

I do hope you enjoy the magazine, Joe

STUDIO: MARK WELLER

T: 01634 673163

E: mark@allthingsmedialtd.com

MANAGING DIRECTOR: IAN KITCHENER

T: 01634 673163

E: ian@allthingsmedialtd.com

CEO: DAVID KITCHENER

T: 01634 673163

E: david@allthingsmedialtd.com

Matthew Baynes of Schneider Electric details how the company will demonstrate its integrated power, cooling, and digital capabilities at Data Centre World 2026

Issue

Nathan Charles of OryxAlign warns of how AI-powered ransomware and automated attack tools are reshaping the threat landscape

Products

30 Carsten Ludwig of R&M examines how AI workloads are reshaping network design and why modular, repeatable connectivity frameworks are essential

34 Tim Doiron of Nokia outlines how Managed Optical Fibre Networks (MOFNs) are evolving to help hyperscalers meet soaring interconnect demands

38 Dan Hanson of Supermicro explains how advances in RDMA, congestion control, and traffic engineering have brought Ethernet fabrics to parity with InfiniBand for latency-sensitive AI workloads

41 Liam Taylor of MicroCare illuminates how disciplined cleaning processes are critical to tackling the operational impact of fibre contamination

DATA CENTRE ESSENTIALS

44 With retrofit projects and upgrades becoming more common, Dan Evets of ALICE Technologies suggests the adoption of generative scheduling platforms to mitigate the resultant impacts

46 Hans Obermillacher of Panduit outlines why cooling has become a defining architectural constraint in modern data centres

50 As Southeast Asia accelerates data centre development, Charles Russell Speechlys’ Henry Winter and Abdul Azeem explore where construction, reliability, and ESG disputes are emerging

53 Engineers from Black & White Engineering examine the technical and operational shifts, from AI-driven design to sustainability-led engineering, that are reshaping data centre delivery as densities rise and power access tightens

56 Daniel Thorpe of JLL assesses how accelerating AI adoption is reshaping data centre demand, investment, and innovation in 2026

60 Ton van de Wiel of Signify outlines why DC-powered LED lighting is emerging as a key consideration in making data centre infrastructure more efficient and resilient

CONSTRUCTION

63 As AI leads to larger data centres, Tate Cantrell of Verne warns design- and construction-phase decisions on power and cooling may shape what communities allow

66 Errol Bull of Momentive argues the case for reflective silicone coating as the preferred choice of roofing material for the data centres of tomorrow

70 Joao-Carlos Pereira Fialho of Cintoo evaluates the role reality capture can play in keeping processes organised and on track during the construction and ongoing operation of data centres

SPECIAL FEATURES

18 Show Preview

DCNN presents a comprehensive overview of what lies ahead at Data Centre World, taking place at Excel London on 4–5 March 2026

74 Advertorial

Baudouin presents its dedicated data centre genset range

87 Event Review

DCNN was delighted to be invited to attend AVK’s end-of-year event, ‘The Shape of Energy to Come’, hosted on 3 December at Pavilion City in London

POWER

76 Adhum Carter Wolde-Lule of Prism Power puts forward small modular reactors as a solution to the ongoing grid and power constraints caused by the rise of high-intensity data demand

78 David Keating of Echion Technologies explains why advanced energy storage is becoming increasingly central to maintaining resilience, efficiency, and power stability in data centres

81 Arturo Di Filippi of Vertiv explores how modern data centres are managing the variable fluctuations caused by fast-changing AI workloads

84 With the world’s biggest companies announcing major AI data centre expansions across the UK, Janitza’s David Gilligan and Roshan Rajeev provide some answers to the question: ‘Can Britain’s electrical infrastructure cope?’

KAO DATA DEMONSTRATES ‘SOCIAL VALUE BLUEPRINT’ FOR THE DATA CENTRE INDUSTRY

As Adam Nethersole, VP of Marketing at Kao Data (a developer and operator of data centres), notes, Environmental, Social, and Governance (ESG) initiatives have traditionally focused on energy efficiency and carbon reduction, yet they only address part of the ESG spectrum. As the UK data centre footprint expands beyond its traditional South East cluster, there is a growing need for operators to look at how they target the ‘S’ in ESG.

Kao Data’s ESG initiative in Greater Manchester, the Kao SEED Fund, has created a successful blueprint for the industry, showing how this can be done in partnership with the local community. The model enables small community groups and social enterprises to access small funding pots that empower them to deliver local environmental and social projects. On top of financial support, the company also offers the community groups mentoring and showcases their projects through marketing and local media.

The SEED Fund in Stockport was delivered in partnership with Sector 3, a local charity infrastructure support service. According to Adam, working with an established third-party organisation which is already well-respected was key for ensuring governance and due diligence, as well as building trust within the community and developing stakeholder relationships. One of the key aspects of the Fund is that it supports grassroots projects from the people who live and work in the local area, rather than “parachuting in” with well-meaning but potentially ineffective charitable ideas.

Through its first round, the Stockport SEED Fund distributed £30,000 across 20 community groups, backing projects that spanned environmental, social, and enterprise goals. Collectively, these projects have supported 359 individuals to date, and counting.

Following the success of the Stockport Fund, Kao Data has extended the SEED Fund model to Harlow and is planning a second Stockport Fund for 2026. Such programmes demonstrate how ESG can evolve from a corporate reporting exercise into meaningful, human-centred action that strengthens community relationships around digital infrastructure developments.

Kao Data, kaodata.com/initiatives/kao-seed-fund

AIRTRUNK TO OPEN NEW HYPERSCALE CAMPUS IN MELBOURNE

AirTrunk, an Australian hyperscale data centre operator, has announced the acquisition of a site in Melbourne’s north-west for its second campus in the city, to be known as MEL2.

With more than 354 MW of capacity, MEL2 will add over AUD $5 billion (£2.48 billion) in direct investment and increase AirTrunk’s total deployable capacity in Melbourne to more than 630 MW. Across MEL1 and MEL2, the company’s investment in the city’s digital infrastructure will exceed AUD $7 billion (£3.45 billion).

MEL2 is expected to create more than 4,000 jobs during multi-phase construction and over 200 direct roles once operational, with a further 1,000 full-time jobs supported across the local supply chain.

The campus will complement AirTrunk’s existing Australian facilities, providing additional geographic diversity for cloud and AI customers.

Nationally, the company will operate five campuses across Sydney and Melbourne, delivering a combined capacity of more than 1.2 GW.

NORTH EAST ENGLAND DATA CENTRE HUB LAUNCHED

A new, not-for-profit forum, the North East Data Centre Hub, has been founded by RED Engineering Design, Cleveland Cable Company, CMP Products, Durata, and RWO Associates. The companies will collaborate on building a stronger local engineering, construction, and digital supply chain to support data centre projects across the region and beyond.

With the North East positioned as one of Europe’s largest data centre and AI infrastructure locations – driven by government policy, energy availability, and hyperscale investment – the hub is intended to support growth through regular engagement and industry collaboration.

John McGee, Group CEO at Durata, says, “The hub provides an excellent opportunity for professionals in the sector – from developers and operators through to consultants and suppliers – to collaborate, share innovation, and exchange best practice.”

To mark the launch, the consortium will host its first networking event on 25 February at Liberty House in Newcastle. The event is already fully booked.

North East Data Centre Hub, 147734062.hs-sites-eu1.com/ne-datacentre

AirTrunk, airtrunk.com

PULSANT OPENS HIGH-DENSITY UK FACILITY OUTSIDE LONDON

UK data centre operator Pulsant has completed a £10 million investment in a new, high-density data hall at its Milton Keynes site, SE-1. The facility has been developed to support increased demand for artificial intelligence and advanced computing workloads, with the expansion forming part of Pulsant’s national platformEDGE framework, extending high-performance, UK-based infrastructure outside the London market.

The 1.2MW expansion is designed for high-density computing applications, including AI, machine learning, and accelerated workloads. These use cases are commonly associated with sectors such as financial services, healthcare, biotechnology, IT, and gaming.

Rob Coupland, CEO at Pulsant, comments, “UK digital infrastructure is facing unprecedented demand. With AI-ready capacity in short supply, bringing high performance, flexibility, and choice to regional locations is critical.

“For organisations looking for ultra-low latency, international connectivity, and UK sovereign compute power, Milton Keynes is a great option compared to constrained and costly London data centres, which lack the opportunity for expansion.”

Pulsant, pulsant.com

TELEHOUSE CANADA AND MEGAPORT PARTNER TO EXPAND CLOUD OPTIONS

Data centre operator Telehouse Canada has partnered with Network-as-a-Service provider Megaport to expand cloud connectivity across its Canadian data centres.

The agreement allows customers to access Megaport’s global ecosystem, which includes more than 280 cloud on-ramps and over 300 service providers. Organisations can establish scalable, private connections to major cloud platforms and IT services directly from Telehouse Canada facilities, supporting hybrid, multi-cloud, and traditional enterprise workloads.

Through the Megaport Portal, customers can design flexible network architectures, while services such as Megaport Cloud Router enable direct data transfer between multiple cloud environments. API-based integration also allows automated deployment and management.

The partnership provides access to Megaport’s AI Exchange, a connectivity ecosystem for AI workloads linking GPU-as-a-Service providers, neocloud platforms, third-party AI models, and compute and storage resources.

Telehouse Canada and Megaport plan to continue developing the collaboration, aiming to strengthen secure, high-performance digital infrastructure for Canadian organisations and global connectivity needs.

Telehouse Canada, telehouse.ca

TES POWER TO DELIVER MODULAR POWER FOR SPANISH DC

TES Power, a provider of power distribution equipment and modular electrical rooms for data centres, has been selected to deliver 48 MW of modular power infrastructure for a new greenfield data centre development in Northern Spain, designed to support artificial intelligence workloads.

As part of the project, TES Power will design and manufacture 25 fully integrated 2.5MW IT power skids. Each skid is a self-contained module incorporating cast resin transformers, LV switchgear, parallel UPS systems, end-of-life battery autonomy, CRAH-based cooling, and high-capacity busbar interconnections.

The skids are designed to provide continuous power to critical IT loads, with automatic transfer from mains supply to battery and generator systems in the event of a supply disruption – a requirement increasingly associated with AI-driven data centre operations.

Michael Beagan, Managing Director at TES Power, notes, “This project reflects exactly where the market is heading: larger, higher-density facilities that cannot tolerate risk or delay.”

TES Power, tesgroup.com

EUDCA PUBLISHES ITS NEW 2026 REPORT

The European Data Centre Association (EUDCA), the representative body of the European data centre community, has announced the publication of its 2026 State of European Data Centres report.

Building upon regional benchmarks established in last year’s report, the new data reveals a European market that has moved beyond the era of hub-centric development and is evolving into a distributed, energy-integrated, and AI-driven digital ecosystem.

Europe’s data centre sector is shown to be entering a period of exceptional expansion, structural diversification, and rapid technological transformation, driven by AI hyper-expansion. However, its ability to fully exploit potential growth is threatened by energy availability and access.

The new EUDCA report finds that European market growth is occurring not only within traditional centres – such as Frankfurt, London, Amsterdam, Paris, and Dublin (FLAPD) – but is also rapidly decentralising across Southern Europe, the Nordics, Central and Eastern Europe (CEE), and selected Tier 2 metros.

Moving from cloud-led growth to AI demand, data centres are now recognised as critical infrastructure underpinning Europe’s competitiveness and security.

EUDCA, eudca.org

BUILDING FOR AI AT SCALE – ARE YOU READY?

Matthew Baynes, Vice President, Secure Power & Data Centres, UK & Ireland at Schneider Electric, details how the company will demonstrate its integrated power, cooling, and digital capabilities at Data Centre World 2026.

As the global competition for AI leadership intensifies, the UK is stepping up in its mission to become an ‘AI Maker’. As demand increases, so too does the need for the secure, scalable, and sustainable infrastructure to accommodate it. The UK ranks among the world’s top three data centre markets, and the industry sits at the core of the country’s AI ambitions, with the Government’s AI Opportunities Action Plan now designating data centres as critical national infrastructure (CNI).

Data Centre World in London is the industry’s largest gathering of professionals and end-users. During the event, as the UK’s energy technology provider, Schneider Electric will explore how we can scale AI infrastructure.

THE IMPACT OF INVESTMENT AND AI GROWTH ZONES

As previously mentioned, with the Government’s AI Opportunities Action Plan being backed by investment from big tech, data centres are now considered critical national infrastructure.

This has opened the gates for large-scale innovation, investment, and opportunities. From Stargate UK to Google’s £5 billion commitment to AI infrastructure, announcements by major global technology companies have all strengthened the UK’s leadership position.

Exploring the UK’s position in the data centre market, on 4 March at 11.05am, I will discuss the importance of scaling AI responsibly in the UK, prioritising energy efficiency and innovation in data centres.

LIQUID COOLING: MEETING THE CHALLENGE OF DENSITY

As rack densities soar to support AI workloads, the challenge is no longer whether to adopt liquid cooling, but how to deploy it effectively at scale. On 4 March, 12.05-1.15pm, Andrew Whitmore, Vice President of Sales at Motivair by Schneider Electric, will chair a panel discussion on tackling liquid cooling challenges in data centres, and will unpack the innovations, risks, and realities behind the technology. During the session, Andrew will be joined by Karl Harvard, Chief Commercial Officer at Nscale; Ian Francis, Global Design and Engineering SME at Digital Realty; and Petrina Steele, Business Development Senior Director at Equinix.

HOW AGENTIC AI TRANSFORMS DATA CENTRE SERVICES

While AI is driving demand for data centre capacity, it is also transforming how these facilities are operated and maintained. On 5 March, 11.15-11.45am, Natasha Nelson, Chief Technology Officer at Schneider Electric’s Services business, will deliver a keynote exploring how agentic AI can transform data centre services at scale.

During the session, Natasha will explore the transformative role of agentic AI and Augmented Operations in delivering highly skilled technical services – both remotely and on site – for electro-sensitive environments such as large-scale data centres. She will unpack how AI-powered decision-making and human expertise can create a new era of service excellence, where every intervention is smarter, faster, and more sustainable.

BUILDING RESILIENT, END-TO-END, AI-READY DATA CENTRES

At Stand D140, Schneider Electric will showcase its complete, end-to-end, AI-ready data centre portfolio, enabling scalable, resilient, and sustainable AI infrastructure. Our solutions cover:

• Integrated power train — including Ringmaster AirSeT switchgear, Galaxy UPS, iLine busbar, and 800VDC sidecar

• Hybrid cooling solutions — including Motivair by Schneider Electric’s liquid cooling and coolant distribution units (CDUs)

• All-in-one modular infrastructure — AI POD (EcoStruxure Pod Data Centres) and Modular Data Centres

• Lifecycle Services — to support compliant and optimised operations

Our integrated power chain begins with the Ringmaster AirSeT compact switchgear, directing high-voltage power and preventing overloads. The Galaxy UPS systems provide resilient backup, keeping AI servers running continuously. Inside facilities, the iLine busbar replaces cable complexity with overhead power bars, while the 800VDC sidecar delivers direct current to racks, avoiding conversion losses.

Lifecycle Services orchestrate this system as a seamless whole, with the Galaxy UPS keeping power under control so that essential cabling can be repaired rapidly and safely. This de-risks expansion, ensures UK regulatory compliance, and delivers efficient, long-term AI infrastructure.

Together, these solutions demonstrate a fully integrated, AI-ready architecture, showcased digitally and in physical format at the stand. Experts from Secure Power, Digital Energy, and Power Products divisions will also be present to explore how these technologies enable UK organisations to lead the AI race.

SOFTWARE AND DIGITAL SERVICES

Our DCIM software solutions and services safeguard AI operations through monitoring, optimisation, and digital modelling. These include:

• EcoStruxure Data Centre Expert

• AVEVA and ETAP Digital Twins

• EcoStruxure Building Operation

• Power Monitoring Expert

The software pods demonstrate comprehensive digital solutions for monitoring, controlling, and optimising infrastructure.

EcoStruxure Data Centre Expert provides real-time power and cooling visibility, while AVEVA and ETAP Digital Twins enable simulation, design, and automation of critical systems.

EcoStruxure Building Operation facilitates secure data exchange from third-party energy, HVAC, fire safety, and security systems. Power Monitoring Expert (PME) delivers electrical system insights for improved performance and sustainability, connecting smart devices across electrical systems and integrating with process controls for real-time monitoring.

Join us at Stand D140 during Data Centre World in London to be part of the conversation on scaling sustainable, efficient, and resilient data centres together.

Schneider Electric, se.com

Scale AI & Accelerated Compute Faster

EcoStruxure™ Pod for AI

Save time, reduce costs, and de-risk deployments with configurable, prefabricated IT infrastructure pods that adapt to any power and cooling architecture.

EcoStruxure™ Rack for AI

Maximise compute per rack and deploy faster with high performance rack infrastructure designed for the diverse networking, storage and computing demands of AI clusters.

Visit our booth D140 at Data Centre World London on 4-5 March to discover more.

DATA CENTRE CYBER SECURITY UNDER STRAIN

Nathan Charles, Head of Customer Experience at OryxAlign, warns of how AI-powered ransomware and automated attack tools are reshaping the threat landscape.

Ransomware groups and criminal networks now rely on automated toolkits that move with a speed few organisations can match. Recent threat analysis shows that many firms are struggling to keep pace as AI-powered attacks reshape how intrusions are planned and executed. For data centre operators, this shift carries particular weight, given the need to maintain availability across complex estates that support multiple services and customers.

Traditional security tools built around signature updates or static rules were never designed for this environment. AI alters code constantly and reshapes its own signals, which unsettles controls that depend on stable patterns. Automated probing can test weak points repeatedly and at high frequency, leaving teams to work through growing volumes of alerts while underlying threats continue to evolve. Recent research (Darktrace, 2025) reflects this pressure, stating that 78% of CISOs now admit AI-powered cyber threats are having a significant impact on their organisation. The challenge becomes sharper once monitoring enters the frame. Automated systems now scan networks and endpoints continuously, though their outputs often require human context before teams can trust what they see. Signals may sit close to normal operational behaviour, particularly across shared infrastructure and live environments, which makes interpretation more difficult under time pressure. Attackers increasingly rely on AI to shape indicators that sit comfortably within routine activity, which makes it harder for teams to separate genuine threats from background noise.

Resilience takes shape when teams have clear visibility and the confidence to apply judgement as situations develop. For data centre operators, monitoring works best when it builds a steady picture of system behaviour and gives teams enough context to respond with confidence as conditions change. That balance underpins day-to-day confidence and helps data centre teams sustain availability across complex environments.

OryxAlign, oryxalign.com

Integrated Modular Data Centres: Strategic solutions for pressing issues!

DCs are facing growing challenges like rising power demands, labour shortages, rapid growth of AI workloads... Traditional approaches are often too slow, costly, and unsustainable where speed, efficiency, and scalability are required.

R&M addresses this with modular, ready-to-use solutions. These support key areas including servers and storage, computing rooms, meet-me rooms, and interconnects.

Scan to contact us.

R&M (Reichle & De-Massari AG), rdm.com

DATA CENTRE WORLD 2026: THE LARGEST DATA CENTRE EVENT IN THE WORLD

DCNN presents a comprehensive overview of what lies ahead at Data Centre World, taking place at Excel London on 4–5 March 2026.

Technology leaders are once again preparing to gather in London for Data Centre World, which the organisers describe as “the event dedicated to the foundations that make data centres work in the real world.” This year’s show is set to cover a myriad of critical topics which are affecting the industry today. The key conference themes for 2026 are as follows:

REDEFINING DATA CENTRES: SUSTAINABILITY, RESILIENCY, AND TECH INNOVATION

As AI, cloud, and edge computing accelerate demand, data centres must become smarter, greener, and more resilient. Explore how net zero strategies, automation, and next-generation infrastructure are helping the industry respond to climate pressure, energy constraints, and geopolitical risk.

CULTIVATING A PEOPLE-FOCUSED DATA CENTRE WORKFORCE CULTURE

A skilled, inclusive, and resilient workforce is critical to data centre success. Discover how organisations are building human-centric cultures that attract talent, support growth, and enable long-term sustainability.

HYPERSCALE TO SOVEREIGN: EXPLORING REGULATION IN THE DATA CENTRE ERA

Regulatory pressure now extends beyond energy and sustainability into digital sovereignty and localisation. Learn how operators are adapting to evolving rules around data residency, cross-border transfers, and national compliance requirements.

AI-DRIVEN INNOVATIONS IN DATA CENTRE DESIGN FOR EFFICIENCY

AI is transforming data centre design and operations. Explore advances in thermal management, power optimisation, and high-density layouts, alongside intelligent resource management, predictive maintenance, and AI-assisted capacity planning.

PROTECTING THE DATA CENTRE: SECURITY IN THE MODERN WORLD

As data centres become more critical and complex, security is paramount. Examine how operators are responding to cyber, physical, and regulatory threats through AI-driven detection, zero-trust architectures, and advanced access controls.

CIRCULAR ECONOMY AND WASTE MANAGEMENT

Circular economy principles are becoming essential to sustainable operations. Learn how operators are reducing waste, extending asset lifecycles, and building more regenerative infrastructure through reuse, refurbishment, and smarter design.

On top of the conference sessions, attendees will also have the chance to visit and explore a vast array of exhibitor stands. In the following pages, we have compiled a selection of exhibitors who will be showcasing their offerings at the show so you can gain an early insight into what’s in store.

Data Centre World, techshowlondon.co.uk/data-centre-world

THE DATA CENTRE’S SHIELD AGAINST ERRORS

In an industry where a single unplugged cable can stall a production line, “good enough” labelling isn’t an option. A leading automotive manufacturer faced a challenge: its cabling was becoming a maze of human error, threatening the uptime of its mission-critical services.

The solution wasn’t just better labels; it was a standardised identification ecosystem. By deploying industrial-grade materials and the high-volume BradyPrinter i7100, as well as the handheld M610, the manufacturer ensured that every rack and server remained clearly identifiable under any conditions.

This move towards precision eliminated the guesswork that leads to accidental disconnections. The result? A solid infrastructure where technicians move with confidence. Operational resilience starts at the surface, with a reliable label that stays readable. To learn more about reliable identification solutions for data centres, visit the Brady website. You can also meet Brady experts at Data Centre World at the stand below.

Brady, brady.co.uk Stand F175

ROXTEC TO SHOWCASE CABLE AND TRANSIT SYSTEMS

Roxtec, whose UK and Ireland operation is based in Bury, Greater Manchester, manufactures cable and pipe transit systems. It provides specialised seals for cable and pipe penetrations that secure data centres against fire, water entry, and air leakage, and which protect against the electromagnetic interference that can create outages.

Its new Roxtec FlamePlus transit system has been designed especially for data centres to meet the need for speed, simplicity, flexibility, and reliable protection. It offers a lightweight, fire-rated solution for multiple cables and pipes.

The company has also extended its product range to make it easier to seal underground entries for large cables, pipes, and conduits, helping avoid the risk of costly downtime and power outages.

Roxtec will be showcasing its products, including the Roxtec FlamePlus and Roxtec Software Suite digital toolkit, at the event.

Roxtec, roxtec.com Stand B135

GETTING AHEAD OF THE GAME ON COOLING SYSTEM DESIGN

As data centres evolve to support higher rack densities, AI workloads, and new cooling architectures, attention is increasingly focused on the performance and resilience of chilled water systems. While equipment such as chillers, pumps, and heat exchangers naturally attract scrutiny, valves often receive far less consideration despite their critical role in controlling, protecting, and balancing cooling circuits.

Cooling systems in data centres are required to operate at higher flow rates, tighter tolerances, and under more demanding conditions than ever before. Valve performance directly affects system stability, energy efficiency, and maintainability. Poor selection can lead to flow imbalance, restricted capacity, premature wear, or unplanned downtime, all of which carry operational and financial risk.

Material choice is a key consideration. Water quality, environmental exposure, and operating temperature can all influence long-term valve performance. In some applications, stainless steel or enhanced coatings may be required to protect against corrosion, while in others, a standard specification may be sufficient. Selecting a valve purely on cost, without considering the wider operating environment, can create issues later in the asset lifecycle.

Consistency and standardisation also play an important role. Data centre projects often involve thousands of valves across multiple systems. Standardising specifications where possible can simplify installation, reduce the risk of errors, and make future maintenance more straightforward. It also helps ensure spare parts availability and predictable performance as facilities expand or are upgraded.

Increasingly, early engagement on valve selection is proving beneficial. Involving valve specialists during the design and specification stages allows system requirements, access constraints, and future expansion plans to be considered upfront. This approach helps reduce rework, minimise delays during installation, and support long-term reliability.

With experience supporting data centre cooling applications across a range of project types, ERIKS works with designers and contractors to ensure valves are selected, specified, and applied appropriately for demanding chilled water systems.

To learn more about valve considerations for data centre cooling applications, visit the ERIKS website.

ERIKS, eriks.co.uk Stand F6

ERIKS TO SHOWCASE VALVE EXPERTISE

As AI-driven workloads place increasing demands on chilled water systems, ERIKS UK & I, which has recently become a Rubix company, is showcasing its valve expertise for data centre cooling applications on Stand F6.

Higher rack densities and new cooling architectures are placing greater strain on mechanical infrastructure, making valve reliability, consistency, and correct specification increasingly critical to system performance and long-term resilience.

ERIKS supports data centre HVAC and chilled water systems with a broad portfolio of valve technologies covering isolation, regulation, and protection functions. Available across a range of sizes, materials, and actuation options, the offering helps engineers standardise specifications while accommodating differences in system design and future expansion.

The company encourages earlier engagement on valve selection during design and specification to help reduce the risk of rework, delays, or premature failure. Visit ERIKS to discuss valve requirements for data centre cooling applications, or learn more on the company’s website.

ERIKS, eriks.co.uk Stand F6

RIELLO EXTENDS MODULAR MULTI POWER2 SERIES WITH M2X MODEL

Critical power protection specialist Riello UPS has expanded its Multi Power2 modular series with the new M2X option. Designed for smaller data centres and similar mission-critical applications, the company says the M2X can protect from 15 kW to 120 kW in a single cabinet.

It comes in a choice of three hot-swappable 2U power modules (15 kW IGBT, 15 kW Silicon Carbide, or 30 kW Silicon Carbide) and two cabinets (PCS, a standard power cabinet with integrated input, bypass, manual bypass, and output switches; or CBC, a combo cabinet with integrated batteries).

Both systems support N+1 configurations, flexible combinations of 15 kW and 30 kW modules, and parallel connection of up to six cabinets for a total maximum power output of up to 720 kW.
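As a back-of-envelope illustration of how those module and cabinet figures combine, the short sketch below sizes an N+1 system; the 90 kW critical load is an assumption chosen for the example, not a Riello figure.

    # Hypothetical N+1 sizing sketch for a modular UPS such as the Multi Power2 M2X.
    # The critical load below is assumed purely for illustration.
    MODULE_KW = 30           # using 30 kW modules (15 kW modules are also available)
    CABINET_LIMIT_KW = 120   # maximum capacity of a single cabinet
    MAX_CABINETS = 6         # cabinets that can be connected in parallel

    load_kw = 90
    modules_for_load = -(-load_kw // MODULE_KW)   # ceiling division: 3 modules carry the load
    modules_n_plus_1 = modules_for_load + 1       # one extra module gives N+1 redundancy

    installed_kw = modules_n_plus_1 * MODULE_KW
    assert installed_kw <= CABINET_LIMIT_KW       # 4 x 30 kW = 120 kW fits in one cabinet

    print(f"N+1 for {load_kw} kW: {modules_n_plus_1} x {MODULE_KW} kW = {installed_kw} kW installed")
    print(f"Maximum parallel system: {MAX_CABINETS * CABINET_LIMIT_KW} kW")  # 720 kW, as quoted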

Riello UPS will be showcasing the Multi Power2 range, including the new M2X, at Data Centre World. The team will be available on their stand to discuss the company’s full range of data centre solutions.

Riello UPS, riello-ups.com Stand D90

DAIKIN TO SHOWCASE DATA CENTRE SOLUTIONS AT DATA CENTRE WORLD 2026

Daikin, a Japanese manufacturer of air conditioning and refrigeration systems, will use the event to demonstrate how advanced cooling technologies and specialist expertise can support the sustainable growth of Europe’s rapidly expanding data centre sector.

Building on its strong market track record, Daikin Applied will showcase solutions designed to meet the evolving needs of colocation providers and hyperscalers. Visitors to the stand will be able to engage directly with Daikin’s data centre specialists and explore how the company supports projects from early design and engineering through to commissioning, operation, and long-term service.

Data Centre World in London is a key meeting point for operators and suppliers seeking practical, future-proof approaches to balancing performance, reliability, and sustainability. Daikin’s presence underscores its commitment to helping customers meet rising capacity demands, tighter energy efficiency targets, and increasingly complex data centre designs.

A TRUSTED PARTNER FOR MISSION-CRITICAL ENVIRONMENTS

At the show, Daikin will present an overview of its data centre portfolio, covering cooling solutions for a wide range of applications and design philosophies. A key feature of the stand will be a mock-up of the new Pro-W Slim fan array unit, designed to deliver high efficiency, scalability, and operational flexibility. The unit supports modular design concepts and is optimised for reliability, ease of maintenance, and precise airflow control.

The company will also showcase its new coolant distribution unit (CDU), designed to support liquid-cooled architectures and high-density applications.

Alongside these innovations, Daikin’s portfolio includes air- and water-cooled chillers, heat pumps, air handling units, CRAH systems, and integrated control solutions. Combined with specialist engineering support and lifecycle services, the company delivers tailored, end-to-end cooling solutions for data centres of all sizes and complexity levels.

Daikin, daikinapplied.uk/data-centre-solution Stand B140

ONE YEAR ON: HOW ELEVATE IS REDEFINING DATA CENTRE INFRASTRUCTURE AT SPEED

It feels like yesterday that Elevate – Future Faster launched at Data Centre World 2025. Since then, the team have been working closely with operators, integrators, and partners to understand where white space designs struggle under pressure, namely: how density is increasing, how airflow and power must evolve, and how programmes need to accelerate without increasing operational risk.

Now, as Elevate returns for year two at Data Centre World on Stand B180, it isn’t “new for the sake of new”; it’s a platform that closes the gap between what modern data centres demand and what infrastructure can realistically deliver – more density, more control, and more scale, without complexity creeping in through the back door.

Elevate was built as an integrated ecosystem: fibre, racks, aisle containment, power, and security engineered to work together with clean installation, clear labelling, and predictable operation. In its second year, that ecosystem has expanded significantly, with wider choices for high density fibre, more robust airflow strategies, and smarter power and physical security options designed to make scaling easier.

ADDRESSING TODAY’S DATA CENTRE CHALLENGES

Modern data centres face a familiar set of pressures: rising density, faster change cycles, and tighter operational guardrails. Elevate is designed to help teams keep pace.

Densification is no longer optional. Port counts rise, but physical space doesn’t. Elevate’s high-density fibre solutions – VSFF, MPO, and modular ODF architectures – deliver more ports in the same rack unit space while maintaining front access, bend radius control, and clear labelling. The goal isn’t only to fit more, but to manage more.

Thermal performance is another sticking point. As loads increase, improvised airflow tactics break down. Elevate’s hot and cold aisle containment is engineered to integrate properly with racks, cable pathways, and power routes. The result is stable airflow separation and higher cooling efficiency across mixed hardware environments.

Power, too, needs to evolve. It is no longer enough to energise a rack; operators need visibility, telemetry, and control. Elevate’s high-density intelligent power provides meaningful insight – usage, load, switching – so day two operations become more predictable and less prone to surprises.

Deployment speed matters as much as performance. To avoid delays and rework, Elevate prioritises pre-connectorised designs and engineered pathways. Pre-configured fibre assemblies and pre-populated ODF trays reduce on site variability, shorten install windows, and improve “first time right” outcomes.

Moreover, as estates grow, clarity becomes critical. Structured labelling, clean patch presentation, and tray level guidance help maintain consistency long after the initial build and far beyond the day one installation.

Fast, reliable availability rounds out the approach. Predictable supply chains and standardised configurations help teams maintain design intent and execute programmes without interruption.

ADVANCING THE ELEVATE PLATFORM FOR 2026

This year, Elevate introduces a number of key additions designed to meet the demands of increasingly dense, increasingly dynamic data centres:

1. VSFF ultra-high-density pre-connectorised fibre optics deliver far higher port density – up to 3,456 fibres – within standard 1U panel formats, reducing splicing, test cycles, and deployment time.

2. Hot aisle containment supports facilities optimised around hot air capture and reuse, improving thermal stability as densities rise.

3. High-density intelligent power adds the visibility and control required to balance loads, automate switching, and support safe change windows.

4. Intelligent rack locking delivers scalable, auditable access control.

5. High-density ODFs with pre-connectorised trays provide structured, repeatable patching fields with predictable routing and clear documentation.

Alongside these additions, the DCR Rack Series, cold aisle containment, and MPO high-density, pre-connectorised solutions return with refinements that make dense builds easier to construct, cool, and maintain.

These aren’t isolated features; they’re responses to real operator pressures, helping teams design once, scale confidently, and maintain operational clarity.

EXPERIENCE THE ELEVATE PLATFORM AT DCW LONDON

The most reliable way to evaluate infrastructure is to see the engineering up close. At DCW London, Stand B180, you can explore ODF trays, routing paths, containment interfaces, intelligent power options, and rack level access control, as well as discuss how Elevate can support your growth, densification, or refresh plans for 2026. And while you’re there, enter Elevate’s on-stand competition for a chance to win a pair of Apple AirPods.

Elevate, elevate.excel-networking.com Stand B180

ELEVATE VSFF: HIGH-DENSITY FIBRE FOR FUTURE-READY NETWORKS

As data centre demands intensify, the Elevate VSFF innovation delivers next-generation, high-density fibre connectivity, engineered for scalability and performance. Leveraging industry-leading SENKO SN and SN-MT technology, Elevate enables up to 3,456 fibres in a single 1U rack space, maximising capacity while minimising footprint.

Designed for hyperscale, edge, and enterprise environments, Elevate VSFF ensures robust, low-loss connectivity for mission-critical applications. Its modular design supports seamless migration paths, with integrated cable management and clear labelling for simplified installation and maintenance.

Whether you’re consolidating servers, deploying AI clusters, or future-proofing your infrastructure, Elevate VSFF offers cost-effective scalability and flexibility. The solution includes patch panels, cassettes, and cable assemblies, all optimised for high performance and easy integration.

To discover how Elevate VSFF can transform your white space, visit the Elevate website or contact elevate@excel-networking.com for more information and to find out where you can download the latest VSFF Family Brochure.

Elevate, elevate.excel-networking.com Stand B180

VERTIV LAUNCHES AI PREDICTIVE MAINTENANCE SERVICE

Ahead of Data Centre World, Vertiv, a global provider of critical digital infrastructure, has launched a new AI-powered predictive maintenance service, Vertiv Next Predict, aimed at modern data centres and facilities supporting AI workloads, including AI factories.

The managed service is designed to move maintenance away from time-based and reactive models, using data analysis to identify potential issues before they affect operations. Vertiv says the service supports power, cooling, and IT systems, with the aim of improving visibility and supporting more consistent infrastructure performance.

At the show itself, Vertiv is a Gold Sponsor, with the company stating that it is “moving beyond individual components to showcase the Integrated Unit of Compute” at the Vertiv VIP Lounge (DCW5).

This year, the company says its presence is centred around three core pillars: the AI-ready power train, the thermal chain, and speed of deployment. Additionally, the team will offer thought leadership sessions.

Vertiv, vertiv.com Stand D90

AFL: WHY DATA CENTRE LEADERS ARE HEADING TO STAND C110

Your AI clusters are hungry for bandwidth. GPU-to-GPU latency is make or break, and you’re being asked to scale yesterday, all while maintaining uptime, managing density, and staying within budget. AFL, a manufacturer of fibre optic cables and connectivity equipment, understands. It has engineered solutions specifically for these problems.

What you’ll experience at Stand C110:

• Hands-on demos

• Industry-first technology

• Solutions for your biggest bottlenecks

• Modular white space infrastructure you can deploy rapidly

• AI-GPU connectivity optimised for ultra-low latency compute fabrics

• High-density DCI solutions that maximise available space in cable ducts

• Pre-terminated, plug-and-play modules with full traceability to help you deploy faster

• Fujikura’s Multi-Core, Hollow-Core, and Mass-Fusion splicers in action – the precision tools that research labs and hyperscalers trust for next-generation fibre deployment

• Small-form-factor assemblies – reduce diameter, increase density, maximise airflow and cable pathways

• Test with confidence – advanced inspection tools that validate performance before the first packet flows

WHY AFL FOR HYPERSCALE DATA CENTRES?

• Globally available — consistent supply chain, wherever you build

• Proven reliability — supporting the world’s largest hyperscale networks

• Modular and scalable — grow your infrastructure without forklift upgrades

• Built for AI workloads — engineered for the bandwidth and latency demands of dense GPU clusters

WHO SHOULD VISIT THE STAND?

• Network engineers deploying or upgrading DCI links

• Data centre architects planning next-generation AI infrastructure

• Infrastructure leaders evaluating fibre solutions for hyperscale growth

• Operations teams seeking faster commissioning and maintenance workflows

READY TO ENHANCE HYPERSCALE EFFICIENCY?

Bring your toughest connectivity challenges to Stand C110 and see how AFL’s team is already solving the real-world problems you face with innovative solutions ready for immediate global deployment. Find out how its optical fibre experts can help you scale seamlessly across growing hyperscale deployments for AI and cloud.

AFL, aflglobal.com Stand C110

MOLEX TURNS INFRASTRUCTURE INTO ADVANTAGE

Today’s data centres and enterprises face pressure to move faster, scale seamlessly, and maintain uptime. The network infrastructure beneath it all can’t just keep up; it must lead the way.

For over 40 years, Molex has helped organisations rethink structured cabling as a strategic asset. Its ‘Enterprise Cabling Infrastructure’ delivers scalable copper and fibre systems for buildings across sectors. Its ‘Data Center Solutions’ extend that reliability to high-density optical fibre environments, enabling faster deployment and effortless growth.

With proven global expertise and end-to-end support, Molex turns infrastructure into advantage, backed by a 25-year System Performance and Application Assurance Warranty.

What sets Molex apart:

• Design for what’s next — infrastructure built to handle tomorrow’s requirements

• One ecosystem — copper, fibre, and accessories engineered to work seamlessly together

• Global reach — technical experts and installers in more than 50 countries

• Assured performance — every installation guaranteed for reliability and longevity

• Tailored collaboration — custom solutions engineered around you

Molex, molexces.com

CARRIER LAUNCHES CDU WITH 2°C ATD

Carrier, a manufacturer of HVAC, refrigeration, and fire and security equipment, has introduced a new coolant distribution unit (CDU) designed to support the growing use of liquid cooling in UK data centres while improving energy performance, resilience, and space utilisation.

The CDU uses modular heat exchangers that can deliver approach temperatures as low as 2°C, compared with more typical 4°C systems. According to Carrier, this can enable up to 15% chiller energy savings, allowing more electrical capacity to be allocated to IT loads rather than cooling.
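As a rough illustration of why a smaller approach temperature matters, the sketch below works through the setpoint headroom a 2°C approach buys over a 4°C one; the coolant supply setpoint used is an assumption for the example, not a Carrier figure.

    # Headroom gained from a lower CDU approach temperature (illustrative only).
    # The coolant supply setpoint below is an assumed value, not a Carrier specification.
    tcs_supply_c = 32.0       # assumed technology cooling system (coolant) supply setpoint
    approach_typical_c = 4.0  # the more typical approach temperature cited above
    approach_new_c = 2.0      # approach temperature of the new CDU

    fws_typical_c = tcs_supply_c - approach_typical_c  # facility water supply needed: 28 C
    fws_new_c = tcs_supply_c - approach_new_c          # facility water supply needed: 30 C

    print(f"Facility water can run {fws_new_c - fws_typical_c:.0f} C warmer "
          f"({fws_typical_c:.0f} C -> {fws_new_c:.0f} C) for the same coolant setpoint")
    # Warmer facility water lets chillers run more efficiently and extends free-cooling
    # hours, which is where savings of the order Carrier cites would come from.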

Oliver Sanders, Data Centre Commercial Director UK&I, Carrier HVAC, notes, “Data centre leaders across the UK are focused on increasing capacity without increasing risk.

“This new Carrier CDU supports that goal by giving operators greater thermal stability, more flexibility in system design, and better visibility of cooling performance. The result is improved energy efficiency and smoother scalability as liquid cooling demand grows.”

Carrier, carrier.com

EXPERT CLEANING FOR CRITICAL ENVIRONMENTS

IT Cleaning is the UK’s trusted authority in specialist IT and technical cleaning to ISO 14644-1:2022 Class 8, delivering expert services where precision is critical and failure is not an option. From data centres and data halls to server rooms and comms rooms, the company protects vital infrastructure with meticulous, industry-approved cleaning solutions.

Every service is carried out by highly trained technicians using advanced anti-static methods, designed to safeguard sensitive equipment and reduce operational risk. With minimal disruption and maximum attention to detail, IT Cleaning ensures technology environments remain clean, compliant, and performance ready.

Operating nationwide, the company cleans for organisations that demand absolute reliability, strict compliance, and exceptional standards. Its reputation is built on technical expertise, consistent delivery, and a no-compromise approach to quality.

For businesses that depend on uninterrupted IT performance, IT Cleaning is the specialist cleaning partner of choice.

DATA CENTRE CLEANING AND CONTAMINATION CONTROL SPECIALISTS

• Protection of equipment during refurbishment

• 24/7 nationwide service

A CLOSER LOOK AT CONNECTIVITY FOR AI DATA CENTRES

Carsten Ludwig, Market Manager DC at R&M, examines how AI workloads are reshaping network design and why modular, repeatable connectivity frameworks are essential.

AI and high-performance computing (HPC) are reshaping the data centre landscape and driving unprecedented workloads. Training clusters and inference farms are pushing rack densities to extremes, east–west traffic is now measured in terabits per second, and power envelopes are stretching traditional designs to their limits. Analysts and vendors estimate that facilities supporting large AI models may need up to five times more internal connectivity than “classic” hyperscale designs.

AI factories are designed for maximum efficiency, performance, and integrated energy management. They require modular, standards-based architectures that can scale quickly while avoiding vendor lock-in. Operators must extract the most value from every watt, deploy only the resources that are truly needed (racks, power, cooling, network, servers, storage), and work towards “utility net zero” designs.

Just like a pre-designed/modular approach to data centre development, a pre-defined structure for cabling is key to faster deployment.

To support current and future environments, connectivity must deliver both low latency and very high data volumes. In AI-driven facilities, the packing density of optical connectivity is breaking records: single rack units can now host several hundred fibre terminations, and modern platforms reach well beyond 400 connections per height unit. In parallel, cloud and hyperscale operators report that GPU clusters and AI “pods” demand ultra-high-density back-end fabrics to keep up with parallel GPU-to-GPU traffic.

Very Small Form Factor (VSFF) interfaces such as SN, CS, MDC, and newer multi-fibre variants (like MMC) multiply the number of fibres per front panel port, typically delivering around three times the density of duplex LC. Ultra-slim ribbon designs (down to 200 μm class fibres and very compact ribbons) allow cable counts to climb without blocking airflow. Such ribbons, combined with VSFF connectors, are key to making 400G-/800G-capable AI fabrics physically feasible in constrained racks.
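A back-of-envelope comparison makes that density jump concrete; the per-1U port counts below are assumptions chosen for illustration rather than figures for any specific R&M product.

    # Illustrative front-panel fibre density per 1U (all port counts are assumptions).
    DUPLEX_LC_PORTS_1U = 48        # a common figure for a classic duplex LC patch panel
    VSFF_DENSITY_FACTOR = 3        # article: VSFF gives roughly 3x duplex LC density
    FIBRES_PER_MT_FERRULE = 16     # multi-fibre (MT-style) ferrule, assumed

    lc_fibres = DUPLEX_LC_PORTS_1U * 2                                    # 96 fibres per 1U
    vsff_duplex_fibres = DUPLEX_LC_PORTS_1U * VSFF_DENSITY_FACTOR * 2     # 288 fibres per 1U
    vsff_multifibre = DUPLEX_LC_PORTS_1U * VSFF_DENSITY_FACTOR * FIBRES_PER_MT_FERRULE

    print(f"Duplex LC:        {lc_fibres} fibres per 1U")
    print(f"VSFF duplex:      {vsff_duplex_fibres} fibres per 1U")
    print(f"VSFF multi-fibre: {vsff_multifibre} fibres per 1U")  # thousands, as the article notes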

The key principle: densify fibre without sacrificing manageability. That means platforms should preserve bend radius, offer clear labelling and guided routing, and allow technicians to work on a single drawer or cassette without disturbing adjacent ones.

OBSERVABILITY BY DESIGN: FROM DCIM TO AI FABRIC TELEMETRY

Patch systems need to combine extreme density with integrated cable management and hooks for automated infrastructure management (AIM) sensors. That instrumentation is a prerequisite for the kind of AI-ready DCIM the market is moving towards: real-time visibility into which ports, fibres, and trunks carry which AI workloads, so that changes can be automated and audited instead of documented after the fact. A single mispatched link or unplanned dB of loss can strand GPUs or degrade training jobs for days.

Designing, building, operating, and optimising an “AI-ready” data centre calls for modular, repeatable building blocks, rigorous testing, and careful management of optical losses and redundancy.

DCIM should support operations managers in two core areas: asset management (knowing exactly what is in the data centre and how it is interconnected) and operational efficiency (real-time visibility into what is happening). This is essential for rapid root-cause analysis when a GPU island underperforms, change automation for large-scale re-cabling or pod expansion, and continuous verification that redundancy and Tier targets are being met.

IP-OVER-DWDM WITH COHERENT PLUGGABLES

At the WAN and DCI edge of the AI data centre, pressure is building to collapse footprint, power, and cost. DWDM and coherent pluggables (currently 400ZR, and soon 800ZR/ZR+) enable IP-over-DWDM – shrinking footprint, reducing power consumption and cost per bit, and simplifying upgrades to 800G and 1.6T. 800ZR can deliver dense 100/400/800GbE aggregation with better energy efficiency and simpler upgrade paths for AI DCI. For AI-heavy sites, this strategy delivers three wins:

• Lower power per bit, because the coherent DSP is integrated into router optics

• Reduced footprint and complexity, critical where whitespace is dominated by GPU racks and cooling plant

• Easier scaling to 800G and 1.6T, as the fibre plant and mux/demux layer are engineered once with enough headroom and future bandwidth arrives via new pluggables

However, it’s important to note that IP-over-DWDM only works if the underlying fibre plant is engineered with strict loss budgets, high connector quality, and predictable dispersion characteristics.
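To make the “lower power per bit” point concrete, here is a rough comparison; both wattages are assumptions for the sketch, not vendor or standards figures.

    # Illustrative energy-per-bit comparison: coherent pluggable in a router port
    # versus an external transponder shelf. Wattages are assumptions only.
    ZR_PLUGGABLE_W = 20.0        # assumed draw of a 400ZR coherent pluggable
    TRANSPONDER_SLICE_W = 70.0   # assumed draw of one 400G wave on an external transponder
    LINE_RATE_BPS = 400e9

    def pj_per_bit(watts, bps):
        """Energy per transported bit, in picojoules."""
        return watts / bps * 1e12

    print(f"400ZR in router:      {pj_per_bit(ZR_PLUGGABLE_W, LINE_RATE_BPS):.0f} pJ/bit")
    print(f"External transponder: {pj_per_bit(TRANSPONDER_SLICE_W, LINE_RATE_BPS):.0f} pJ/bit")
    # The pluggable also removes a shelf from the whitespace, which is the footprint win
    # noted above; real savings depend on the specific platforms being compared.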

REPEATABLE BUILDING BLOCKS AND RIGOROUS TESTING

Given AI’s growth rate, bespoke designs are a liability. Most successful operators aim for repeatable building blocks: AI pods, or rows defined by a fixed number of racks, power feeds, cooling capacity, and connectivity slots. Connectivity inside each block should be defined by standardised cassette and trunk topologies (e.g. BASE-8 or BASE-12), pre-terminated, factory-tested links to reduce on-site splicing and errors, and clearly documented migration paths from 100G to 400G, 800G, and beyond.

Because loss budgets at 400G/800G are tight, test discipline is essential. Best practices include end-face inspection and cleaning for every connector, Tier-1 and Tier-2 testing (including OTDR) for new and modified links, and maintaining design-level loss budgets and regularly re-validating them as plant evolves.
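A minimal loss-budget check of the kind described above might look like the sketch below; every figure (link length, per-connector loss, channel budget) is an assumed, typical value rather than one taken from a specific standard or R&M data sheet.

    # Minimal structured-link loss-budget check (all values are illustrative assumptions).
    FIBRE_KM = 0.15            # 150 m link inside the hall
    ATTEN_DB_PER_KM = 0.4      # assumed single-mode attenuation
    CONNECTOR_LOSS_DB = 0.35   # assumed mated-pair insertion loss
    SPLICE_LOSS_DB = 0.1       # assumed fusion-splice loss

    n_connectors = 4           # e.g. two cassettes plus two patch points in the channel
    n_splices = 0              # pre-terminated trunk, no field splices

    link_loss_db = (FIBRE_KM * ATTEN_DB_PER_KM
                    + n_connectors * CONNECTOR_LOSS_DB
                    + n_splices * SPLICE_LOSS_DB)

    CHANNEL_BUDGET_DB = 3.0    # assumed insertion-loss budget for the optic in use
    margin_db = CHANNEL_BUDGET_DB - link_loss_db

    print(f"Design link loss {link_loss_db:.2f} dB, margin {margin_db:.2f} dB")
    # Tier-1 (light source / power meter) and Tier-2 (OTDR) results are then compared
    # against this design budget and re-validated as the plant evolves.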

REDUNDANCY AS A DESIGN PARAMETER

AI data centres must map redundancy explicitly to Tier objectives – not just at the power and cooling layers, but at the connectivity layer too. That implies:

• Dual or diverse fibre routes between AI pods, core, and DCI

• Independent patch paths and separate structured cabling “planes”, where business cases justify it

• Clear failure-domain definitions so that planned and unplanned work can be isolated without taking entire AI clusters offline

Modular connectivity platforms with integrated AIM help; operators can verify that A/B feeds really are diverse in the field, not just in the CAD drawing.

AI will keep pushing the limits of data centres. Facilities engineered for AI efficiency and performance will combine utility net zero-aligned energy architectures with modular, standards-driven connectivity – that is, dense VSFF-based fibre plants, observable high-density platforms, and IP-over-DWDM transport built on 400ZR today and 800ZR/ZR+ tomorrow. Combine that with repeatable building blocks, rigorous testing, and carefully engineered redundancy, and you get an AI data centre architecture that scales fast without sacrificing reliability, efficiency, or freedom of choice.

R&M (Reichle & De-Massari AG), rdm.com

Environmental monitoring experts and the AKCP partner for the UK & Eire.

How hot is your Server Room?

Contact us for a FREE site survey or online demo to learn more about our industry leading environmental monitoring solutions with Ethernet and WiFi connectivity, over 20 sensor options for temperature, humidity, water leakage, airflow, AC and DC power, a 5 year warranty and automated email and SMS text alerts.

projects@serverroomenvironments.co.uk

MORE MOFNs: COLLABORATIVE WHOLESALE TRANSPORT IN THE ERA OF RAPID DATA CENTRE GROWTH

Tim Doiron, VP Solution Marketing at Nokia Optical Networks, outlines how Managed Optical Fibre Networks (MOFNs) are evolving to help hyperscalers meet soaring interconnect demands.

2025 CapEx spending for the top four hyperscale companies was nearly $400 billion (£297.6 billion), according to Cignal AI’s latest estimates – a jump of more than 60% per year for the past two years. So, where is all this CapEx going? Data centres. More specifically, AI-enabled data centres with GPU-based computing infrastructure.

Across the globe, new data centres are breaking ground at a record pace, from greenfield industrial parks in major metros to repurposed brownfields. However, we know data centres don’t exist in isolation; they need high-speed optical connectivity to connect to other data centres and internet exchange locations, enabling the distribution of workloads, storage, and – increasingly – training large language models (LLMs). They then use the trained models for distributed inferencing or responding to generative AI queries from applications like ChatGPT, Gemini, or Copilot. This is where Managed Optical Fibre Networks (MOFN) come in.

As seen in Figure 1 below, data centre operators have three choices for optical data centre interconnect (DCI): leasing capacity from a communication service provider (CSP), building and managing their own private optical network, or embracing a MOFN hybrid model and collaborating with a network operator to deliver a tailored transport solution.

REGULATIONS, RISK, RESOURCES

Where hyperscalers need 10s or 100s of Tb/s, they tend to build and manage their own private networks. However, three considerations may push them towards a MOFN solution: regulations, risk, and resources. In parts of the world like India, hyperscalers are barred from providing optical transport services. In other places, like new markets in the Middle East or Africa, hyperscalers may reduce risk by leveraging a known incumbent for their optical transport services. In other cases, hyperscalers may simply lack the knowledgeable, in-region optical networking resources needed to support the network.

With MOFNs, hyperscalers drive the network architecture and optical vendor selection, but they avoid the ongoing operations and lifecycle management of the network, as that is the responsibility of the CSP. The hyperscaler gets the transmission capacity they need, when and where they need it, while the CSP secures a new customer and a recurring revenue stream, leveraging their skills, knowledge, and deep experience in running networks.

MOFNs ON THE MOVE

Hyperscalers, CSPs, and leading optical DCI vendors have collaborated on MOFNs for years, with network designs dominated by the same high-performance, embedded optical engines and compact, modular optical platforms that hyperscalers have been using in their own private networks.

Recently, however, a shift has begun, with more diversity and flexibility in MOFN designs than ever before. This evolution is being driven by massive data centre expansion, rapid AI adoption, and an emphasis on time-to-service delivery. Let’s look at three examples of how MOFN designs are accelerating time to service delivery and transforming networks. We refer to these three examples as the cookie-cutter, the thin transponder, and the L-band.

THE COOKIE-CUTTER

While every MOFN is unique, if you are a CSP and you find a recipe that works, you stick with it. In this instance, a CSP leveraged a prepackaged, compact, modular solution to deliver multiple terabits per second of transmission capacity between two colocated edge data centres, as seen in Figure 2 below. With this initial success, the solution was then standardised and rapidly deployed at additional edge data centres in other cities.

THE THIN TRANSPONDER

To date, MOFN networks have not relied heavily on coherent pluggables. However, with the availability of 800ZR/ZR+ coherent pluggables delivering up to 1,700 km reach, CSPs are taking notice. By enabling easy and operationally consistent deployment of multiple coherent pluggables per thin transponder sled, as seen in Figure 3, CSPs can halve their footprint and reduce power and CapEx by up to 40%. While the technology is still early in its lifecycle, CSPs are leaning into thin transponders and coherent pluggables to meet the capacity demands of hyperscalers and MOFNs.

THE L-BAND

As mentioned previously, we know that hyperscalers generally deploy compact, modular optical platforms for their private DCI networks. So how can a CSP with an incumbent regional, national, or cross-border traditional optical network deliver a high-capacity dedicated MOFN solution without building a new compact modular-based network? The answer lies in the L-band.

The vast majority of CSP networks were built as C-band only. That means the opportunity to deliver a dedicated hyperscale network exists simply by lighting up the L-band. That’s exactly what an EMEA CSP recently did to win a cross-border MOFN network, as shown in Figure 4 below. The CSP leveraged its extensible optical line system to deliver a dedicated L-band MOFN – faster, with lower costs, and with less risk than a greenfield build. On a recent DCI webinar, CSPs mentioned plans to utilise L-band expansion to secure future MOFN deals. So, while the EMEA network mentioned here may be an early L-band MOFN example, it clearly won’t be the last.

WRAPPING IT UP

Data centre expansion driven by massive hyperscaler capital investments and rapid AI adoption is fuelling growth in optical connectivity solutions. MOFNs offer a collaborative approach amongst hyperscalers, CSPs, and leading optical DCI vendors to efficiently meet high-capacity transport needs. While MOFN solutions have historically looked a lot like hyperscale private networks, MOFN diversity is growing due to service delivery time and cost pressures. Prepackaged cookie-cutter approaches, thin transponders with coherent pluggables, and L-band expansion on existing CSP networks are all part of the evolving MOFN landscape – and they won’t be the last optical network innovations we see in the AI era.

Nokia, nokia.com


SOLVING AI LATENCY AT SCALE: HOW ETHERNET IS CLOSING THE GAP WITH INFINIBAND

Dan Hanson, Director, AI Fabric Product Management at Supermicro, explains how advances in RDMA, congestion control, and traffic engineering have brought ethernet fabrics to parity with InfiniBand for latency-sensitive AI workloads.

Training large AI models depends on low-latency, high-throughput interconnects between GPUs. As model sizes increase and GPU clusters grow, the network becomes a limiting factor. While InfiniBand has traditionally been the go-to solution, ethernet fabrics have matured into a viable and cost-effective alternative for latency-sensitive AI workloads.

AI LATENCY CHALLENGES

At the core of distributed AI training lies the need for fast and synchronised communication between GPUs. Every training iteration involves multiple nodes processing data in parallel and then exchanging results with their peers. As cluster sizes grow, the speed and consistency of these exchanges directly impact job completion time (JCT) and overall performance.

A key challenge is not just average latency, but tail latency, the delay introduced when one or more messages complete significantly later than the rest. The result is a longer JCT, where the entire training process slows down because it must wait for the slowest messages to finish. These high-volume, latency-sensitive data exchanges between GPUs – often referred to as ‘Elephant Flows’ – are especially prone to tail latency issues.

In traditional deployments, InfiniBand has addressed these challenges with purpose-built silicon and a well-integrated RDMA software stack. However, new developments in ethernet technology are closing this gap, offering a more open and flexible option for high-performance AI fabrics.

ETHERNET’S MATURITY AS AN AI FABRIC

Ethernet has evolved significantly from its general-purpose networking roots. With the adoption of RDMA over Converged Ethernet version 2 (RoCEv2), Explicit Congestion Notification (ECN), and Dynamic Load Balancing (DLB), ethernet can now support the low-latency, high-throughput workloads for which InfiniBand was typically the first choice. These features enable lossless transport and fine-grained traffic management, making ethernet suitable for the performance demands of AI fabrics.

Today’s high-performance ethernet NICs and switches support RDMA to bypass the host kernel stack, reducing CPU load and enabling direct memory transfers between GPUs. This bypass avoids intermediate data copies and eliminates the overhead of CPU context switching, which is especially important in systems where GPUs far outnumber CPUs, as is the case in AI fabrics.

The open ecosystem of ethernet brings additional benefits. Network operators can choose from a broad set of vendors, tools, and software platforms, increasing flexibility while reducing lock-in. This also makes it easier to integrate AI networks into existing ethernet-based infrastructure, using familiar tooling and operational practices.

Recent tests comparing RoCEv2 ethernet with EDR InfiniBand show similar performance across a wide range of message sizes when using properly configured networks. While InfiniBand can offer slightly lower per-hop forwarding latency at the ASIC level, real-world performance is often dominated by the higher latency of transferring data between the NIC and GPU memory, especially over PCIe interfaces. Such results underscore how far ethernet has come as an AI-ready network fabric.

MANAGING TAIL LATENCY IN AI WORKLOADS

In large AI workloads, performance bottlenecks often arise not from compute limitations, but from delays in data movement across the network. When multiple GPUs must synchronise during training, they exchange significant volumes of data. These high-volume flows between accelerator nodes are especially sensitive to latency variations.

A single delayed transfer can hold up an entire iteration, making the longest transfer the determining factor in JCT. Minimising tail latency is therefore essential to keeping JCT within acceptable limits. Technologies such as ECN proactively signal congestion to the sender, while DLB redirects traffic away from congested paths in real time. These mechanisms help maintain predictable latency even under heavy network load.
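A simple simulation illustrates why the slowest transfer, not the average, dictates iteration time; the flow counts and latency values below are arbitrary assumptions, not measurements of any particular fabric:

```python
import random

random.seed(42)

def iteration_time(num_flows: int, tail_probability: float) -> float:
    """Return the completion time of one synchronised GPU-to-GPU exchange.

    Each flow normally completes in roughly 1.0 time unit, but with probability
    `tail_probability` it is delayed (e.g. by congestion) to 5.0 units. Because
    every peer must wait for the slowest flow, the iteration time is the maximum.
    """
    flows = [5.0 if random.random() < tail_probability else random.uniform(0.9, 1.1)
             for _ in range(num_flows)]
    return max(flows)

for p in (0.0, 0.001, 0.01):
    times = [iteration_time(num_flows=512, tail_probability=p) for _ in range(1000)]
    print(f"tail probability {p:.3f}: mean iteration time {sum(times)/len(times):.2f}")
```

Even a 0.1% chance of a delayed flow stretches a 512-flow exchange frequently, which is why mechanisms like ECN and DLB target the tail rather than the mean.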

Further improvements can be achieved through intelligent workload placement. For example, rail-optimised designs reduce the number of switch hops: ensuring that only a single silicon stage separates communicating GPUs, rather than the three to 11 stages seen in less optimised layouts, significantly lowers communication delays and overall data exchange time, mitigating tail latency.

ADDRESSING THERMAL AND POWER CHALLENGES

As AI workloads scale and interconnect bandwidth increases to 800G and beyond, power and thermal design limitations are becoming more pressing at the network fabric level. Ethernet switches supporting large GPU clusters must handle dense port configurations and continuous high-throughput workloads, which can drive up both heat output and energy consumption.

To address these challenges, data centre operators are increasingly adopting direct liquid cooling (DLC). By removing heat directly at the silicon or optical interface, DLC enables higher sustained performance while reducing the reliance on traditional air-based cooling systems. This helps lower overall power usage and improves data centre efficiency.

In parallel, co-packaged optics (CPO) is emerging as a solution to reduce power draw and improve signal integrity by integrating optics closer to the switch silicon. Combining CPO with DLC creates a path towards lower power usage effectiveness (PUE) while maintaining the bandwidth and latency performance required by AI workloads.

Ethernet switches incorporating these cooling and packaging innovations are becoming well suited for next-generation AI deployments, offering both operational efficiency and environmental sustainability.

ETHERNET HAS ARRIVED FOR AI

For organisations looking to scale AI infrastructure, ethernet no longer represents a compromise. Its ability to deliver low latency, high throughput, and operational efficiency – combined with open standards and compatibility with mainstream data centre environments – makes it a compelling choice. With continued advancements in RDMA, traffic management, and energy-efficient design, ethernet is now well positioned to support the next generation of GPU clusters at scale. It is becoming the fabric of choice for scalable, cost-effective, and future-ready AI deployments.

Supermicro, supermicro.com

FIBRE CLEANLINESS CHALLENGES IN HYPERSCALE DATA CENTRES

Liam Taylor, European Business Manager, Fibre Optics at MicroCare UK, illuminates how disciplined cleaning processes are critical to tackling the operational impact of fibre contamination.

The rapid expansion of hyperscale and colocation data centres has fundamentally changed the way optical networks are designed and maintained. Driven by cloud computing, artificial intelligence workloads, and high-bandwidth applications, modern facilities now rely on extremely dense fibre infrastructures to support vast volumes of internal data traffic while maintaining low latency and high reliability.

Ultra-high-fibre-count (UHFC) trunk cables are central to this densification. Where an 864-fibre trunk was once considered substantial, fibre counts of 1,728 fibres per trunk are now commonly deployed in large data centre environments. These higher fibre counts allow operators to scale capacity efficiently within constrained physical space, but they also increase the complexity of installation, inspection, and ongoing maintenance activities.

WHY FIBRE CLEANLINESS MATTERS AT SCALE

As fibre density increases, the margin for error at the connector interface becomes increasingly small. In single-mode optical systems, the fibre core is only a few microns in diameter, meaning that even microscopic contamination on an end face can interfere with light transmission. In UHFC environments, where thousands of connectors are handled during installation and maintenance, the probability of contamination rises significantly.

Poor fibre cleanliness typically manifests in several ways:

• Increased insertion loss caused by scattering or partial obstruction of the optical signal

• Elevated back-reflection that degrades signal integrity and can affect transmitter stability

• Intermittent faults or service interruptions caused by embedded debris or damaged end faces

In large data centres supporting latency-sensitive workloads, these effects can escalate from minor degradation to measurable operational impact.

INSPECTION STANDARDS AND BEST PRACTICE

To manage these risks, end face inspection and cleaning prior to connector mating is widely regarded as essential practice. IEC 61300-3-35, published by the International Electrotechnical Commission (IEC), defines acceptable levels of contamination and surface defects on fibre-optic end faces and provides a consistent framework for evaluating connector condition before a connection is made. Applying this standard helps ensure that connectors meet defined quality thresholds before they are introduced into the network.

Inspection alone, however, does not prevent contamination. Clean working environments and disciplined handling procedures play a critical role in reducing the introduction of dust, fibres, and residues during installation. Data centre build and maintenance activities often take place alongside other trades, such as electrical and mechanical installation teams, increasing the likelihood of airborne contamination if controls are not in place.

TRAINING AND HANDLING

As fibre replaces copper in more high-bandwidth and longer-reach applications, installation teams are increasingly required to work with optical systems at much higher densities than in the past. Effective training, therefore, extends beyond basic cleaning technique. Technicians must understand contamination sources, recognise the difference between removable debris and permanent surface damage, and follow a consistent inspect–clean–inspect workflow.
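In outline, that workflow is a short decision loop. The sketch below is illustrative only – the inspect() and clean() steps are placeholders for whatever inspection scope and cleaning method a team actually uses – but it captures the logic of never mating a connector that has not passed inspection:

```python
def inspect(connector) -> str:
    """Placeholder: return 'pass', 'contaminated', or 'damaged' against IEC 61300-3-35 criteria."""
    ...

def clean(connector) -> None:
    """Placeholder: apply the chosen cleaning method (wipe, stick, click-to-clean, or automated)."""
    ...

def prepare_for_mating(connector, max_clean_cycles: int = 3) -> bool:
    """Inspect-clean-inspect loop: only mate a connector that passes inspection."""
    for _ in range(max_clean_cycles):
        result = inspect(connector)
        if result == "pass":
            return True          # safe to mate
        if result == "damaged":
            return False         # permanent defect: repair or replace, never mate
        clean(connector)         # removable contamination: clean, then re-inspect
    return False                 # repeated failures: escalate rather than mate
```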

Handling discipline is particularly important in UHFC installations. End faces that have been cleaned and inspected can be quickly re-contaminated if protective caps are removed too early or if connectors are placed on unclean surfaces. Minimising unnecessary handling and keeping connectors protected until mating reduces the likelihood of repeat cleaning cycles and associated delays.

CLEANING METHODS AND MATERIAL CONSIDERATIONS

The materials and methods used to clean fibre-optic connectors have a direct impact on cleaning effectiveness, repeatability, and long-term network reliability. In high-density optical environments, even small variations in technique or material choice can have a disproportionate effect once systems are live.

General-purpose cleaning fluids such as isopropyl alcohol (IPA) and conventional wipes are sometimes used in the field, but they are not optimised for fibre-optic interfaces. IPA can leave residues if evaporation is uneven, particularly in humid environments, while lint-generating wipes risk scratching the ferrule surface or depositing fibres directly into the contact zone. In dense UHFC installations, these risks are amplified by limited access and the sheer number of connectors involved.

Purpose-designed fibre-optic cleaning materials address these challenges by combining controlled cleaning fluid purity with lint-free delivery mechanisms. High-purity optical cleaning fluids dissolve oils, salts, and particulate contamination without leaving residue, while lint-free wipes, precision cleaning sticks, mechanical click-to-clean tools, and touchless, automated cleaners provide controlled, repeatable cleaning action suited to high-density environments.

COMMON CONNECTOR CLEANING APPROACHES

Several connector cleaning approaches are now routinely used in data centre environments, each suited to different contamination levels, connector formats, and operational constraints. In most cases, optical-grade cleaning fluid is used alongside the chosen tool to loosen and lift contamination prior to removal.

• Optical-grade cleaning fluids — used across multiple cleaning methods to remove oils, salts, and microscopic contamination without leaving residue when applied in controlled quantities

• Lint-free wipes — typically used for accessible connector end faces, and effective for light particulate and film-based contamination when used correctly

• Precision cleaning sticks — designed for confined spaces such as adapter sleeves and transceiver ports, providing targeted mechanical action in high-density panels

• Mechanical click-to-clean tools — widely used in dense patching environments, delivering a consistent cleaning motion within adapters and ports

THE GROWING ROLE OF AUTOMATED CLEANING

Manual cleaning methods continue to play an important role in fibre-optic installation and maintenance, particularly where connectors require targeted mechanical action. As fibre counts increase, maintaining consistent results across large connector populations places greater emphasis on repeatable processes and appropriate tool selection.

To support this, automated, contactless cleaning systems are increasingly being introduced alongside manual methods in hyperscale and large colocation environments. These systems apply a precisely metered burst of optical-grade cleaning fluid, followed by filtered air, to remove contaminants and evaporate residual fluid. Because the process is contact-free, the risk of scratching or embedding abrasive particles into the ferrule surface is reduced.

Automated systems also provide uniform coverage across the entire end face, reducing the likelihood of residual debris migrating towards the fibre core after mating. In practice, many data centre teams now adopt a hybrid cleaning strategy, using manual methods where needed as well as automated systems to support high-throughput cleaning during commissioning and maintenance.

BEYOND MAINTENANCE

As fibre counts reach 1,728 and beyond, contamination at the connector interface has evolved from a maintenance concern into an infrastructure reliability risk. Achieving consistent cleanliness at scale requires more than appropriate tools; it demands structured workflows, trained personnel, and organisational commitment to inspection standards. In UHFC environments, the data centres that embed contamination control into their deployment strategy, rather than treating it as an afterthought, will maintain the performance and uptime advantages that hyperscale operations depend upon.

MicroCare, microcare.com

ARE DATA CENTRE UPGRADES WORTH THE COST OF UNEXPECTED DELAYS?

With retrofit projects and upgrades becoming more common, Dan Evets, Vice President at ALICE Technologies, suggests the adoption of generative scheduling platforms to mitigate the resultant impacts.

Retrofits are now a regular feature in the data centre lifecycle. However, familiarity does not equal simplicity. As pressure mounts to expand capacity and adapt to evolving technologies, asset owners are having to carry out complex upgrades more frequently and with less margin for delay. When even small disruptions add cost, the financial consequences can quickly become significant.

Unlike new builds, data centre retrofits are often carried out alongside live operations. Teams are expected to modify core systems such as power and cooling with as little interruption to ongoing service as possible. Ultimately, delays kill profit. We’ve seen examples where a single day behind schedule can lead to losses in the millions.

Legacy construction project scheduling systems tend to follow a rigid, linear format. They were never designed to model shifting site constraints or evaluate alternative delivery routes in real time. When timelines are tight and the programme changes, those tools offer little flexibility.

DE-RISKING THROUGH SIMULATION

To overcome this, many data centre operators are adopting generative scheduling platforms, like ALICE Technologies, that can simulate risk in advance. Rather than committing to a fixed approach, these tools allow project teams to test different options, assess the impact of potential disruption and cost, and develop more resilient plans.

Inputs like material lead times, labour availability, and weather forecasts can all be factored into early-stage planning. This not only gives project managers a clearer view of what could cause delays, but also prepares them to act quickly when timelines start to shift.

One North American data centre operator used this approach to address a major procurement issue. A 30-day delivery delay threatened to derail the project timeline. Using simulation software, the team tested a range of solutions. Within hours, they identified a plan involving targeted overtime for selected trades, which kept the project on track and avoided an estimated £25 million in lost revenue.
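The underlying comparison can be illustrated with a very simple model – the figures and option names below are invented for the example and are not a representation of how ALICE Technologies’ engine works:

```python
# Illustrative comparison of schedule-recovery options after a 30-day delivery delay.
# All figures are invented for the example.

DAILY_DELAY_COST = 1_000_000  # assumed revenue at risk per day of slippage

options = {
    "accept the delay":           {"days_recovered": 0,  "extra_cost": 0},
    "targeted overtime":          {"days_recovered": 30, "extra_cost": 2_500_000},
    "re-sequence + second shift": {"days_recovered": 25, "extra_cost": 4_000_000},
}

def total_impact(days_recovered: int, extra_cost: int, delay_days: int = 30) -> int:
    """Residual delay cost plus the cost of the mitigation itself."""
    residual_delay = max(delay_days - days_recovered, 0)
    return residual_delay * DAILY_DELAY_COST + extra_cost

for name, option in options.items():
    print(f"{name:28s} -> total impact ${total_impact(**option):,}")
```

Generative scheduling platforms explore far larger option spaces against real constraints, but the principle is the same: quantify each recovery route before committing to one.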

BUILDING SMARTER THROUGH REPETITION

The benefits become even greater when applied across a portfolio. While every upgrade has its own challenges, many follow a similar structure, particularly around infrastructure elements (like rack reconfigurations). Owners who use data from previous retrofits to inform future planning can cut risk, improve consistency, and shorten delivery times even further.

By developing repeatable strategies and refining them from project to project, operators avoid starting from nothing each time. They also gain greater control over live project decisions. If a delay does occur, general contractors are better supported by owners who can provide data-driven options, rather than relying on intuition.

RESPONDING TO MOUNTING DELIVERY PRESSURE

Retrofit projects and upgrades are becoming more common just as demand for data centre capacity grows rapidly. Meanwhile, constraints around grid connections, skilled labour, and equipment sourcing continue to intensify. Projects cannot afford to fall behind without risking serious financial costs.

In this environment, a reactive approach to project change is no longer enough. Owners need to anticipate disruption, evaluate multiple solutions, and ensure all approaches are flexible. Generative scheduling platforms like ALICE Technologies are making this possible, helping teams stay ahead of potential setbacks and keep revenue flowing. Most teams understand how to deliver a project. The challenge now is how to adapt that delivery process midstream, without losing time or profitability.

DATA CENTRES NEED EFFICIENT COOLING TO MEET THE NEXT GENERATION OF CUSTOMER DEMAND

Hans Obermillacher, Manager Business Development at Panduit, outlines why cooling has become a defining architectural constraint in modern data centres.

As the digital economy accelerates, data centres have become the physical backbone of modern society. From AI training clusters and high-performance computing (HPC) to cloud platforms and edge deployments, demand for compute continues to grow at pace. Yet, alongside this growth, the industry faces an equally pressing challenge: how to manage heat efficiently, sustainably, and at scale.

Cooling is no longer a secondary consideration in data centre design; it is now a defining architectural constraint, one that directly influences space planning, infrastructure selection, energy efficiency, and long-term operational viability. To meet customer expectations for performance, availability, and sustainability, data centres must be designed with sufficient space and flexibility to support both advanced air-based and emerging liquid cooling technologies.

SUSTAINABILITY AND SCALE

Sustainability has become a strategic imperative across the data centre lifecycle, from design and construction through to operation and eventual decommissioning.

In the UK alone, investment in data centre infrastructure is forecast to reach approximately $18.24 billion (£13.3 billion) by 2026, according to Ceres Property, reflecting the sector’s rapid expansion alongside mounting regulatory and environmental scrutiny.

Across Europe, data centre operators are working within increasingly stringent frameworks governing energy efficiency, carbon reduction, space utilisation, and thermal performance. Voluntary initiatives such as the Climate Neutral Data Centre Pact, alongside regulatory drivers and customer ESG expectations, are reshaping how facilities are specified and built.

In response, global infrastructure providers continue to re-engineer physical infrastructure systems to support a wide range of deployment models, from edge and enterprise facilities to hyperscale and colocation environments. The emphasis is shifting towards scalable, adaptable infrastructure that can evolve alongside rising power densities and changing cooling requirements.

THE CONVERGENCE OF COMPUTE, CABLING, AND CONTAINMENT

Modern data centres must integrate compute hardware, power delivery, network connectivity, and cooling into a unified physical infrastructure strategy. The technical drivers are well understood: rapidly increasing bandwidth requirements (800G and 1.6T), ultra-low latency fabrics (such as InfiniBand) for AI workloads, the protection of high-performance transmission media, and, critically, effective airflow and coolant management. To achieve this, physical infrastructure can be viewed across three interdependent architectural layers.

The compute frame consists of rack enclosures housing servers, storage platforms, and network switches. Today’s high-density racks, increasingly extending to 52U, must support heavier loads, higher thermal output, and, in many cases, liquid cooling interfaces. Structural integrity and compatibility with containment systems are essential, particularly as rack power densities exceed 30–50 kW in AI and HPC deployments.

The cabling pathway ensures that fibre and copper cabling remains organised, protected, and accessible throughout the facility. Maintaining minimum bend radii, preventing mechanical stress, and supporting high cable volumes are essential to preserving signal integrity at high data rates. Efficient cable selection can reduce bundle sizes, free up pathway space, and improve airflow, helping to mitigate heat build-up within cabinets and containment zones.

The containment and thermal environment is where cooling efficiency is ultimately determined. Hot and cold aisle containment systems physically separate supply and exhaust air, preventing recirculation and ensuring cooling systems operate as designed. Well-implemented containment improves temperature consistency, reduces fan energy consumption, and lowers overall cooling demand.

AIR COOLING REMAINS DOMINANT

Air cooling remains the dominant cooling method across the global data centre installed base. Its familiarity, relative simplicity, and compatibility with traditional IT layouts continue to make it the default choice for many operators.

However, the rapid growth of AI and HPC workloads is pushing air cooling towards its practical limits. According to the International Energy Agency, data centres in advanced economies already consume between 1% and 1.5% of the world’s electricity, with cooling representing a significant share of that consumption.

As rack densities increase, the volume of air required to remove heat rises sharply, demanding more space for airflow, larger containment structures, and higher fan power. At a certain point, air alone becomes inefficient, both technically and environmentally.
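The arithmetic behind that pressure is straightforward: at a fixed supply-to-return temperature rise, the airflow needed scales with the heat load. The rack powers and temperature rise below are illustrative assumptions:

```python
# Airflow required to remove a rack's heat load by air alone:
#   volumetric flow = power / (air density * specific heat * temperature rise)

AIR_DENSITY = 1.2       # kg/m^3, approximate at typical supply conditions
SPECIFIC_HEAT = 1005.0  # J/(kg*K) for air

def airflow_m3_per_s(power_kw: float, delta_t_k: float) -> float:
    return (power_kw * 1000.0) / (AIR_DENSITY * SPECIFIC_HEAT * delta_t_k)

for rack_kw in (10, 30, 50):                        # illustrative rack densities
    flow = airflow_m3_per_s(rack_kw, delta_t_k=12)  # assumed 12 K air temperature rise
    print(f"{rack_kw} kW rack: ~{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")
```

A 50 kW rack needs roughly five times the airflow of a 10 kW rack under the same conditions, which is where fan power, containment size, and whitespace allocation start to strain.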

MAKING SPACE FOR LIQUID COOLING

Liquid cooling is increasingly being adopted as a high-efficiency alternative (or complement) to air-based systems. Technologies such as direct-to-chip liquid cooling enable heat to be removed closer to the source, using far less energy and water than traditional air handling methods.

Industry research from organisations such as ASHRAE and the Uptime Institute highlights that liquid cooling can deliver substantial reductions in cooling energy consumption while enabling much higher rack power densities. However, its successful deployment depends on physical infrastructure that has been designed or retrofitted to accommodate it.

This is where space becomes critical. Liquid-cooled environments require room for additional pipework, manifolds, heat exchangers, and service access, as well as clear separation between wet and dry zones. Cabinets, containment, and pathways must all be engineered to support hybrid cooling models, allowing air and liquid systems to coexist as workloads evolve.

SUSTAINABILITY BEYOND COOLING

While cooling efficiency is a major sustainability lever, it is only one part of a broader environmental responsibility. Infrastructure design choices also influence material usage, logistics emissions, and end-of-life outcomes.

Smaller-diameter, high-performance cabling reduces raw material consumption and allows more product to be shipped per pallet, lowering transport emissions. Preconfigured infrastructure systems simplify installation, reduce packaging waste, and minimise on-site labour and disruption.

Packaging innovation is also making a measurable difference. The removal of single-use plastics, the adoption of FSC-certified paper labelling, and the shift towards digital documentation all contribute to waste reduction across the supply chain. Environmental Product Declarations (EPDs) increasingly support customer sustainability targets, including LEED and BREEAM accreditation.

DESIGNING FOR WHAT COMES NEXT

Data centres are no longer static facilities; they are dynamic environments that must adapt to new technologies, higher power densities, and changing customer expectations, often within the same physical footprint. Making space for both air and liquid cooling is not simply a technical requirement; it is a strategic design decision that underpins long-term resilience and sustainability.

As the backbone of the data-driven world, data centres must continue to demonstrate leadership in energy efficiency, material responsibility, and operational transparency. By investing in flexible physical infrastructure, one that recognises the spatial realities of modern cooling, operators can meet today’s demands while remaining ready for what comes next.

Panduit, panduit.com

ARBITRATING DATA CENTRE DISPUTES IN SOUTHEAST ASIA: THE RISE OF SIAC

As Southeast Asia accelerates data centre development, Charles Russell Speechlys’ Henry Winter (Partner, Singapore) and Abdul Azeem (Legal Director, Dubai) explore where construction, reliability, and ESG disputes are emerging, and what operators can do to protect value in a fast-moving market.

The AI boom is transforming Southeast Asia’s digital economy – and it is doing so at speed. Live capacity in ASEAN is forecast to reach roughly 30GW within the next two to three years, led by surging demand in Singapore, Indonesia, Vietnam, Malaysia, and Thailand. Governments are pushing hard on digital transformation and energy efficiency, while operators and investors race to deploy capital. The result is an unprecedented build-out of data centre infrastructure across the region.

With this growth comes friction. Data centres are among the most complex, capital-intensive assets to deliver and operate. Projects involve cross-border stakeholders and sensitive technology under compressed timelines, strained supply chains, and rapidly shifting regulations. Power-hungry data centres used to train and run AI models may test the green energy transition, as energy demand sharply rises and the development of new renewable energy projects to power new initiatives struggles to keep pace. These dynamics create fertile ground for disputes that demand swift resolution, technical sophistication, and confidentiality. For many market participants, arbitration at the Singapore International Arbitration Centre (SIAC) has become the dispute resolution forum of choice.

WHERE WE SEE DISPUTES EMERGING

Disputes are no longer confined to conventional construction claims; they increasingly sit at the intersection of engineering performance, operational resilience, and ESG commitments, with direct revenue and reputational consequences.

Construction and delay risk remain the constant. Aggressive project schedules and liquidated damages clauses that can reach tens of thousands of dollars per day collide with on-the-ground realities: scarcity of experienced local contractors, power and water availability constraints, and bottlenecks in procuring transformers, switchgear, and advanced cooling systems. In a race-to-delivery environment, change management, interface risk, and design responsibility are frequent flashpoints that require careful contract drafting.

Infrastructure reliability is under the microscope. Booming demand for AI cloud services is driving the construction of power-hungry data centres, and the sprawling campuses that hyperscalers like Amazon, Google, Meta, and Microsoft are building will either syphon power from the grid or generate power on-site. Variability in grid stability across Southeast Asia heightens the importance of backup systems, redundancy, and clearly drafted service levels.

Meticulous records of any changes should also be kept. When technical disputes arise – particularly involving complex mechanical and electrical systems – arbitration is well-suited to adjudicate the issues due to the parties’ ability to nominate specialist arbitrators. Parties can appoint arbitrators and experts who understand tropical cooling strategies, redundancy tiers, PUE metrics, smart grid interconnection, and AI workload power densities – accelerating the hearing and improving the quality of the outcome.

ESG and regulatory compliance disputes are accelerating. While many new data centres are pivoting towards renewable energy generation, they still place significant strain on the grid. Water usage is an even more serious problem, as data centres use more water than they replenish, even if alternatives such as air-cooled chillers are used to stop equipment from overheating. We are already seeing three categories of ESG disputes: allegations of greenwashing, where sustainability claims diverge from operational reality; breaches of green covenants when PUE/WUE thresholds in service level or investment agreements are missed; and cost-sharing battles over retrofits or design changes required to keep pace with evolving regulation. As ESG clauses proliferate, they are becoming a core battleground in high-value arbitrations.

SIAC: THE REGIONAL FORUM OF CHOICE

SIAC has cemented its position as Asia’s leading arbitral institution for complex, cross-border disputes, and its rule set maps closely to the needs of the data centre sector:

• Emergency arbitrator relief is available for urgent situations, including preserving the status quo when a suspension or critical shutdown is threatened. In appropriate cases, applications may be heard on an expedited or without-notice basis, enabling parties to protect assets and service continuity.

• Expedited procedures allow faster resolution of time-sensitive, lower-value claims without sacrificing procedural fairness, helping to keep projects on track.

• Preliminary determination empowers tribunals to make final and binding decisions on discrete issues at an early stage, narrowing the scope of the dispute and reducing time and cost.

• Joinder and consolidation mechanisms are essential in multi-party environments involving owners, EPC contractors, M&E specialists, and critical suppliers, allowing related disputes to be heard together for procedural efficiency and consistent outcomes.

PRACTICAL TAKEAWAYS FOR OPERATORS, DEVELOPERS, AND INVESTORS

Southeast Asia’s data centre cycle will reward those who deliver fast, sustainably, and at scale:

• Sophisticated front-end drafting and disciplined project management are the best dispute-avoidance tools. Contracts should translate high-level performance concepts into measurable technical parameters, with clear allocation of design responsibility and change control. Service levels and ESG commitments must be realistic, data-backed, and aligned to grid realities, water constraints, and thermal management strategies, with balanced remedies and carve-outs for force majeure and regulatory change.

• Evidence wins technical disputes. Maintain rigorous records of design basis assumptions, commissioning data, PUE/WUE telemetry, load ramp profiles, and grid interaction events. When disputes arise, early engagement of independent experts and prompt preservation of operational data can be decisive.

• Choose dispute resolution provisions that fit the asset. SIAC arbitration offers clients a pragmatic route through complex disagreements, while keeping mission-critical infrastructure online, capital protected, and strategic plans on course.

Charles Russell Speechlys, charlesrussellspeechlys.com

HIGHER LOADS, FASTER BUILDS: SIX TRENDS DEFINING DATA CENTRES IN 2026

Engineers from Black & White Engineering examine the technical and operational shifts, from AI-driven design to sustainability-led engineering, that are reshaping data centre delivery as densities rise and power access tightens.

As AI and high-performance computing drive demand, data centre design is evolving at pace. High-density racks are now standard, cooling systems are being redesigned in months rather than years, and projects are growing in both scale and complexity across all regions.

The direction of travel is clear: higher densities, tighter integration between disciplines, and faster delivery under increasingly constrained conditions. In this article, the team at global engineering design consultancy Black & White Engineering outlines six trends shaping data centre delivery in 2026.

1. LIQUID COOLING BECOMES PART OF STANDARD DESIGN

Rather than a specialist solution, liquid cooling is now a mainstream consideration on new data centre projects. As rack densities exceed the limits of air cooling, direct-to-chip and rack-level liquid systems are moving from trials to planned deployment across full facilities.

The focus in 2026 will be on standardisation. Controls, safety systems, and maintenance processes must work consistently across mixed cooling environments and different service-level requirements. Performance will depend less on the cooling method itself than on how well it integrates with power, monitoring, and operations.

2. MANAGING THE REALITIES OF EXTREME RACK DENSITIES

Design requests above 100–200 kW per rack are becoming more common, but they are not a default position. Facilities are increasingly designed for modest average densities while retaining the ability to support extreme loads where required.

This is forcing changes across mechanical and electrical architecture, structural design, and cabling strategies to manage added weight, heat, and complexity. Commissioning is more demanding, redundancy strategies are under pressure, and operational tolerances are narrower.

The challenge for 2026 is maintainability at scale: delivering facilities that can support high densities while remaining serviceable, adaptable, and safe across a range of operating conditions.

3. FROM ONE-OFF BUILDS TO INDUSTRIAL-SCALE DELIVERY

Data centre development has shifted into an industrial phase. AI demand, hyperscale consolidation, and multi-site campus strategies are driving projects of a scale comparable to transport or utilities infrastructure.

Buildings that once delivered 4–12 MW are now components within multi-hundred-megawatt campuses, delivered in phases. This scale requires repeatable design, predictable cost, and reliable sequencing, rather than bespoke engineering.

Engineering teams are responding by productising core elements such as power systems, cooling plants, and white space, and capital markets are accelerating this shift. Alongside hyperscalers, infrastructure investors and pension funds now expect compressed schedules, standardised designs, and reduced delivery risk.

As a result, engineering focus is moving beyond drawings to supply-chain integration, modularisation, and factory-led manufacture. Digital engineering, configuration tools, and parametric MEP systems are becoming essential to support global, repeatable delivery.

4. POWER INNOVATION UNDER GRID CONSTRAINTS

Across all major regions, access to power has become the primary constraint on new capacity. Developers are engaging utilities earlier and exploring on-site generation as both a bridge and a buffer. Some facilities are also being designed to support the grid through energy storage and demand-side response.

Gas turbines and reciprocating engines remain the most practical near-term options. Hydrogen-ready systems and small modular nuclear are progressing through feasibility but remain non-standard. Former power station sites are also being reconsidered for co-located generation, including potential future nuclear use.

As noted in the IEEE’s Data Center Growth and Grid Readiness report (May 2025), the scale of expansion is placing data centres in the same category as major generation assets in terms of grid impact and regulation. Engineering, therefore, extends beyond the site boundary to grid interaction, protection coordination, and system stability.

Electrical systems increasingly need to ride through faults, exchange data with utility infrastructure, and operate as controllable assets rather than passive loads. Resilience is now measured in flexibility as well as uptime.

5. AI-DRIVEN OPERATIONS AND DESIGN

AI is becoming embedded across the data centre lifecycle. In design, automated BIM and internal tools are improving efficiency. In operation, machine learning models are already optimising airflow, pumping, and power distribution.

The next stage is system-wide integration. Converged platforms and live digital twins will allow continuous testing, optimisation, and forecasting of facility performance. This is changing operational skill requirements, with greater emphasis on data management, model oversight, and system integration.

Equipment suppliers are also embedding AI and IP-enabled interfaces into cooling, power, and control systems, feeding real-time data into central platforms. Facilities with high-quality, consistent data will be better placed to manage performance, maintenance, and reporting.

6. SUSTAINABILITY BECOMES A CENTRAL DESIGN DISCIPLINE

Sustainability is now a core design constraint rather than an overlay. High-density workloads are exposing trade-offs between energy and water efficiency, particularly where lower PUE can increase water use.

Design decisions now routinely consider embodied carbon, materials, water stewardship, and heat reuse alongside operational efficiency. Approaches include modular construction, alternative structural materials, on-site generation, and improved heat-recovery strategies, with liquid cooling supporting higher efficiency in some applications.

Regulatory and investor scrutiny continues to increase, particularly in Europe. Projects that can demonstrate measurable progress on energy, water, and carbon performance are better positioned to secure planning consent and funding.

OUTLOOK FOR 2026

Data centre development is being shaped as much by physical limits as by capital. Power and cooling demands are driving rapid technical change, while industrial-scale investment is redefining how facilities are designed and delivered.

Reliability remains essential. In 2026, adaptability will be just as critical – namely, the ability to respond to density, pace, and the expectations of a global infrastructure market.

Black & White Engineering, bw-engineering.com

THE DATA CENTRE SUPERCYCLE: STRATEGIC IMPERATIVES FOR A RAPIDLY EVOLVING LANDSCAPE

Daniel Thorpe, Head of Data Centre Research at JLL, assesses how accelerating AI adoption is reshaping data centre demand, investment, and innovation in 2026.

As we kick off the year, the existential impact of technology continues to be a key focus for individuals, corporations, and governments. While rapidly developing technology and its potential impact dominates the conversation, it is the infrastructure supporting the technology which will determine the speed and success of the changes.

Demand for tech infrastructure is growing at an exponential rate, necessitating significant investment and innovation. At the heart of this are data centres, which are experiencing an investment supercycle requiring up to $3 trillion (£2.19 trillion) of investment by 2030. Nearly 100 GW of new data centres will be added between 2026 and 2030, doubling global capacity. The scale of this growth creates significant challenges. However, these challenges also have the potential to create opportunities for investors and corporations alike, which, if capitalised on, could have a profound impact across industries.

AI is at the heart of almost all conversations about transformative technology; this is particularly true of data centres. As AI is implemented more into the day-to-day operations of organisations and individuals, a challenge emerges. In 2025, AI represented 23% of global data centre workload; by 2030, JLL estimates it could more than double to represent 50%. This is on top of the traditional data centre workloads, which have been steadily increasing for the last 10 years.

THE FIRST CHALLENGE: POWER

A major challenge that comes with this is the demand for power. The average wait time for a grid connection in primary data centre markets exceeds four years. In Frankfurt, London, and Paris, the average wait is seven years – and in Amsterdam it’s 10. This represents a very significant delay for organisations already anxious not to fall behind technological development.

As a result, it is expected that data centre operators will increase behind-the-meter power arrangements and explore colocated battery storage to combat these delays. For corporates, this means being mindful that power – over location or cost – will be the primary criterion for selecting sites, and that securing power early is a key consideration when making real estate decisions.

Construction delays will also continue to affect timelines into the second half of the decade. Over half of projects faced delays in 2025, so not considering the source of power could put more obstacles in an already turbulent path. For investors, the challenge of power demand shifts development risk. Sites without secured power are at significant risk of being stranded, regardless of zoning or demand.

THE SECOND CHALLENGE: COST

Another consideration is cost. The rapid expansion of the industry is creating extended lead times, limited availability of skilled trades, and escalating development costs. From 2020 to 2025, the average global data centre construction cost increased from $7.7 to $10.7 million (£5.6 to £7.8 million) per MW, equating to 7% CAGR. For 2026, JLL has forecast it will increase by a further 6% to $11.3 million (£9 million) per MW.
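For readers who want to check the arithmetic, the quoted growth rate follows directly from the figures above:

```python
# Compound annual growth rate of data centre construction cost per MW, 2020-2025
start, end, years = 7.7, 10.7, 5           # $ million per MW, from the figures above
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR 2020-2025: {cagr:.1%}")        # ~6.8%, i.e. roughly 7%

forecast_2026 = end * 1.06                  # a further 6% rise forecast for 2026
print(f"2026 forecast: ${forecast_2026:.1f} million per MW")  # ~$11.3 million
```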

This change is facilitating a shift in how construction is approached. By 2030, annual sales of modular systems and micro data centres could reach $48 billion (£35.2 billion). This marks a shift from build-to-suit development towards assemble-at-scale. This trend accelerates deployment and enables geographic expansion, creating significant opportunities for manufacturers and developers who can deliver these solutions.

Cost also means it is pertinent to evaluate different approaches to developing long-term infrastructure capacity plans. Hybrid portfolios, where multiple types of infrastructure are used, will likely become standard practice in the future. One example of their efficiency comes when considering AI workloads.

In 2025, 60% of AI’s workload was represented by AI training. By 2030, this proportion is expected to shrink to 26%, with AI inference taking up the rest. While AI training demands ultra-high-density, specialised GPU clusters and advanced cooling, AI inference requires geographical distribution to reduce latency and serve users effectively. Completely offloading this workload carries a significant risk of compromising data security. Adopting a hybrid approach means reducing the need for specialised hardware, optimising performance, and improving a digital infrastructure’s flexibility, all while maintaining data resilience and compliance.

FLEXIBILITY IS KEY

Major industry shifts pose significant challenges for companies and investors, but they also create opportunities that can be capitalised on through innovative thinking, pragmatic prioritisation, and a focus on optimisation. For corporates, securing power early becomes a key priority. Multi-year waits for grid connections can delay an already obstructed construction timeline. This necessitates parallel engagement with utilities and behind-the-meter generation partners alongside real estate decisions.

For investors, access to power can have a big impact on development risk, and power access is becoming concentrated among fewer players, raising the barrier to entry.

Taking a flexible approach to data infrastructure is also key. Data centre demands continue to shift, and hybrid portfolios are becoming the default option for improving optimisation and lowering cost, all while maintaining data security.

Investors should prioritise AI-capable and retrofit-ready assets, specifically facilities capable of supporting liquid cooling and higher power densities per rack. With AI workloads set to reach 50% of all data centre workloads by 2030, assets not capable of handling these workloads will miss out on this segment of growth.

Success in this transformative period will hinge on strategic foresight, adaptability, and an openness to flexible solutions. The data centre supercycle will be a profoundly transformative time for all industries, and it is by relying on the above principles that these challenges can be turned into opportunities.

23-24 SEPTEMBER 2026 NEC BIRMINGHAM, UK

A GAME-CHANGING EVENT FOR THE ELECTRICAL CONTRACTING INDUSTRY

Covering everything from Lighting and Wiring Accessories to EV and Renewables.

ECN Live! 2026 is an industry game-changing trade show for electrical contractors, electricians, buyers, specifiers, and key stakeholders in the electrical contracting sector. Gain valuable insights into the most important aspects of electrical contracting, such as lighting, circuit protection, cable management, HVAC, fire safety, EV, renewables, solar and more! ECN Live! will be co-locating with EI Live!, the UK’s only dedicated smart home show – an established event now in its 15th year. Given the synergy between these two markets, electrical contractors and electricians will gain valuable knowledge about the world of smart home integration and add another string to their bow as installers. Don’t miss this innovative co-located event that defines the standard for connection, knowledge sharing, and solutions tailored for today’s electrical professionals.

BUILDING RESILIENCE FROM THE GROUND UP: WHY DC-POWERED LIGHTING MATTERS FOR MODERN DATA CENTRES

Ton van de Wiel, Global Segment Manager, Intelligent Buildings at Signify, outlines why DC-powered LED lighting is emerging as a key consideration in making data centre infrastructure more efficient and resilient.

The digital services that underpin modern economies – from media streaming to cloud computing – depend on a rapidly expanding global network of data centres. These facilities are not only critical to digital connectivity; they represent significant sources of employment, infrastructure investment, and tax revenue through construction and long-term operation.

Today, data centre operators face a convergence of challenges. Capacity requirements are accelerating due to AI-driven workloads, energy prices are rising, and expectations around sustainability and carbon reduction are becoming more stringent. In response, the industry is re-examining its electrical infrastructure. Direct current (DC) power architectures, once limited to niche applications, are gaining traction as a foundation for higher efficiency and greater operational resilience.

Within this shift, lighting – often treated as a peripheral system – can play a strategic role. DC-powered LED lighting combines high energy efficiency with relatively low implementation risk, making it an effective starting point for broader DC adoption. Beyond energy savings, lighting can also function as an intelligent layer within next-generation data centre infrastructure.

HOW POWER ARCHITECTURES ARE CHANGING

Operating a data centre requires tight coordination between IT equipment, networking, cooling, security, and electrical distribution. Historically, alternating current (AC) has been the default for power distribution. However, as facility scale and power densities increase, electrical efficiency has become a primary design concern.

Early facilities relied on 48V DC for backup systems – safe but capacity-constrained. This gave way to 230/277V AC distribution, followed by 380V DC for internal systems. Today, the extreme power demands of AI servers are driving another transition towards 650V DC and even 800V DC architectures.

According to the Open Direct Current Alliance (ODCA), 650V DC represents the optimal level for building-wide distribution, balancing efficiency with safety, while organisations such as NVIDIA and the Open Compute Project are investigating 800V DC. While promising for high-power IT loads, these higher voltages do not yet deliver the same system-wide efficiency benefits as a facility-level 650V DC approach.

Outside the data centre sector, industrial sites are already deploying 650V DC systems to improve energy efficiency and resilience. One key advantage is the ability to capture regenerative energy from motor drives and robotics – energy that would otherwise be dissipated as heat. Because lighting is a continuous base load, it can readily absorb this recovered energy, reducing grid dependency and operating costs.

Integrating lighting, motors, renewables, and storage on a shared DC grid reduces conversion losses, cuts copper usage through fewer conductors, and lowers transmission losses compared with 400V AC systems. When paired with solar PV and batteries, DC grids also improve self-consumption, backup capability, and flexible energy management.

WHAT’S DRIVING THE MOVE?

The momentum behind DC power in data centres is rooted in both engineering logic and economics:

• Lower conversion losses — Conventional AC systems require multiple conversion steps, resulting in energy losses of up to 18%.

• Alignment with IT equipment — Servers and GPUs operate natively on DC power.

• Simpler renewable integration — Solar panels and battery systems produce DC, enabling more efficient connections.

• Reduced system complexity — Fewer transformers and rectifiers mean simpler installation and improved reliability.

• Preparedness for AI growth — Rising AI workloads are accelerating the shift towards DC-based power systems.
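
To make the "up to 18%" figure concrete, here is a purely illustrative compounding example – the stage efficiencies are assumptions, not measured values. A transformer, rectifier, inverter, and server power supply operating at 97%, 95%, 95%, and 94% efficiency respectively give an end-to-end efficiency of

\[ 0.97 \times 0.95 \times 0.95 \times 0.94 \approx 0.82, \]

meaning roughly 18% of incoming energy is lost before it reaches the IT load. A DC architecture removes several of these conversion stages, which is where the headline saving comes from.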

DC power is therefore not just an alternative distribution method, but a pathway to smarter, more resilient infrastructure.

LIGHTING AS THE FIRST STEP

Among all building systems, lighting is often the most practical candidate for early DC adoption. Connected LED lighting allows operators to pilot DC distribution with limited risk before extending it to mission-critical IT loads. The benefits are tangible:

• Capital expenditure savings — DC lighting cables reduce copper use by 40%. Three-conductor DC cables (L+, L-, PE) can transmit the same power as five-conductor 400V three-phase AC cables.

• Operational cost reductions — With only two current-carrying conductors, DC lighting avoids approximately 33% of cable losses compared with three-phase AC at the same current (a rough check follows this list).

• Improved resilience — DC lighting can operate directly from on-site solar generation or battery storage, strengthening microgrid performance during outages.
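
A rough check on the 33% figure, assuming equal conductor resistance R and equal current I in every conductor: resistive losses in the two current-carrying DC conductors total 2I²R, against 3I²R across the three phases of an AC feed, so

\[ \frac{2I^{2}R}{3I^{2}R} = \frac{2}{3}, \]

a reduction of roughly one third in cable losses for the same current.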

DC-compatible luminaires and components are already commercially available. For example, Signify offers a 100W Xitanium LED driver designed for 620–750V DC operation, integrated into the Pacific LED Gen5 and Maxos Fusion luminaire families. These solutions achieve up to 165 lm/W efficacy and can be paired with systems such as Signify Interact and Philips Dynalite. Driver-level efficiency can exceed 95%, with future potential to reach 200 lm/W through ultra-high-efficiency LED modules.

SUSTAINABILITY AND ESG OUTCOMES

DC-powered lighting supports measurable sustainability objectives:

• Lower carbon emissions through reduced conversion losses and material usage

• Support for certifications such as LEED Zero and BREEAM

• Energy optimisation with connected lighting systems, cutting lighting energy use by up to 75%

For hyperscalers like Amazon Web Services and Microsoft Azure, as well as colocation providers, these outcomes translate directly into stronger ESG reporting and progress towards carbon neutrality.

DC lighting can also be implemented incrementally. Some facilities deploy rack-level DC lighting while retaining an AC backbone.

Others adopt facility-wide DC grids that integrate lighting, renewables, storage, and IT infrastructure.

In larger deployments, centralised emergency lighting connected to the DC backbone ensures continuous illumination during outages, reinforcing safety in mission-critical spaces.

A STRATEGIC ROLE FOR LIGHTING

As operators prepare for the next phase of digital expansion, DC-powered lighting offers a practical, high-impact entry point into efficient, renewable-ready DC infrastructure.

Modern connected lighting systems extend far beyond illumination. With embedded sensors measuring occupancy, daylight, temperature, humidity, and air quality, luminaires form a dense, facility-wide sensing network without the need for additional hardware. Using open protocols such as DALI, BACnet, and MQTT, DC lighting networks integrate with building management systems and DCIM platforms, enabling predictive maintenance, enhanced operational intelligence, and optimised cooling and space utilisation.

By simplifying cabling, reducing losses, and enabling intelligent energy management, DC lighting transforms illumination from a passive load into an active contributor to resilient, sustainable data centre operations.

Signify, signify.com

BUILDING THE FUTURE, FACING THE NEIGHBOURS: GIGAWATT DATA CENTRES IN 2026

As AI leads to larger data centres, Tate Cantrell, Global CTO at Verne, warns that design- and construction-phase decisions on power and cooling may shape what communities allow.

The data centre industry may soon confront a deeper question than how to prepare for hyper-dense, gigawatt-scale data centres in 2026, namely: whether the social licence to build such facilities is being withdrawn. The construction of hyperscale campuses risks colliding with a public whose perception of power and water consumption, in particular, is increasingly negative. If that happens, a gap may open between what designers can achieve with the latest technology and what communities are willing to allow in practice.

The sector has entered an era of scale that would have seemed implausible only a few years ago. Gigawatt-class sites are being built to train and deploy AI models for the next generation of online services, but their impact extends beyond the data centre industry. The communities that host these AI factories are being transformed too.

Physically, these are engineered landscapes: industrial campuses spanning hundreds of acres, integrating data halls with substations, power distribution, and cooling infrastructure. As these sites grow in visibility, public awareness of the resources they consume is growing too.

Power is one area receiving attention. Data centre growth is coinciding with both tighter energy markets and the perception that hyperscale operators are competing for scarce grid capacity or diverting renewable power that might support local decarbonisation. There is no shortage of coverage suggesting that data centres are pushing up energy prices, too. The perception has already had consequences. In the United Kingdom, for example, a proposed 90-megawatt facility in Buckinghamshire was challenged in 2025 by campaigners warning that residents and businesses would be forced to compete with a “power-guzzling behemoth” for electricity. In Belgium, grid operator Elia may limit the power allocated to operators to protect other industrial users.

Water has become another focal point. Training and inference models rely on concentrated clusters of GPUs with rack densities that exceed 100 kW. The heat generated goes beyond the limits of air-based cooling and has driven the shift towards liquid systems, which are more efficient and better suited to high-density workloads. Yet, “liquid cooling” is often interpreted by the public as “water cooling”, feeding a perception that data centres are draining lakes, rivers, and underground sources to cool servers and then discarding the water. In practice, that is rarely the case. Many operators use non-water coolants and closed-loop systems designed to conserve resources, but perception remains a problem.

TACKLING THE MISCONCEPTIONS

Addressing these concerns will require operators to re-think their role within communities. They must demonstrate that growth is supported by grids capable of meeting demand without destabilising supply or increasing costs. They need to show that greater density can be achieved without a disproportionate environmental impact. How far these outcomes can be delivered is shaped during design and construction.

In the case of power, engineers and planners must determine how a facility’s electrical architecture connects to the grid, the way substations and distribution systems are configured for predictable demand, and the degree of flexibility that can be designed into power draw. When these considerations are embedded into the supply infrastructure, data centres can scale compute capacity while maintaining stability and reliability in local grids.

Cooling takes foresight, too. The most effective route to sustainable cooling for high-density racks is to integrate liquid cooling into the facility’s construction. Liquid cooling shapes the physical design of a data centre; piping routes, floor loading, structural requirements, and heat extraction influence decisions on building shell and mechanical spaces. Planning for this early means operators can manage heat efficiently and sustainably during operations. The addition of cooling-ready pipework and capped connections positions the facility for sequenced growth.

That alignment will be tested as another wave of high-density deployments arrives in 2026. Based on NVIDIA’s product roadmap, we have a sense of what’s coming: each generation of hardware delivers more power and heat, requiring more advanced infrastructure. NVIDIA’s Chief Executive, Jensen Huang, introduced the DSX data centre architecture at GTC 2025 in Washington, D.C. – a framework designed to make it easier for developers to deploy large-scale, AI-ready facilities. We should expect other silicon providers to follow suit.

One consequence will be a stronger push towards standardisation across the supply chain. Companies such as Vertiv, Schneider Electric, and Eaton are aligning around modular power and cooling systems that can be integrated easily. NVIDIA, AMD, and Qualcomm, meanwhile, have every incentive to encourage that standardisation. The faster infrastructure is deployed, the faster they can deliver the compute required using their chips. Standardisation will simplify delivery and accelerate time-to-deployment.

EFFICIENCY AND EXPANSION

Behind all of this sits the transformer model. Built around attention mechanisms that help the model identify and reuse relevant context across the input, it has become the workhorse behind today’s generative AI for language, code, and other complex data at scale. The downside is energy intensity, both in training and in the per-token cost of inference. I predict 2026 will bring step-change improvements in joules per token. Those gains are more likely to come from compounding algorithmic and systems optimisations rather than from a new, single architecture.

The technical roadmap for 2026 is relatively clear: denser racks, wider adoption of liquid cooling, and greater standardisation. The social roadmap is less certain. If 2025 was the year of scale, 2026 may be the year that scale is tested against public tolerance. Operating at gigawatt scale will depend not only on vision and investment, but on the foresight applied during design and construction. Done right, we won’t just be building sustainability and efficiency from the ground up; we will be building community trust, too.

Verne, verneglobal.com

THE SILICONE ADVANTAGE: WHY NEXT-GEN DATA CENTRE CONSTRUCTION STARTS AT THE ROOF

Errol Bull, Application Development Leader at Momentive Performance Materials, argues the case for reflective silicone coating as the preferred choice of roofing material for the data centres of tomorrow.

In the high-stakes world of data centre construction, the “envelope” is often overshadowed by the “engine”. While engineers and developers naturally focus on server density, liquid cooling loops, and power redundancy, the physical shell of the facility remains the first line of defence against the elements. Despite being frequently overlooked, the building envelope can have a significant impact on operational efficiency.

As the industry moves towards a landscape dominated by AI-ready infrastructure and increasingly dense compute racks, every choice matters for thermal management – and that is especially true of roofing materials. Among the roofing choices available, reflective silicone roof coatings address three of the primary challenges in modern facility management: cooling costs, risk reduction, and asset longevity.

SOLVING THE ENERGY EQUATION

Cooling is often one of the largest operating expenses for data centres, sometimes accounting for up to 40% of a facility’s total energy consumption. In the push for a lower power usage effectiveness (PUE) rating, construction teams often look inward by optimising airflow or upgrading HVAC units. One effective way to manage heat gain is to help prevent it from entering the building in the first place.

Dark roofing materials can act as massive heat sinks, absorbing solar radiation and transferring that thermal load directly into the building structure. Reflective silicone coatings are designed to mitigate this. By creating a “cool roof” environment, these coatings may reduce cooling energy expenses by an estimated 10–50%, depending on the environment. This can potentially translate to a direct 7–15% reduction in annual overall cooling costs.

From a construction standpoint, this helps more than just the electricity bill. By lowering the ambient thermal load, silicone coatings can help reduce the strain on HVAC systems. This could allow developers to downsize cooling infrastructure during construction or, for existing facilities, potentially extend the lifespan of equipment – thereby deferring significant capital expenditure.
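
A back-of-envelope example shows how a roof-level saving flows through to the facility level (the 40% cooling share is the figure quoted above; the 25% cooling-energy reduction is an assumed mid-range value):

\[ 0.40 \times 0.25 = 0.10, \]

so a 25% cut in cooling energy on a site where cooling accounts for 40% of consumption trims total facility energy by roughly 10%.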

PROTECTING MISSION-CRITICAL ASSETS

In a data centre, roof failure is more than a maintenance issue; it is a significant business risk. Water leaks above a server hall can lead to downtime that costs dearly in lost time, money, and trust.

Liquid-applied silicone technology provides a seamless membrane, creating a robust and durable barrier against water ingress. Because it is applied as a liquid, it can fill cracks and cover seams, creating a continuous water barrier. Applied over the roofing substrate, this offers a “belt and braces” approach to protecting the building, ensuring the envelope is as resilient as the digital infrastructure it protects.

THE SOLAR SYNERGY

As data centres increasingly integrate on-site solar panels, a new challenge has emerged: the lifespan gap. Solar panels typically have a service life of 25 to 30 years. However, many traditional roofing membranes may require major maintenance or replacement every 7–10 years. This creates a logistical challenge where functioning solar arrays must be decommissioned and removed to repair the roof beneath them.

High-grade alkoxy silicone coatings can bridge this gap. They provide seamless protection that is designed to withstand the thermal shock and movement inherent in solar racking systems. When specified correctly, these coatings can offer a service life that aligns with the lifespan of the solar panels, creating a unified, long-term asset lifecycle that simplifies facility management.

A FIT FOR ANY ROOF

Sustainability in data centre construction is often measured in carbon offsets, but circular construction – a key step towards building sustainability – begins with preserving existing structures. Traditional roof replacements can be disruptive, involving noisy “tear-offs” that create dust and vibration – two of the greatest enemies of sensitive server equipment.

Silicone coatings are compatible with almost all data centre roof types, including metal, single-ply, and bitumen. This enables a “restore rather than replace” approach. By avoiding a full tear-off, construction teams can save time and money while removing the risk of exposing the interior hardware during renovation. It represents a cleaner, quieter, and often more cost-effective way to achieve a “new” roof standard without disrupting uptime.

A STRATEGIC INVESTMENT

In an era where data centre capacity is being built at a record pace, there is a temptation to focus on speed over material selection.

However, the building envelope should be viewed as a performance-enhancing component of the facility and not just a roof over some very expensive circuitry.

A reflective silicone coating is a relatively small, strategic investment during construction or renovation. It provides long-term thermal performance, helps secure the building envelope against failure, and provides a clear pathway to a more sustainable PUE. As we design the next generation of data centres, silicone should be considered an essential design choice for a future-proofed facility.

Momentive, momentive.com

BENDBRIGHTXS 160 µm: A MAJOR ADVANCE IN NETWORK MINIATURISATION

The world’s first 160 µm bend-insensitive fibre – higher capacity, smaller footprint

Unlock record cable density for today’s space-constrained ducts, buildings, and data centres with the BendBright XS 160 µm single-mode fibre.

By shrinking the coating while maintaining a 125 µm glass diameter and full G.652/G.657.A2 compliance, it delivers faster, more cost-effective deployments that don’t disrupt established field practices.

• Enables slimmer, lighter, high-count cables that install faster and travel further when blown

• Packs dramatically more fibres into the same pathway thanks to a >50% cross-sectional area reduction versus 250 µm fibres

• Ensures full backward-compatibility with legacy single-mode fibres for seamless splicing and upgrades

• Provides superior bend performance and mechanical reliability with Prysmian’s ColorLockXS coating system

• Ideal for FTTH/X access, metro densification, and hyperscale/data centre interconnects where every millimetre counts

Ready to boost capacity and shrink your network footprint?

Connect with our experts, request samples, or download the BendBright XS 160 µm technical data today.

WHY REALITY CAPTURE REMAINS ESSENTIAL FOR THE NEXT GENERATION OF DATA CENTRES

Joao-Carlos Pereira Fialho, Pre-Sales Specialist at Cintoo, evaluates the role reality capture can play in keeping processes organised and on track during the construction and ongoing operation of data centres.

As AI adoption accelerates, data centres are storing and managing increasingly complex workloads. The rising demand for data centre capacity, which could more than triple by 2030 (according to McKinsey), and the ability for data centres to quickly scale up performance as data requirements increase presents a major opportunity for those building and operating these mission-critical facilities.

From specialist data centre contractors to owner-operators managing entire facilities over the long term, the same priorities persist: to deliver scalable, flexible, and reliable infrastructure. This is where reality capture technology becomes indispensable – not just for documentation, but to empower smart decision-making across the entire facility lifecycle.

KEEPING UP WITH THE DATA CENTRE BOOM

High-density racks, intricate cabling systems, advanced cooling infrastructure, and 24/7 uptime requirements demand absolute precision in planning, construction, and maintenance. Reality capture provides a crucial foundation, capturing the built environment in high-resolution 3D for design validation, clash detection, quality assurance, and ongoing operational visibility. Whether laying out a new facility, retrofitting existing infrastructure, or reconfiguring for optimal performance, accurate data from laser scans is a key ingredient for success.

General contractors and specialist data centre construction firms can benefit from:

• Using scan-to-BIM workflows to validate construction against the design intent

• Creating precise, as-built documentation to enable rapid QA/QC checks and reduce costly rework

• Staging renovations or expansions without disrupting operations through virtual walkthroughs and remote coordination

Meanwhile, owner-operators managing assets long term can benefit from:

• Leveraging scan data for asset tracking, maintenance planning, and retrofitting strategies

• Feeding point cloud data into digital twins to enable spatial analytics and operational simulation, such as thermal modelling and airflow planning

• Keeping a digital record of infrastructure changes over time to inform future upgrades

SMARTER COOLING REQUIRES SMARTER DATA

With efficient thermal management vital for sustaining processing power, maintaining uptime, and prolonging equipment lifespans, high-resolution laser scans from reality capture devices, combined with a thermal sensor, can deliver precise data that reveals how cooling systems interact with the building, equipment, ducts, and more. This insight helps operational teams review and assess hot and cold spots, identifying the root cause of space allocation issues and pinpointing areas of inefficient airflow or potential obstructions to cooling performance.

Reality capture also plays a strategic role in the design phase of a data centre. By feeding scan data of existing conditions into simulation software, engineers can create temperature-optimised layouts, determine the most effective placement for high-density racks, and ensure energy-efficient cooling from day one.

Integrate this data with real-time visualisation tools and it becomes even more powerful. In doing so, operators can reduce energy waste and operational expenditure while extending the lifespan of mission-critical systems.

SCALING WITH CERTAINTY

Often, when it comes to retrofitting or expanding data centres, only 2D plans (or out-of-date 3D models) of the facility exist. Legacy infrastructure may be poorly documented, with maintenance records missing and critical equipment unmonitored, introducing avoidable risk into the planning process.

Reality capture fills this gap by delivering up-to-date 3D scans of the as-built environment, including hard-to-reach areas like MEP systems behind walls and ceilings or buried underground. Teams can remotely map out cable rerouting and equipment reconfigurations without disrupting current operations.

Furthermore, by integrating this scan data with a digital twin, operators can conduct clash detection and spatial validation during the design and pre-construction phases. Once construction is complete, this same digital twin becomes the go-to for ongoing monitoring, asset tracking, and facility optimisation. This results in greater planning accuracy and fewer on-site surprises, leading to a measurable reduction in rework and delays, as well as their associated costs.
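
As a simple illustration of the kind of spatial check a scan-fed digital twin enables, the sketch below flags scanned points that fall inside a volume reserved in the design model. It is a toy Python example with invented coordinates, not Cintoo's implementation; production tools work on full point clouds and meshes with proper tolerances.

import numpy as np

# Toy clash check: flag as-built scan points that intrude into a design-reserved volume.
def clash_points(points, box_min, box_max, clearance=0.05):
    """Return scanned points lying inside a design volume expanded by a clearance (metres)."""
    lo = np.asarray(box_min) - clearance
    hi = np.asarray(box_max) + clearance
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return points[inside]

# Example: random stand-in for a scan, checked against the space reserved for a new duct run
scan = np.random.default_rng(0).uniform(0, 10, size=(100_000, 3))
duct_min, duct_max = (2.0, 1.0, 3.0), (2.5, 1.4, 9.0)
print(len(clash_points(scan, duct_min, duct_max)), "points clash with the reserved duct volume")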

With the cost of outages escalating, reality capture also plays a significant role in minimising the risk of these happening during renovations and upgrades. By providing accurate, high-resolution scans of the facility, teams can plan every stage of the project in detail before work physically begins.

Accessing reality capture data remotely facilitates virtual walkthroughs, allowing stakeholders to finalise logistics and validate plans without stepping foot on site. With this level of insight and coordination, renovation work can be strategically staged to maintain business continuity.

FROM SILOED DATA TO TEAM-WIDE VISIBILITY

While reality capture is powerful, the data produced is massive and teams don’t always know how to manage it or turn it into actionable insights. Unused or siloed data, accessible to only one individual at a time, represents a huge missed opportunity.

Cloud-based platforms enable teams to access, visualise, and collaborate on reality capture data remotely. With the ability to virtually explore a site, remote stakeholders – from facility managers and engineers to IT and construction crews – gain the same level of spatial awareness as those physically on the ground. This transparency improves communication, accelerates decision-making, and significantly reduces errors and costly rework.

On a recent data centre construction project, Bouygues Energy & Services (EQUANS) found that by comparing laser scan data from the site with its 3D models, it was able to avoid revisits and detect potential clashes early in the design process. The company also minimised travel time and improved collaboration by sharing this data with stakeholders remotely, enabling faster decision-making, saving time, and reducing costs. The ability to store this data, and add new scans as the project progresses, provides a living blueprint which informs future upgrades and enhances operational planning.

REALITY CAPTURE’S ROLE IN FUTURE-PROOFING DATA CENTRES

For data centre operators, reality capture is a strategic asset which is proven, accessible, and easy to integrate. By delivering a data-rich view of existing infrastructure, reality capture enables smarter and faster decision-making across every stage of a facility’s lifecycle, enhancing planning accuracy, optimising operational efficiency, and reducing costly risks.

Cintoo, cintoo.com

RELIABLE BACKUP POWER SOLUTIONS FOR DATA CENTRES

THE 20M61 GENERATOR SET
UP TO 6250 kVA
BEST INDUSTRY LEAD TIME
PRE-APPROVED UPTIME INSTITUTE RATINGS

ISO 8528-5 LOAD ACCEPTANCE PERFORMANCE

BAUDOUIN: SOLVING THE DESIGN DILEMMAS OF MODERN DATA CENTRES

Baudouin presents its dedicated data centre genset range, focusing on high power density, Tier-aligned reliability, shorter delivery times, and HVO-ready operation for lower-carbon backup generation.

In the global data centre landscape, power continuity is the ultimate benchmark of operational excellence. For consultants and design engineers, the challenge is three-fold: maximising power density, ensuring strict compliance with availability standards, and meeting increasingly aggressive deployment schedules.

For over a century, Baudouin has engineered power solutions for critical infrastructure. Today, it is redefining backup generation for the digital age with purpose-built gensets delivering up to 6250 kVA, specifically dedicated to data centre applications.

THE AVAILABILITY CHALLENGE: MOVING BEYOND G3

For a data centre designer, every millisecond counts. Grid fluctuations must never impact the IT load, and transient response is the non-negotiable metric.

How does Baudouin guarantee optimal transient performance? Its gensets are engineered for exceptional voltage and frequency stability, even under massive load impacts. They don’t just meet ISO 8528-5 G3 requirements; they are pre-approved by the Uptime Institute. This significantly streamlines the certification process for Tier III and Tier IV installations, giving consultants the certainty that their infrastructure will meet the world’s strictest reliability criteria.

SPACE OPTIMISATION: MORE kVA, LESS FOOTPRINT

Floor space is a premium, high-cost resource in any site design. Engineers are constantly tasked with fitting more power into tighter envelopes.

How can we maximise kVA per square metre? Baudouin’s dedicated data centre range, covering 2000 to 6250 kVA, offers class-leading power density. By utilising intelligent electronic fuel injection and high-efficiency cooling architectures, it enables engineers to reduce the physical footprint of the engine room.

Furthermore, structural rigidity is ensured by a reinforced H-Beam chassis, while the NVH (noise, vibration, harshness) architecture is optimised to minimise acoustic impact and vibration, simplifying site integration in sensitive environments.

THE ‘TIME-TO-MARKET’ PRESSURE

Global supply chain bottlenecks are often the primary obstacle for consultants facing strict commissioning dates. A design is only as good as the equipment’s arrival date.

Can delivery lead times be reduced without sacrificing customisation? This is where Baudouin’s agility becomes a strategic advantage. Its production capability allows for short delivery times – under six months for specific configurations. Whether designing for single-unit deployments or multi-MW clusters, its systems remain flexible to adapt to specific site conditions or hybrid configurations.

FUTURE-PROOFING THROUGH SUSTAINABILITY

Decarbonisation is now a core requirement in every data centre RFP. To meet this mandate, Baudouin’s entire range is HVO-ready. This allows designers to specify renewable diesel solutions immediately, slashing the carbon footprint of the backup power system without any compromise on performance, durability, or warranty.

BUILT TO LAST

As data becomes the world’s most critical resource, the systems protecting it must be uncompromising. Baudouin’s century of mechanical expertise, combined with continuous innovation in power generation, delivers one clear promise to industry experts: reliability at the speed of data.

Baudouin, Built to Last.

Baudouin, baudouin.com

THE DATA CENTRE POWER CRUNCH AND THE NUCLEAR COMEBACK

Adhum Carter Wolde-Lule, Director at Prism Power, puts forward small modular reactors as a solution to the ongoing grid and power constraints caused by the rise of high-intensity data demand.

AI acceleration is driving exponential growth in data centre energy demand; global data infrastructure is expanding faster than grids can handle. Meanwhile, governments and utilities are realising that you can’t scale a digital economy without building out power infrastructure.

The data centre load surge means that hyperscale facilities now consume more than 80 MW each – enough to power a small city. The electrification of everything – from logistics to servers – intensifies the pressure: demand is outpacing traditional grid capacity, forcing utilities and corporations to rethink their organisations. While renewables are essential to decarbonisation, they’re weather dependent and not designed for 24/7 uptime. Data centres, AI clusters, and cloud platforms need firm, always-on energy, not intermittent supply. Enter small modular reactors (SMRs), which are clean, compact, and constant. SMRs offer steady, predictable baseload without fossil fuels and they complement renewables, helping balance variability and strengthen grid resilience. This also supports corporate net zero targets whilst maintaining operational reliability, which is crucial for uptime-critical facilities.

Atop this, there exists the reality of grid connection delays. Even fully funded, consented data centre projects are often held up 18 to 36 months waiting for power. It’s becoming the defining constraint on digital infrastructure delivery: not finance or technology, but the time it takes to connect.

THE GROWING APPEAL OF SMRs AS A SOLUTION

This context helps explain why co-located/campus-level and modular generation is now attracting so much attention. SMRs in particular represent a shift towards reducing dependency on overstretched national grids and aligning more favourably with the distributed nature of modern data infrastructure.

SMRs invert the traditional energy model: instead of massive plants far from load centres, they bring generation closer to demand. This reduces the need for billion-pound grid upgrades, lowers transmission losses, and improves efficiency. Over time, SMRs could form the backbone of decentralised clean power systems.

Tech giants are leading the charge, with AWS, Microsoft, and Google partnering with nuclear providers to secure dedicated, long-term power for data centres. This marks a strategic pivot, with energy sourcing now part of tech’s competitive advantage and becoming a tangible differentiator for uptime and sustainability.

The debate is shifting from cost per kilowatt-hour to cost per uptime, with reliability and predictability the new premium. For AI and data-driven industries, energy security is quite simply business continuity.

This corporate momentum will accelerate market confidence and regulatory momentum for SMRs, but there are still hurdles to overcome. Licensing for new reactor designs and supply chain investment need to be fast-tracked. At the same time, there are lingering concerns over waste management and safety, despite the fact that advanced, passive SMR safety systems reduce operational risk.

Consequently, in the short term, integration is the likely path, with SMRs and renewables forming part of a hybrid, bridging power mix that includes solar, wind, and storage.

OPPORTUNITIES IN THE UK MARKET

The future of data centre energy will be diversified, decentralised, and decarbonised. SMRs, combined with renewables and optimisation, could redefine how digital infrastructure is powered. Energy strategy is now climate strategy – and both are becoming competitive differentiators.

In the UK, there’s a clear opportunity emerging, with energy, industrial, and tech policy now being aligned for the first time in decades. With ‘AI Zones’ integrating power and data infrastructure, next-generation energy strategy (including nuclear) could make the UK a leader in pairing clean, reliable power with digital growth, provided grid reform keeps pace.

Apart from being a major opportunity for British engineering and export leadership in clean tech, such a far-reaching transition could also secure our long-term energy independence.

Prism Power, prismpower.co.uk

ENHANCING DATA CENTRE ENERGY STORAGE THROUGH INNOVATIVE BATTERY CHEMISTRIES

David Keating, Sales and Marketing Director at Echion Technologies, explains why advanced energy storage is becoming increasingly central to maintaining resilience, efficiency, and power stability in data centres.

AI has already become an integral part of everyday life: driving efficiencies, simplifying processes, and acting as a major catalyst for the ongoing technological revolution. Its transformative power is being utilised by businesses, innovators, and the public alike. But as its usage increases at pace, so does the amount of electricity needed to fuel it, placing incredible amounts of strain on national grids and drawing into focus the need for new solutions to enhance the resilience and sustainability of operations.

The data centres powering AI technologies are already consuming considerable amounts of power and water across the world, and this is set to rise significantly over the next five years. The International Energy Agency predicts that, by 2030, data centre electricity consumption will reach around 945 terawatt-hours; this equates to more than Japan’s total electricity consumption today.

AI has the potential to drive significant economic growth, facilitate innovation, and, ultimately, enhance the lives of people across the world. However, its continued growth and scalability will rely upon the industry’s willingness to explore innovative energy storage solutions.

THE SHIFTING REQUIREMENTS OF ENERGY STORAGE

AI data centres operate at extremely high power densities and experience significant fluctuations in load requirements. Effective energy storage units can act as a buffer to circumvent power destabilisation and keep facilities in operation without the need for carbon-intensive diesel generators. Energy storage also allows facilities located in areas with limited grid capacity to bank energy when grid intensity is low, as well as utilise on-site solar and/or wind energy production to ensure smooth operation, grid stability, and reduced emissions. That being said, for energy storage to play its pivotal role in the continued success of AI technology, innovation must be embraced at a granular level.

To meet high power densities and to weather frequent load spikes, battery systems must be able to discharge and recharge at high speed without incurring heating issues, losing significant efficiency, or limiting lifecycle. The way modern data centres operate places intense pressure on energy storage units, and high-frequency cycling can dramatically shorten battery life. This, then, incurs an increase in cost and emissions as units have to be replaced more often.

Safety is also a major concern. The unique power requirements being placed by facilities on outdated, off-the-shelf battery solutions can lead to an increase in heat generation and, in the worst cases, critical thermal runaway. To counteract this, oversized cooling systems are installed, which results in an even greater cost to operators, higher emissions, and heightened pressure on the local water supply.

WHICH BATTERIES ARE BEST?

Most data centres utilising energy storage operate lithium-ion or lead-acid battery systems. Standard lithium-ion batteries – like the batteries installed in most passenger EVs – are generally the preferred choice, but they degrade quickly when dealing with frequent power fluctuations. These batteries can lose as much as 3% of capacity annually while under moderate operating conditions and, crucially, are subject to potential thermal runaway, which can cause deadly fires.

Lead-acid batteries are commonly used by warehouse vehicles such as forklift trucks, but in the setting of an AI data centre, their performance does not meet the requirements. They are at risk of failure while operating at just 30% capacity, and their large size, weight, and long recharge times make them an unviable solution.

To address these shortcomings, innovative battery chemistries must be explored. Batteries are formed of two electrodes: the cathode and the anode. The anode is typically the limiting factor in battery performance, as it controls the rate at which the cells can safely charge and discharge. By innovating at the anode level, significant improvements can be made in battery performance.

One such material is our niobium-based XNO anode material, specifically engineered for high power, long life, and safety. It enables lithium-ion batteries to accept high currents safely and repeatedly, allowing instantaneous peak-shaving and near-immediate response to load fluctuations. Furthermore, an XNO-based uninterruptible power supply (UPS) can be much more compact than current ESS solutions, at roughly a tenth of the size of an equivalent lead-acid system and a third of a lithium-ion one, freeing up critical space within a data centre.

XNO also increases cycle life significantly. Units equipped with XNO maintain performance at high power for over 10,000 cycles, meaning units have to be replaced less often, which, in turn, leads to improvements economically and environmentally. Finally, the material maintains structural integrity and avoids dendrite formation, even while under very high charge rates.

Every 1% improvement in energy efficiency is enormous, with roughly 1.72 terawatt-hours of electricity saved annually – equating to about $146 million (£108 million) in energy saved (based on $85 (£63) per megawatt hour). Therefore, raising efficiency by even just a few percent through better chemistry translates into hundreds of millions in savings.
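
The arithmetic behind that figure is straightforward:

\[ 1.72\ \text{TWh} = 1{,}720{,}000\ \text{MWh}; \qquad 1{,}720{,}000\ \text{MWh} \times \$85/\text{MWh} \approx \$146\ \text{million}. \]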

As governments and businesses around the world strive to win the AI race, computing power and innovations naturally grab the headlines. However, maintaining power stability and sustainability will be crucial in determining who emerges as the ultimate winners in the future. Investing into innovative battery solutions may well be the catalyst in enabling this to happen.

Echion Technologies, echiontech.com

MITIGATING POWER SYSTEM STRESS

Arturo Di Filippi, Global Offering Director for Large Power at Vertiv, explores how modern data centres are managing the variable fluctuations caused by fast-changing AI workloads.

Growing AI workloads place pressure on data centre power systems through the speed and repetition of their demand, rather than capacity alone. This behaviour introduces a new class of operational stress.

While overall power demand may remain within installed capacity, the rate at which that demand changes places strain on electrical components across the power train, from the uninterruptible power supply (UPS) through to generators, transformers, grid connections, and batteries. The challenge lies in managing these dynamics without compromising reliability, power quality, or equipment lifespan.

THE EVOLVING ROLE OF THE UPS

UPS systems sit at a critical junction between volatile AI loads and more rigid upstream infrastructure. They were originally designed primarily to provide clean, stable power to sensitive equipment, with batteries reserved to supply energy during mains failures. Under AI workloads, that model becomes less clear-cut.

Fast load transitions can trigger frequent, shallow battery discharges as the UPS reacts to each power step. While individually small, these micro-discharges accumulate. The result is increased battery cycling, reduced service life, and a gradual erosion of available backup capacity. This outcome conflicts with the core purpose of energy storage within the UPS.

One response is to prevent unnecessary battery engagement during normal operation. By managing transient energy internally using DC link capacitors and adjusted control strategies, the UPS can absorb rapid load changes without drawing on stored energy. This approach protects batteries from repeated cycling and preserves their availability for genuine outage conditions.

At the same time, there are operating scenarios where controlled battery participation becomes beneficial. When AI-driven load volatility reaches upstream infrastructure directly, stabilising the input power profile takes priority. This is particularly relevant for sites relying on on-site generation, where generators have defined limits on how quickly load can change.

INPUT POWER SMOOTHING TO PROTECT UPSTREAM SYSTEMS

Input Power Smoothing (IPS) uses the UPS and battery system as a short-term buffer between fluctuating loads and upstream sources. Rather than reflecting rapid output changes back to the grid or generator, the UPS maintains a steadier input demand. Batteries absorb or inject power to compensate for short-term differences between average and instantaneous load.

This smoothing effect reduces stress on generators, transformers, and switchgear. It also helps maintain compliance with grid operator requirements around ramp rates and harmonic distortion. The approach is configurable, allowing operators to define acceptable fluctuation ranges and time windows based on site conditions and battery capabilities.

Effective smoothing depends on careful management of battery state of charge (SOC). Operating within a defined SOC window enables sufficient energy to be reserved for backup while allowing controlled charge and discharge during load transients. When SOC approaches minimum thresholds, smoothing activity reduces or stops, preserving resilience.
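
The control logic can be pictured with a minimal sketch: the battery follows a rolling-average input target and stops intervening when the SOC window is reached. The Python below is an illustrative simulation only, with assumed capacity, window, and thresholds; it is not Vertiv's control implementation.

# Minimal sketch of input power smoothing bounded by an SOC window (illustrative values).
def smooth_input(load_kw, dt_s=1.0, capacity_kwh=500.0, soc_min=0.4, soc_max=0.9, window_s=60):
    """Return the input power profile seen upstream when a battery buffers the load."""
    soc = 0.7 * capacity_kwh                      # start inside the SOC window (kWh)
    history, smoothed = [], []
    for p in load_kw:
        history.append(p)
        if len(history) > window_s / dt_s:
            history.pop(0)
        target = sum(history) / len(history)      # rolling-average input target (kW)
        battery_kw = p - target                   # positive = discharge, negative = charge
        next_soc = soc - battery_kw * dt_s / 3600.0
        if not (soc_min * capacity_kwh <= next_soc <= soc_max * capacity_kwh):
            battery_kw, next_soc = 0.0, soc       # SOC limit hit: stop smoothing, pass load through
        soc = next_soc
        smoothed.append(p - battery_kw)
    return smoothed

Fed with a bursty load profile, the function returns a flatter upstream demand curve until the SOC band is exhausted – exactly the behaviour described above.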

During generator-supported operation, SOC thresholds may adapt to maintain smoothing while still protecting the battery. This flexibility allows continuity of operation during transitions from grid to backup power, when load stability is particularly important.

THE IMPORTANCE OF BATTERY PERFORMANCE AND SIZING

The effectiveness of any power stabilisation strategy depends heavily on battery characteristics. AI workloads favour batteries capable of high charge and discharge rates. Where recharge rates lag behind discharge, smoothing can only be sustained for limited periods before SOC declines.

High C-rate battery systems allow the UPS to respond rapidly to frequent load oscillations, maintaining equilibrium between charging and discharging. This capability is especially important in environments supported by on-site generation, where stable input power is critical to generator operation.

Battery capacity also plays a role. Simulations show that limited energy storage constrains the duration and extent of smoothing. As capacity increases, the system can maintain stable input power for longer periods under the same load profile. This relationship introduces a new sizing consideration for AI data centres beyond traditional autonomy calculations.

RETHINKING UPS AND POWER TRAIN SIZING

AI workloads challenge conventional approaches to power train design. Historically, UPS and battery sizing focused on outage duration and redundancy. With AI-driven volatility, sizing must also account for dynamic behaviour.

An effective approach starts with characterising the load profile. Understanding the amplitude, frequency, and duty cycle of power transitions provides a foundation for system design. From there, operators can define smoothing objectives based on upstream constraints and operational priorities. Battery systems are then selected to sustain the required level of smoothing without driving SOC below defined thresholds. Recharge equilibrium becomes a key metric, helping to manage the balance between discharge during load peaks and recharge during lower-demand periods. Without this balance, smoothing performance degrades over time.
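
One way to express the recharge-equilibrium condition, over a representative operating window T and with round-trip efficiency \( \eta \) (an illustrative formulation, not a Vertiv specification):

\[ \int_{0}^{T} \max\!\big(P_{\text{load}}(t) - P_{\text{in}}(t),\, 0\big)\, dt \;\le\; \eta \int_{0}^{T} \max\!\big(P_{\text{in}}(t) - P_{\text{load}}(t),\, 0\big)\, dt \]

If the discharge side of this inequality dominates, SOC drifts towards its lower threshold and smoothing capability is progressively lost.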

Thermal considerations also come into play. Repeated overload conditions increase temperatures within power electronics, affecting efficiency and longevity. Correct sizing helps maintain thermal stability and reduces cumulative stress on components.

PREPARING FOR LONG-TERM AI GROWTH

As AI deployment accelerates, load volatility is likely to become a permanent feature of data centre operation. Addressing it requires a shift in how power systems are specified, controlled, and evaluated.

Design decisions increasingly depend on understanding real workload behaviour rather than relying on static assumptions. Control strategies, battery performance, and system sizing all play interconnected roles in maintaining reliability.

For data centre operators, the task is to align power infrastructure with the realities of AI computing. Doing so helps protect upstream systems, extend equipment life, and maintain reliability in environments defined by speed, repetition, and constant change.

Vertiv, vertiv.com

HOW UK DATA CENTRES ARE TACKLING UNPRECEDENTED ELECTRICAL DEMANDS

With the world’s biggest companies announcing major AI data centre expansions across the UK, Janitza’s David Gilligan (VP Critical Power Solutions & Technology) and Roshan Rajeev (VP of Engineering) provide some answers to the question: can Britain’s electrical infrastructure cope?

The country faces a near-20% surge in data centre capacity, with over 100 new projects in the pipeline to build upon its existing sites, which already total more than 450. This explosive growth, fuelled by soaring AI computing demands, is spawning clusters around London and the Thames corridor, as well as emerging hubs in Manchester, Leeds, Wales, and Scotland.

The answer reveals uncomfortable truths about the collision between AI ambitions and power grid reality. Across the UK, data centre operators are discovering that artificial intelligence workloads don’t just consume more electricity; they consume it differently. This creates power quality challenges that threaten equipment reliability, grid stability, and the viability of aggressive AI deployment timelines.

A NEW ELECTRICAL LANDSCAPE

Traditional data centres are predictable: server racks hum along at steady loads, cooling systems maintain consistent draw, and power consumption follows recognisable daily patterns. AI has shattered that predictability.

The electrical behaviour we’re seeing with AI workloads is fundamentally different. Model training creates sustained loads in the megawatt range, constituting massive base loads that stress utility infrastructure. However, inference operations are even more challenging from a power quality perspective.

Inference, the process of running AI models to generate outputs, creates what engineers call “burst activity”. A facility might spike from baseline to peak consumption in seconds as thousands of users simultaneously query large language models (LLMs). These high-power, short-duration surges – sometimes increasing demand by 100 kilowatts within 10 seconds – generate voltage sags, transients, and flicker that propagate through electrical systems.

The challenge extends beyond raw capacity to the dynamic nature of AI loads. Traditional data centres might vary by 10 or 15% throughout the day, following predictable patterns as businesses open and close. By contrast, AI facilities can experience 50% load swings within minutes, creating extraordinary difficulties for distribution networks designed around more stable consumption profiles.

Live power quality data is non-negotiable as AI workloads are constantly evolving. You need visibility into voltage harmonics, current distortion, and power factor variations, and you need to access this data remotely so you can respond immediately. This visibility relies on modern, standards-compliant monitoring and analysis hardware, an area in which Janitza, for example, has established its global reputation.

The hardware compounds the problem. Graphics processing units (GPUs) and tensor processing units powering AI computation draw massive amounts of current in highly non-linear patterns. This creates harmonic distortion, electrical ‘noise’ that can damage transformers, interfere with sensitive equipment, and reduce system efficiency. In colocation environments hosting multiple tenants, aggregate harmonic distortion at the point of common coupling can exceed safe thresholds, affecting all customers.

BRITAIN’S INFRASTRUCTURE REALITY

These technical challenges arrive as UK data centres face significant grid capacity constraints. According to industry analysts, grid connection queues have stretched to unprecedented lengths, with some projects facing waits of five years or more. The National Energy System Operator (NESO) for the UK has warned that data centre electricity demand could increase sixfold by 2030, requiring network reinforcement.

For AI facilities, the situation is acute. A single, large-scale AI data centre can require 100 MW or more, roughly the output of a small gas turbine plant. Distribution network operators, already managing connections for renewable energy projects, struggle to accommodate these enormous, unpredictable loads without risking grid stability.

The power quality challenges we’re seeing in AI data centres represent a step change from traditional facilities. Operators must monitor electrical parameters that previously weren’t critical: transients as brief as 18 microseconds, voltage harmonics up to the 127th order, and rapid load fluctuations that can destabilise distribution networks.

The regulatory environment adds complexity. UK planning frameworks, designed for conventional development, struggle to accommodate the speed of AI deployment. Energy efficiency requirements under Building Regulations Part L and ESOS compliance create additional hurdles for operators trying to balance performance with sustainability commitments.

SOLUTIONS TAKING SHAPE

Forward-thinking operators are implementing sophisticated approaches to manage these challenges. Real-time power quality monitoring has evolved from optional to essential, with comprehensive systems capturing electrical parameters at millisecond intervals to detect problems before they cascade into failures.

High availability in data centres demands continuous monitoring according to standards like IEC 61000-2-4 and IEEE 519. The challenge with AI workloads is that traditional monitoring can’t capture the speed and complexity of electrical events. You need devices that can record fast transients whilst simultaneously tracking harmonic distortion across dozens of measurement points.
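
For readers unfamiliar with the metric, total harmonic distortion (THD) – the quantity IEEE 519 places limits on – can be estimated from a sampled waveform in a few lines of Python. This is an illustrative calculation on synthetic data, not the method used inside any particular monitoring device.

import numpy as np

def thd(samples, fs, f0=50.0, max_order=50):
    """Estimate total harmonic distortion of a waveform sampled at fs Hz with fundamental f0."""
    spectrum = np.abs(np.fft.rfft(samples)) / len(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    def mag(f):
        return spectrum[int(np.argmin(np.abs(freqs - f)))]   # magnitude of the nearest bin
    harmonics = [mag(n * f0) for n in range(2, max_order + 1) if n * f0 < fs / 2]
    return np.sqrt(sum(h ** 2 for h in harmonics)) / mag(f0)

# Synthetic example: a 50 Hz supply with a 5% fifth-harmonic component
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 250 * t)
print(f"THD = {thd(v, fs):.1%}")                              # prints roughly 5%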

Battery energy storage systems (BESS) have emerged as a critical tool for load smoothing in data centres. By absorbing demand spikes and releasing power during lulls, BESS installations help flatten the load profile presented to the grid, reducing stress on upstream infrastructure. Several UK facilities have already deployed multi-megawatt battery systems specifically to manage AI workload variability.

On-site generation is gaining traction too. Combining grid power with natural gas generators or hydrogen-ready systems provides redundancy while reducing grid dependence during peak AI operations. Some operators are now also exploring hybrid models that integrate renewable generation with storage and conventional backup power.

LOOKING AHEAD

As the UK Government pursues AI infrastructure investment as part of broader digital economy ambitions, the power quality challenge will intensify. Projections suggest AI computing capacity must increase tenfold by 2028 to meet anticipated demand, requiring additional grid connections and advanced monitoring.

The sector’s response will determine whether the UK can realise its AI ambitions or lose ground to regions with more robust electrical infrastructure. Success requires collaboration across the ecosystem: investment in monitoring and mitigation technology from operators, network upgrades from utilities, flexible regulatory frameworks, and new talent pipelines.

The AI revolution promises transformation on every level, but it must first overcome the challenge of maintaining stable, efficient, and resilient energy systems in the data centres that power the innovation.

Janitza, janitza.com

THE SHAPE OF ENERGY TO COME: AVK CHARTS THE COURSE FOR DATA CENTRE POWER INFRASTRUCTURE

DCNN was delighted to be invited to attend AVK’s end-of-year event, ‘The Shape of Energy to Come’, hosted on 3 December at Pavilion City in London’s financial district. The event covered a myriad of topics, and you can read all about it here in our review.

Looking out over the dramatic London skyline from the Cannon Green building, just metres from Cannon Street station, AVK assembled industry stakeholders for an afternoon that proved as ambitious in scope as it was strategic in timing.

The session moved swiftly through five critical pillars of data centre power evolution. Opening discussions involving AVK’s CEO, Ben Pritchard (pictured above, right), centred on the rise of microgrids, which are autonomous energy ecosystems designed to decouple facilities from grid dependency whilst accelerating decarbonisation timelines.

This segued naturally into AVK’s headline announcement: a new multi-year capacity framework with Rolls-Royce Power Systems, building upon their established System Integrator Agreement. The arrangement comprises both a five-year capacity partnership and a six-year exclusive System Integrator designation for mtu generator sets across the UK and Ireland. It is aimed at addressing one of the sector’s most pressing constraints, namely guaranteed access to critical power equipment during unprecedented demand.

THE FUTURE OF SUSTAINABLE OPERATIONS

Subsequent presentations explored the practical integration of renewables into 24/7 operations, with AVK and Wärtsilä demonstrating flexible, dispatchable power solutions that bridge intermittency challenges. The partnership on carbon capture technology – CO2 being captured, reused, and monetised – offered a compelling vision of emissions as a revenue stream rather than a liability. You can read the full findings of their research in the whitepaper they have jointly published.

The closing discussion on re-industrialising the UK power chain proved resonant, positioning AVK’s new modular manufacturing facility as both a technological necessity and an economic catalyst for skills development.

The overarching takeaway was that AVK is orchestrating a coordinated industrial response to data centre power demands, moving decisively beyond equipment supply towards integrated energy solutions. As capacity crunches intensify across Europe, this strategic positioning appears remarkably prescient.

AVK, avk-seg.com

Is your data centre ready for the growth of AI?

Partner with Schneider Electric for your AI-Ready Data Centres. Our solution covers grid to chip and chip to chiller infrastructure, monitoring and management software, and services for optimization.

Explore our end-to-end physical and digital AI-ready infrastructure scaled to your needs.

se.com/datacentres
