DCNN Winter 2025



Liquid Cooling AI

BENDBRIGHTXS 160 µm: A MAJOR ADVANCE IN NETWORK MINIATURISATION

The world’s first 160 µm bend-insensitive fibre – higher capacity, smaller footprint

Unlock record cable density for today’s space-constrained ducts, buildings, and data centres with the BendBright XS 160 µm single-mode fibre.

By shrinking the coating while maintaining a 125 µm glass diameter and full G.652/G.657.A2 compliance, it delivers faster, more cost-effective deployments that don’t disrupt established field practices.

• Enables slimmer, lighter, high-count cables that install faster and travel further when blown

• Packs dramatically more fibres into the same pathway thanks to a >50% cross-sectional area reduction versus 250 µm fibres

• Ensures full backward compatibility with legacy single-mode fibres for seamless splicing and upgrades

• Provides superior bend performance and mechanical reliability with Prysmian’s ColorLockXS coating system

• Ideal for FTTH/X access, metro densification, and hyperscale/data centre interconnects where every millimetre counts

Ready to boost capacity and shrink your network footprint?

Connect with our experts, request samples, or download the BendBright XS 160 µm technical data today.

WELCOME TO THE WINTER ISSUE OF DCNN!

Welcome to this latest edition of DCNN! As we settle into the long winter nights and begin to unwind for the festive period, it’s clear that the pace of development in the data centre world is doing anything but. Whilst thoughts shift to relaxation and time with loved ones, we simultaneously find ourselves navigating new challenges on a global scale.

For instance, in Europe, we see the stark impact of how heavily the continent depends on foreign – particularly American – hyperscalers. Recent outages across global cloud infrastructure illustrate this best: when services hosted on AWS or safeguarded by Cloudflare collapse, the disruption to European public services, factories, and more is immediate. We can observe how, in response to this

CONTACT US

GROUP EDITOR: SIMON ROWLEY

T: 01634 673163

E: simon@allthingsmedialtd.com

ASSISTANT EDITOR: JOE PECK

T: 01634 673163

E: joe@allthingsmedialtd.com

ADVERTISEMENT MANAGER: NEIL COSHAN

T: 01634 673163

E: neil@allthingsmedialtd.com

SALES DIRECTOR: KELLY BYNE

T: 01634 673163

E: kelly@allthingsmedialtd.com

increasingly uneasy reality characterised by a distinct lack of data sovereignty, European nations are finally beginning to take some action – but how swift and how effective that action will be remains to be seen.

As we look to the new year, it’s essential for all stakeholders to remain agile and informed amidst seemingly incessant global changes, including the many influences beyond the tech space itself. More generally, as is clear from the features in this issue, we can at least rest assured that the coming year promises much transformation.

I hope you enjoy the edition!

Joe

STUDIO: MARK WELLER

T: 01634 673163

E: mark@allthingsmedialtd.com

MANAGING DIRECTOR: IAN KITCHENER

T: 01634 673163

E: ian@allthingsmedialtd.com

CEO: DAVID KITCHENER

T: 01634 673163

E: david@allthingsmedialtd.com

Steven Carlini of Schneider Electric outlines how new collaborations with NVIDIA and ecosystem partners are accelerating AI-ready power, cooling, and digital-twin capabilities

DCNN speaks with Jens Holzhammer of Panduit about the company’s approach to ongoing supply pressures, rising power densities, and more

Key Issue

Mike Hellers of LINX and Mike Hoy of Pulsant explore how Scotland’s reliance on distant internet hubs undermines performance, security, and sustainability

Joe from DCNN reports back from Schneider Electric’s recently held Innovation Summit in Copenhagen

24 Chris Cutler of Riello explores how using SiC semiconductors could help the next generation of modular UPS systems ensure operators can balance reliability with sustainability

28 Karl Bateson of Centiel reflects on the pressures created by rapid, AI-driven development and how UPS strategies are evolving to keep pace

32 Ricardo de Azevedo of ON.energy explains why traditional UPS designs are falling short as AI campuses overwhelm utility systems

36 Domagoj Talapko of ABB details how new UPS strategies are helping meet growing power and reliability requirements

42 Dennis Mattoon of TCG explores how trusted computing standards aim to protect facilities against increasingly sophisticated hardware and firmware attacks

46 Cory Hawkvelt of NexGen Cloud outlines how sovereignty, regulation, and hardware-level security are reshaping the safeguards required for mission-critical AI workloads

50 Brian Higgins of Group 77 examines how AI, cyber-physical convergence, and rising environmental pressures are reshaping safety and security across modern data centres

CABLING

ENERGY MANAGEMENT

56 Michael Akinla of Panduit outlines how standards-aligned fibre design helps AI SuperPods manage rising MPO density and prepare for rapid transitions in ethernet speeds

70 Amanda Springmann of Prysmian explains how reduced-diameter single-mode fibre and high-density cable designs are helping operators keep pace with rapidly escalating optical demands

76 Carson Joye of AFL examines the ultra-high-density fibre innovations reshaping hyperscale and long-haul networks

SPECIAL FEATURES

21 Advertorial

Baudouin explores how a century of engineering expertise translates into gensets built for the uncompromising demands of modern data centres

54 Edge Computing

Niklas Lindqvist of Onnec outlines the design principles that ensure edge facilities can meet rising AI-driven performance and efficiency demands

60 Carsten Ludwig of R&M suggests how IMDCs can deliver the energy efficiency needed to support high-density AI workloads

63 Arturo Di Filippi of Vertiv explains how modular, grid-interactive power systems are redefining efficiency and resilience in an era of constrained energy supply

66 Stephen Yates of IT Cleaning outlines the relationship between cleanliness, airflow dynamics, and energy efficiency in data centres

74 Bonus Feature

Ian Lovell of Reconext analyses how shifting risk perceptions, tighter power constraints, and proven recovery methods are reshaping end-of-life policy inside modern data centres

WINTER 2025

Show Previews: DCW London

EQUINIX ANNOUNCES £3.9 BILLION UK DATA CENTRE

Equinix has acquired an 85-acre site in Hertfordshire for a planned data centre campus, announcing a £3.9 billion investment expected to deliver more than 250 MW of capacity. The company says the development will support organisations across sectors such as healthcare, public services, finance, manufacturing, and entertainment, and forms part of wider ambitions around UK sovereign AI capability.

The site, formerly known as DC01UK, is forecast to create around 2,500 construction jobs and more than 200 permanent roles. KPMG estimates the project could generate up to £3 billion in annual Gross Value Added during construction, and £260 million once operational.

Equinix says it will work with local partners on education, skills, and environmental initiatives, and plans to retain over half the land as open space, introduce ecological habitats, and use dry cooling to support at least a 10% biodiversity net gain. The campus will also be designed to enable future heat reuse.

Equinix, equinix.com

ATNORTH’S DEN01 TO SUPPLY DISTRICT HEATING IN COPENHAGEN

atNorth, an operator of sustainable data centres, has agreed a partnership with Vestforbrænding, Denmark’s largest waste-to-energy company, to supply excess heat from its forthcoming DEN01 data centre campus into the district heating network serving Greater Copenhagen.

DEN01, a 22.5MW site in Ballerup, is scheduled to open in early 2026. Through the collaboration, warm water generated as a by-product of direct liquid cooling will be transferred into Vestforbrænding’s network from 2028. The recovered heat is expected to support the heating of more than 8,000 homes, reducing energy consumption for local central heating and lowering emissions for both organisations.

Steen Neuchs Vedel (pictured right), CEO of Vestforbrænding, says, “For many years, we have talked about surplus heat from data centres being part of the future. Now, the future is here.”

Eyjólfur Magnús Kristinsson (pictured left), CEO at atNorth, adds, “As the demand for AI-ready digital infrastructure continues to increase, it is imperative that data centre companies scale in a responsible way.”

atNorth, atnorth.com

AVK, a supplier of data centre power systems in Europe, and Rolls-Royce Power Systems have announced a new multi-year capacity framework as an addition to their established System Integrator Agreement.

The new partnership cements closer collaboration between the two companies, with a focus on increasing industrial capacity for genset orders whilst accelerating joint innovation across the data centre and critical power markets. It comes 12 months after AVK announced a record-breaking year of sales, with 2024 seeing AVK deliver its 500th mtu generator from Rolls-Royce.

Under the new Memorandum of Understanding, their relationship moves on to become a strategically integrated, longer-term alliance. The framework formalises a five-year capacity partnership, with Rolls-Royce increasing supply and AVK committing to order volume.

A parallel six-year master framework designates AVK as the exclusive System Integrator for mtu generator sets across the UK and Ireland until 2031.

AVK, avk-seg.com

LINX ACHIEVES 25 X 400GE PORT COUNT MILESTONE

The London Internet Exchange (LINX) has successfully deployed its 25th 400GE port across its global network, a major milestone within the wider industry’s growth.

The first 400GE port was provisioned in 2021, with demand for ultra-high bandwidth connectivity surging since then, driven by the exponential growth in cloud services, video streaming, gaming, and AI workloads. A single 400GE connection delivers four times the capacity of a standard 100GE port, enabling networks to consolidate traffic, reduce operational complexity, and build in resilience.

LINX has invested heavily in Nokia’s next-generation hardware and optical technologies to enable 400GE delivery across its interconnection ecosystems in London and Manchester.

Jennifer Holmes, CEO of LINX, comments, “Reaching 25 active 400GE ports is a testament to the evolving needs of our members and the strength of our technical infrastructure.”

LINX, linx.net

AECOM APPOINTED TO DELIVER DATA CENTRE IN SPAIN

AECOM has been selected by Nostrum Data Centers to lead the design and construction management of a new data centre in Badajoz, Spain. With an investment exceeding €1.9 billion (£1.6 billion), ‘Nostrum Evergreen’ is one of Spain’s most ambitious digital infrastructure projects, with capacity expected to reach 500 megawatts.

The first phase includes the design and construction of data halls and critical operational infrastructure, with an initial capacity of 150 megawatts electric (MWe). The second phase, scheduled to begin in early 2029, will allow the site to reach 300 MWe. The complex’s design will enable scalability up to 500 MWe.

“This data centre in Badajoz will have a total capacity similar to the combined capacity of all current operational data centres in Spain,” says Gabriel Nebreda (pictured left), CEO of Nostrum Group.

The project expects to obtain its building permit by mid-2026 and has already secured electrical capacity and more than 200,000 m² of ready-to-build industrial land.

AECOM, aecom.com

OXFORD INSTRUMENTS TECHNOLOGY SUPPLIED TO QUANTUM-AI DATA CENTRE

Oxford Instruments NanoScience, a UK provider of cryogenic systems for quantum computing and materials research, has supplied one of its advanced Cryofree dilution refrigerators, the ProteoxLX, to Oxford Quantum Circuits’ (OQC) newly launched quantum-AI data centre in New York.

As the first facility designed to co-locate quantum computing and classical AI infrastructure at scale, the centre will use the ProteoxLX’s cryogenic capabilities to support OQC’s next-generation quantum processors, helping to advance the development of quantum-enabled AI applications.

The announcement follows the opening of OQC’s New York-based quantum-AI data centre, powered by NVIDIA CPU and GPU Superchips. Within OQC’s logical-era quantum computer, OQC GENESIS, the ProteoxLX provides the ultra-low temperature environment needed to operate its 16 logical qubits, enabling over 1,000 quantum operations.

Matthew Martin, Managing Director at Oxford Instruments NanoScience, says, “We’re proud to support OQC in building the infrastructure that will define the next generation of computing.”

Oxford Instruments NanoScience, nanoscience.oxinst.com


FROM GRID TO CHIP: HOW SCHNEIDER ELECTRIC IS ENABLING AI’S FUTURE WITH NVIDIA

Steven Carlini, Chief Advocate, Data Center and AI at Schneider Electric, outlines how new collaborations with NVIDIA and ecosystem partners are accelerating AI-ready power, cooling, and digital-twin capabilities for gigawatt-scale deployments.

The compute requirements for AI reasoning and other inference workloads are outpacing traditional data centre designs, creating an urgent need for high-density power and advanced cooling. Schneider Electric is collaborating on the newly announced NVIDIA Omniverse DSX Blueprint for multi-generation, gigawatt-scale build-outs, using NVIDIA Omniverse libraries and SDKs that will set a new standard of excellence for AI infrastructure.

As the world’s leading provider of power distribution and liquid cooling solutions for data centres and AI factories, Schneider Electric is working with NVIDIA and other ecosystem partners at the AI Factory Research Center in Virginia to form a pathway to develop next-generation, AI-ready infrastructure faster and with greater efficiency and performance.

These are the most recent highlights of Schneider Electric’s innovation and capability commitments to the partnership:

Earlier this year, Schneider Electric shared plans to invest more than $700 million (£525 million) in its US operations between now and 2027. These planned investments are intended to support national efforts to strengthen energy infrastructure in response to growing demand across data centres, utilities, manufacturing, and energy sectors – particularly as AI adoption accelerates. Building on investments made in 2023 and 2024 to reinforce its North American supply chain, the initiative includes potential manufacturing expansions and workforce development. These efforts reflect strong customer demand for solutions that improve energy efficiency, scale industrial automation, and enhance grid reliability.

Together, these plans are designed to enable innovation, especially through new reference designs developed in collaboration with NVIDIA and the integration of Schneider Electric’s digital twin ecosystem.

REFERENCE DESIGNS

In September, Schneider Electric announced new reference designs developed with NVIDIA that significantly accelerate time to deployment and aid operators as they adopt AI-ready infrastructure solutions.

The first reference design delivers one of the industry’s first and only critical frameworks for integrated power management and liquid cooling control systems – including Motivair by Schneider Electric liquid cooling technologies – and enables seamless management of complex AI infrastructure components. It includes interoperability with NVIDIA Mission Control – NVIDIA’s AI factory operations and orchestration software, including cluster and workload management features. The control systems reference design can also be utilised with Schneider Electric’s data centre reference designs for NVIDIA Grace Blackwell systems, enabling operators to keep pace with the latest advancements in accelerated computing with seamless control of their power and liquid cooling systems.

The second reference design focuses on the deployment of AI infrastructure for AI factories of up to 142 kW per rack – specifically the NVIDIA GB300 NVL72 racks – in a single data hall. Created to provide a framework for the next-generation NVIDIA Blackwell Ultra architecture, the reference design covers four technical areas: facility power, facility cooling, IT space, and lifecycle software. The design is available in configurations for both American National Standards Institute (ANSI) and International Electrotechnical Commission (IEC) standards.

DIGITAL TWINS

Leveraging the NVIDIA Omniverse Blueprint for AI factory digital twins, Schneider Electric and ETAP enable the development of digital twins that bring together multiple inputs for mechanical, thermal, networking, and electrical systems to simulate how an AI factory operates. The collaboration is set to transform AI factory design and operations by providing enhanced insight and control over the electrical systems and power requirements, presenting an opportunity for significant efficiency and reliability gains.

Schneider Electric has built on this virtual modelling capability by also enabling enterprises of the future to optimise the electrical infrastructure that supports top-tier accelerated compute environments.

Collaboration between ETAP and NVIDIA introduces an innovative ‘Grid to Chip’ approach that addresses the critical challenges of power management, performance optimisation, and energy efficiency in the era of AI. Currently, data centre operators can only estimate average power consumption at the rack level; ETAP’s new digital twin aims to model dynamic load behaviour at the chip level with greater precision, improving power system design and optimising energy efficiency.

This collaborative effort highlights the commitment of both ETAP and NVIDIA to driving innovation in the data centre sector, empowering businesses to optimise their operations and effectively manage the challenges associated with AI workloads. The collaboration aims to enhance data centre efficiency while also improving grid reliability and performance.

AI INFRASTRUCTURE DOESN’T STOP HERE

These innovations underscore Schneider Electric’s commitment to unlocking the future of AI by pairing its data centre expertise with NVIDIA-accelerated platforms. Together, we’re helping customers overcome infrastructure limits and scale efficiently and at speed. The progression of advanced, future-forward AI infrastructure doesn’t stop here.

Schneider Electric, se.com

Integrated Modular Data Centres: Strategic solutions for pressing issues!

DCs are facing growing challenges such as rising power demands, labour shortages, and the rapid growth of AI workloads. Traditional approaches are often too slow, costly, and unsustainable where speed, efficiency, and scalability are required.

R&M addresses this with modular, ready-to-use solutions. These support key areas including servers and storage, computing rooms, meet-me rooms, and interconnects.

Scan to contact us.

Reichle & De-Massari (R&M)

Keep IT cool in the era of AI

EcoStruxure IT Design CFD by Schneider Electric helps you design efficient, optimally-cooled data centers

Optimising cooling and energy consumption requires an understanding of airflow patterns in the data center whitespace, which can only be predicted by and visualised with the science of computational fluid dynamics (CFD).

Now, for the first time, the technology Schneider Electric uses to design robust and efficient data centers is available to everyone.

• Physics-based predictive analyses

• Follows ASHRAE guidelines for data center modeling

• Designed for any skill level, from salespeople to consulting engineers

• Browser-based technology, with no special hardware requirements

• High-accuracy analyses delivered in seconds or minutes, not hours

• Supported by 20+ years of research and dozens of patents and technical publications

Equipment Models – Easily choose from a range of data center equipment models, from racks to coolers to floor tiles.

CFD Analysis – The fastest full-physics solver in the industry, delivering results in seconds or minutes, not hours.

Cooling Check – At-a-glance performance of all IT racks and coolers.

Visualisation Planes and Streamlines – Visualise airflow patterns, temperatures, and more.

Reference Designs – Quickly start your design from pre-built templates.

Cooling Analysis Report – Generate a comprehensive report of your data center with one click.

IT Airflow Effectiveness and Cooler Airflow Efficiency – Industry-leading metrics guide you to optimise airflow.

Room and Equipment Attributes – Intuitive settings for key room and equipment properties.

NAVIGATING DEMAND, DENSITY, AND LOCALISATION IN A SHIFTING DATA CENTRE LANDSCAPE

Earlier this month, Joe from DCNN sat down with Jens Holzhammer, SVP and Managing Director EMEA at Panduit, within the stunning setting of the Schlosshotel Kronberg, on the outskirts of Frankfurt, to find out about his company’s approach to ongoing supply pressures, rising power densities, and more.

Joe: Thank you for speaking with us today, Jens. To start, could you tell us about your role at Panduit?

Jens: I joined the organisation pretty much a year ago now. The role essentially encompasses the whole commercial organisation for the EMEA theatre, and that starts with sales and ends with the operational side of things. Being an American company, we are matrixed, so there are heavy links into manufacturing as well as logistics and the supply chain. My role covers the whole strategic ‘go-to-market’ side of things right down to the very day-to-day operational aspects of making customers happy, customer acquisition, and partner management.

The drinks reception at the Schlosshotel Kronberg, Frankfurt

Joe: What are some of the key challenges you’re facing at the moment?

Jens: There is a huge demand for data-centre-related products in the market. Demand currently far outweighs supply, and it’s not only us who are impacted by that, but also other vendors in the data centre market. Anything that goes into high-density fibre cable connectivity – including ducting for all the power and fibre cables, as well as copper cable – is in high demand. There is not enough supply in the market, and therefore everyone is struggling a little bit. It’s a nice challenge – or a luxury problem – to have.

Additionally, as many companies and countries are investing in electrical infrastructure, the energy transition, and data centres and AI, it is important to strike a balance – given the available resources – and focus on the right markets at the right time.

Joe: What are the biggest trends and developments you’re seeing at Panduit currently?

Jens: The coming together of AI, increasing power consumption, and the energy transition – this all forms an area where the various divisions within our business are overlapping and interconnected. You have the white space and the grey space in the data centre, and in the grey space you have a lot of infrastructure installation applications where our industrial electrical business is coming in nicely. Thanks to our broad portfolio of solutions, we can offer system integrators and end customers alike better combinations for efficient and effective electrical and network infrastructure.

Another trend, given all the ongoing geopolitical and macroeconomic developments globally, including with tariffs, is that we see a strong need to further localise the supply chain. We produce in Romania, for example, and we are going to increase the number of products manufactured there. Rather than sourcing everything from America, we are relocating capabilities into EMEA.

More generally, we have identified strong pockets of growth in geographical areas which we want to concentrate more on; the Nordics are certainly one of these because a lot of data centre activities are now moving to the north from the more traditional spaces. Then there are other regions, such as Saudi Arabia, where we are also currently investing heavily.

The entrance to the event
Jens greeting the invitees

Joe: What are some of the most pressing demands from your data centre customers?

Jens: Definitely the on-demand availability of products and, therefore for us, the localisation of the supply chain. Let me give you an example: Right now, there is very high demand for fibre optic cable guides, which we normally produce in our North American plants. We now see opportunities to manufacture such important components for AI data centres in Europe as well.

Joe: How have infrastructure requirements evolved, especially with the rise of AI workloads requiring higher power densities?

Jens: We are now talking about 100 kilowatts plus of power per rack. There’s even talk of some early developments of one-megawatt racks, which is crazy. With tonnes of GPUs going in there, you can imagine the power requirement and the heat dissipation. Cooling is certainly an area where the industry has an increasing challenge.

Not just this, but I was talking to a professor recently and they are currently looking into using data centres for renewable or semi-renewable heating purposes – taking the heat that is created within the data centre environment and using that as a source for remote heating of households. There’s a development in former East Germany, to give you one example, which is going to be a big one.

Joe: With Europe’s sustainability targets, how is Panduit helping reduce its overall environmental impact?

Jens: We are particularly challenged in Europe regarding regulations and laws. The company itself is quite involved in the sustainability space and we are EcoVadis Silver awarded, meaning we are in the top 15% of assessed companies globally for sustainability. Thanks to the commitment of our Executive Chairman, we consider this to be a holistic approach to environmental stewardship, ranging from sustainable manufacturing processes and energy-saving projects in our plants to the use of environmentally friendly materials in our products. We recently launched an eco-friendly cable tie that is made from 100% waste materials and meets our quality standard for nylon 6.6.

Joe: Looking ahead, how do you see the future of your market developing?

Jens: According to some of our market ecosystem partners – and we work with many of the large logistics and distribution companies – there is no other company like Panduit that offers such a wide and deep portfolio when it comes to data centre applications, commercial network infrastructures, or industrial electrical infrastructures. For the foreseeable future – the next five years or so – we see a strong, unprecedented, continuous demand on the data centre side and we are very geared up for it. There are more opportunities than challenges.

Panduit, panduit.com

Jens giving his ‘Building the Future’ presentation

SUBSCRIBE TO OUR MAGAZINE AND NEWSLETTER TODAY

Subscribe free today to receive your digital issue of Data Centre & Network News, along with our weekly newsletters featuring the latest breaking news and exclusive offers from industry leaders.

SCAN ME TO SUBSCRIBE

WHY LOCAL INTERNET TRAFFIC MATTERS MORE THAN YOU THINK

Mike Hellers, Product Development Manager at LINX, and Mike Hoy, Chief Technology Officer at Pulsant, explore how Scotland’s reliance on distant internet hubs undermines performance, security, and sustainability.

Imagine sending a message to someone next door, only for it to be routed via London or Amsterdam first.

That’s what happens to much of Scotland’s internet traffic today. Data often leaves the country before reaching its destination, creating unnecessary delays, higher costs, and raising serious questions about privacy, resilience, and digital sovereignty.

Hosting and routing traffic within Scotland is the missing piece needed to unlock faster speeds, stronger security, and greater economic inclusion.

WHY LOCAL ROUTING MATTERS

Most Scottish internet traffic is still routed via major hubs in London, Manchester, or Continental Europe. This leads to slower performance for consumers and real business consequences for organisations, especially in rural regions where only half of homes currently have gigabit-capable broadband. Longer data journeys also consume more energy, enlarge carbon footprints, and increase vulnerability to faults or cyberattacks.
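To put rough numbers on the performance point, here is a minimal, illustrative Python sketch of the propagation delay added by hairpinning traffic through a distant hub. It uses the standard approximation of roughly 5 µs per kilometre for light in optical fibre; the route lengths below are assumptions for illustration, not measured paths.

```python
# Back-of-envelope sketch: route length sets a hard floor on latency.
# The route lengths below are rough, assumed figures for illustration only.

FIBRE_US_PER_KM = 5.0  # approximate one-way propagation delay in fibre, microseconds per km

def rtt_ms(route_km: float) -> float:
    """Round-trip propagation delay for a sender-to-receiver path of the given length."""
    return 2 * route_km * FIBRE_US_PER_KM / 1000.0

routes = {
    "Edinburgh -> Edinburgh (local exchange)": 50,     # assumed in-country path
    "Edinburgh -> London -> Edinburgh": 1300,          # assumed hairpin via London
    "Edinburgh -> Amsterdam -> Edinburgh": 2200,       # assumed hairpin via Amsterdam
}

for name, km in routes.items():
    print(f"{name}: ~{rtt_ms(km):.1f} ms propagation RTT")

# Queuing, routing hops, and equipment add more on top; propagation delay is
# only the floor, but it is a floor that local peering removes.
```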

The infrastructure already exists. Data centres such as Pulsant’s, where LINX first operated in Scotland, prove that local routing is viable. However, wider adoption requires collaboration between industry and government to promote regional peering and keep traffic in-country. This is crucial as AI adoption surges, creating a sharp rise in data storage, processing, and streaming demands.

PRIVACY AND REGULATION

Many organisations assume using a Scottish data centre guarantees data sovereignty; it doesn’t. Data may still cross borders during updates, backups, or when using cloud services. For highly regulated sectors like finance and healthcare, this lack of transparency poses legal and ethical risks.

These sectors need more than storage assurances; they need visibility and control over where data travels.

SCOTLAND’S OPPORTUNITY

By enabling clearer routing policies, public-private collaboration, and greater transparency over data flows, Scotland can boost performance, protect sensitive information, and support a greener, fairer digital economy. The infrastructure is here. Now it’s time to keep more data in Scotland, where it belongs.

LINX, linx.net | Pulsant, pulsant.com

UPS, POWER AND DISTRIBUTION

BAUDOUIN: POWERING THE DATA ECONOMY WITH SPEED AND RELIABILITY

Baudouin explores how a century of engineering expertise translates into gensets built for the uncompromising demands of modern data centres.

As the global data centre landscape accelerates, power continuity has become the ultimate benchmark of operational excellence. With workloads increasing exponentially and uptime measured in milliseconds, the demand for high-performance, mission-critical backup power has never been greater.

For over a century, Baudouin has been engineering power solutions that keep essential infrastructure running without interruption. Today, the company is redefining backup generation for the digital age – delivering diesel gensets purpose-built for data centre applications where reliability, performance, and responsiveness are non-negotiable.

BACKUP POWER FOR MISSION-CRITICAL INFRASTRUCTURE

In a data centre, every second counts. Grid fluctuations or even brief power drops can lead to costly service disruption. Baudouin gensets are designed to guarantee seamless continuity between mains and generator supply, ensuring instant start-up and stable voltage and frequency under the most demanding load conditions.

Built for Tier III and Tier IV installations, Baudouin generator sets combine outstanding transient performance, high load acceptance, and redundant start capability. Fully compliant with ISO 8528-5 G3 standards and pre-approved by the Uptime Institute, they deliver the responsiveness that security operators require to maintain uninterrupted operations.

ENGINEERING EXCELLENCE AT SCALE

Baudouin’s data centre range covers outputs from 2,000 to 5,250 kVA, offering class-leading power density that enables smaller footprints and simplified integration. Each unit is equipped with intelligent electronic fuel injection, advanced turbocharging, and high-efficiency intercooling for optimal response and durability. A reinforced H-Beam chassis ensures structural rigidity, while optimised NVH architecture minimises vibration and noise. Every genset is also HVO-ready, supporting the use of renewable diesel fuels that reduce carbon emissions without compromising performance – aligning reliability with sustainability.

SPEED, DURABILITY, AND INTEGRATION

Beyond technical specifications, Baudouin differentiates itself through agility. Its production capability allows short delivery times – under six months for certain configurations – helping operators overcome supply-chain bottlenecks and grid-connection delays.

Each system is engineered for flexibility, capable of customisation to meet specific site conditions or hybrid configurations, from single-unit deployments to multi-MW clusters. Combined with a worldwide network of service partners and technical support, Baudouin ensures reliability extends far beyond installation day.

BUILT TO LAST

As data becomes the world’s most critical resource, the systems that protect it must be uncompromising. Baudouin’s century of mechanical expertise, combined with continuous innovation in power generation, delivers one clear promise: reliability at the speed of data.

Baudouin — Built to Last

Baudouin, baudouin.com

POWERING THE FUTURE: HOW SILICON CARBIDE IS REDEFINING UPS EFFICIENCY

Chris Cutler, Business Development Manager – Data Centres at Riello UPS, explores how using silicon carbide (SiC) semiconductors could help the next generation of modular UPS systems ensure data centre operators can balance reliability with sustainability.

The data centre landscape is undergoing unprecedented transformation, with the seemingly unstoppable growth of AI and hyperscale computing leading to rising demand and rack densities. High energy costs are compounding these pressures, along with the rapidly fluctuating and unpredictable load profiles associated with AI applications. These factors pose huge challenges to traditional UPS architectures as manufacturers need to balance the primary purpose of an uninterruptible power supply – namely to guarantee resilience and reliability –with the sector’s desire for ever more sustainable solutions.

LOOKING BACK… AND TO THE FUTURE

Historically, uninterruptible power supplies have been manufactured using Insulated Gate Bipolar Transistors (IGBTs), a well-established, proven, and cost-effective technology.

Over recent years, incremental changes in design – such as the evolution from two-level to three-level architecture inverters or adapting filter materials – have helped UPS manufacturers keep pushing efficiency ratings upwards.

But how can we answer the industry’s call for even higher efficiency? That’s where the potential of silicon carbide (SiC) semiconductors comes in.

EXPLORING THE BENEFITS OF SiC

Silicon carbide isn’t a new technology, of course, considering its widespread adoption in the electric vehicle industry. However, for UPS manufacturers, it does offer several inherent advantages compared to silicon-based IGBTs:

• Higher efficiency and reduced switching losses — SiC components exhibit lower electrical resistance, resulting in reduced energy losses, which helps to maximise the overall efficiency of the UPS.

• Increased power density — The technology enables increased power density, making it possible to design more compact and lightweight UPS systems without compromising on overall power capacity.

• Increased thermal stability — SiC can operate at higher temperatures than IGBTs, translating to a broader operational range and reduced cooling demands. IGBTs require larger heat sinks as they dissipate more energy.

• Enhanced frequency response — SiC’s faster switching capabilities result in a more responsive UPS, crucial for handling the rapidly fluctuating load conditions typically found in modern data centres, particularly those dealing specifically with AI applications.

• Durability — The robustness of SiC and its ability to withstand high surge currents or voltage spikes reduces overall wear and tear, leading to extended component and UPS lifecycles, as well as reducing maintenance needs.

Now, we do need to acknowledge that silicon carbide components take four to six times more energy to manufacture than their silicon counterparts, which inevitably increases both production costs and the CO2 generated during manufacturing. That being said, these impacts are more than offset by the overall energy savings across the lifecycle of the UPS.

PUTTING THEORY INTO PRACTICE

Riello UPS embraced the untapped potential of SiC with our Multi Power2 range. These modular UPS systems come in two configurations: MP2 (300-500-600 kW versions) and Scalable M2S (1000-1250-1600 kW versions).

Both options are based on high-density 3U 67 kW power modules. Thanks to the use of silicon carbide components, these modules can achieve ultra-high efficiency of 98.1%. This significantly reduces a data centre’s energy consumption and running costs.
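As a rough illustration of how that efficiency figure feeds through to running costs, the short Python sketch below compares annual UPS losses at two efficiency levels. The 1 MW constant load, 94% legacy efficiency, and £0.25/kWh tariff are assumptions chosen purely for illustration; only the 98.1% figure comes from the article, and real savings depend on the actual load profile and tariff.

```python
# Illustrative sketch of efficiency-driven savings. The legacy efficiency,
# load, and tariff are assumptions for illustration, not Riello figures.

def annual_ups_losses_kwh(load_kw: float, efficiency: float) -> float:
    """Energy dissipated by the UPS itself over one year at a constant load."""
    input_kw = load_kw / efficiency   # power drawn to deliver the load
    loss_kw = input_kw - load_kw      # the difference is lost as heat
    return loss_kw * 8760             # hours in a year

load_kw = 1000        # assumed constant IT load of 1 MW
tariff = 0.25         # assumed electricity price, £/kWh
legacy_eff = 0.94     # assumed legacy IGBT-based UPS efficiency
sic_eff = 0.981       # SiC-based module efficiency quoted above

saving_kwh = (annual_ups_losses_kwh(load_kw, legacy_eff)
              - annual_ups_losses_kwh(load_kw, sic_eff))

print(f"Annual loss reduction: {saving_kwh:,.0f} kWh")
print(f"Annual cost saving:    £{saving_kwh * tariff:,.0f}")
# Cooling savings come on top: every kWh not dissipated as heat is a kWh
# the cooling plant does not have to remove.
```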

The positive characteristics of SiC also shine through in the UPS’s ability to handle rapidly fluctuating AI loads and its overall robustness.

For example, with a typical UPS, you’ll need to swap out the capacitors in years 5-7 of their service life – potentially two or three times over a 15-year lifespan. But with the durability of SiC, it is realistic to go through the entire lifespan without having to replace the capacitors at all, cutting maintenance and disposal costs.

Don’t forget the broader benefits of modular UPS too, namely ‘pay-as-you-grow’ scalability that allows a data centre to add extra modules or cabinets as and when its load requirements change, as well as hot-swappable modules that ensure zero-downtime maintenance.

PRACTICAL PROOF OF SAVINGS

A modular UPS made with SiC components will deliver data centres significant cost, efficiency, and carbon emissions savings compared to both legacy monolithic UPS and IGBT-based modular solutions.

Take Example 1, which replaces a monolithic 1 MW N+1 UPS (made up of 3 x 12-pulse 600 kVA 0.9 pf UPS) with a 1,250 kW Multi Power2 Scalable M2S:

• Total annual energy cost savings = £95,759

• Overall total annual cost savings = £117,544

• Total annual CO2 savings = 148.7 tonnes

• Total 15-year cost savings = £2,353,655

• Total 15-year CO2 savings = 1702.8 tonnes

Or in Example 2, here are the savings provided by a 1,250 kW M2S at 1 MW load versus a UPS comprising 3 x 400 kVA modular units:

• Total annual energy cost savings = £51,099

• Overall total annual cost savings = £53,839

• Total annual CO2 savings = 79.4 tonnes

• Total 15-year cost savings = £1,078,068

• Total 15-year CO2 savings = 908.7 tonnes

If you add in aspects such as the lifelong components and not having to regularly swap out capacitors, as highlighted previously, a typical data centre could save anywhere from £80,000 to £120,000 in maintenance costs alone over the lifespan of a UPS manufactured using SiC.

SUMMING UP SILICON CARBIDE

Although SiC-based UPSs may come at a higher initial cost compared to traditional IGBT, the lower cooling requirements, reduced energy consumption, and longer operational life ultimately deliver a better total cost of ownership (TCO).

And while it is by no means a silver bullet, silicon carbide will likely become the go-to choice to help UPS manufacturers meet the needs of modern data centres’ ongoing demand for higher efficiency power infrastructure.

5250 kVA POWER BUILT TO LAST

RELIABLE BACKUP POWER SOLUTIONS FOR DATA CENTRES

The 20M55 Generator Set boasts an industry-leading output of 5250 kVA, making it one of the highest-rated generator sets available worldwide. Engineered for optimal performance in demanding data centre environments.

PRE-APPROVED UPTIME INSTITUTE RATINGS

ISO 8528-5 LOAD ACCEPTANCE PERFORMANCE | BEST LEAD TIME

MEETING THE RISING DEMAND FOR POWER RESILIENCE

Karl Bateson, Key Account Manager UK at Centiel, reflects on the pressures created by rapid, AI-driven development and how UPS strategies are evolving to keep pace.

AI is driving growth in the data centre market like we have never seen before. It reminds me of the early 90s during the telecom and internet boom, or even the Gold Rush of 1849. There is a race to capitalise on the opportunities AI offers.

Every week we are seeing news of another data centre development: There is one in Middlesbrough said to be taking over the former ICI works site, another at the site of an old power station, and a further development in Scotland, to name a few. Developers are already publicising these developments to attract speculative customers. In the US, they are even re-purposing former nuclear power stations and, because data centres need strong connectivity and power connections, these sites will work well. Serious players are even building their own power stations to feed their AI data centres.

That all being said, the rise of AI has also created some volatility. As I write, the US stock market is recovering from losses last week, when AI stocks dropped amid worries regarding how they are funded.

CONCERNS AROUND ENERGY AVAILABILITY

In the UK, we see the cost concerns centred around energy prices. A 20–100-megawatt data centre project will require significant energy to power it. Liquid cooling for 100kW racks means water consumption is also a concern. The UK Government is funding some data centre projects that, as a society, we eventually pay for in tax.

The question of how to feed a gigawatt data centre remains. In Dublin, we have already seen a spotlight fall on energy consumption and water consumption, as well as the resulting restriction of new builds.

Because of the high cost of energy and real estate in the UK, we are seeing further data centre growth abroad. At Centiel, for instance, we are currently working on a 100-megawatt operation in the Americas.

DEPLOYMENT CHALLENGES

From our perspective, as a manufacturer of UPS systems, the challenges surround the deployment of huge amounts of kit ready for installation. Most data centres of this size will be a phased deployment. However, it comes down to how production can be ramped up and fit in with the rest of the production schedule.

For Centiel, our modular production makes this more straightforward: because we are not producing multiple models of UPS, we can offer short lead times. Nevertheless, to cover their bases, we are still seeing customers pre-purchase UPS systems ahead of time, mindful that other manufacturers may not be able to keep up with demand.

Suddenly, the key question is now how rapidly the equipment can be delivered – again, reminiscent of the telecom/internet boom of the 90s.

TRENDS IN BATTERY CHOICES

The other trend I am seeing with large data centre installations is for lithium-ion batteries to support the UPS. Lithium-ion is smaller and lighter than traditional VRLA batteries for short-duration runtimes and is increasingly used for grid support opportunities. Different customers have different requirements, but many are looking for short-duration autonomy for power protection, just enough to get generators going and protect the load.

Some are installing lithium ion with a view towards future grid support and bi-directional energy storage to save costs – or to harness renewable energy sources like wind and solar in the future – but I am yet to see this in action in earnest. Although expensive, the good news is that lithium-ion costs are coming down as more systems are deployed.

Nickel zinc is now also an option for battery storage to support UPS systems. Although expensive, the inert technology is safe. With US insurers driving the need to house lithium battery storage outside of the building envelope, at some point the cost of the additional building to house battery storage will cancel out the increased cost of nickel zinc.

At Centiel, our UPS technology is AI data centre ready. The modular nature of our high-availability systems also means we can easily absorb increased production into our planning capability to ensure systems are deployed rapidly.

In the rush to build AI data centres, only time will tell how we can ensure they have enough energy to feed them. As for the UPS supply, we are ready.

Centiel, centiel.com

Environmental monitoring experts and the AKCP partner for the UK & Eire.

How hot is your Server Room?

Contact us for a FREE site survey or online demo to learn more about our industry leading environmental monitoring solutions with Ethernet and WiFi connectivity, over 20 sensor options for temperature, humidity, water leakage, airflow, AC and DC power, a 5 year warranty and automated email and SMS text alerts.

projects@serverroomenvironments.co.uk

MAKING AI LOADS GRID-SAFE: REDESIGNING UPS STRATEGY FOR A NEW POWER ERA

Ricardo de Azevedo, Chief Technology Officer at ON.energy, explains why traditional UPS designs are falling short as AI campuses overwhelm utility systems.

AI data centres have quickly become the largest and fastest-ramping electrical loads ever connected to the grid. Multi-hundred-megawatt to gigawatt campuses, operating at high load factors and expanding continuously, are pushing utility systems beyond their limits. Fast load ramps and poor ride-through behaviour destabilise substations and feeders, degrade power quality, and can trigger protective trips that ripple across the network.

Put simply: The grid wasn’t designed for this kind of tightly packed, highly dynamic demand. These conditions have the potential to shift costs and reliability risks to other customers and, in extreme cases, can even contribute to blackouts.

WHY ERCOT CREATED THE ‘LARGE ELECTRONIC LOAD’ CATEGORY

In the USA, the Electric Reliability Council of Texas (ERCOT), the independent system operator that manages most of the state’s power grid, created the Large Electronic Load (LEL) framework to address this new class of demand. It treats major AI and industrial facilities as grid-sensitive resources, requiring them to demonstrate controlled behaviour during voltage disturbances and recovery events.

The LEL standard requires four things:

• First, ride-through – Large loads must stay connected through prescribed high- and low-voltage events (HVRT/LVRT) and avoid abrupt megawatt drops.

• Second, disciplined ramps – The point of interconnection (POI) profile must be rate-limited (typically to 20% per minute), both in modelling and in operation (a simple rate-limiter sketch follows this list).

• Third, power quality and stability – Projects must analyse and mitigate harmonics, flicker, and control interactions, backed by credible dynamic models.

• Fourth, evidence – ERCOT verifies behaviour through Model Quality Tests (MQT) to ensure PSCAD or PSSE models match actual POI performance.
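To illustrate the ramp requirement in the second point, here is a minimal Python sketch that clamps a facility load profile so the POI never changes faster than 20% of rated capacity per minute, with the shortfall or surplus absorbed by an in-path energy buffer. The campus size, load steps, and one-second control interval are assumptions for illustration; this is a conceptual sketch, not ERCOT’s specification nor any vendor’s implementation.

```python
# Conceptual ramp limiter: the facility may step its compute load quickly,
# but the profile seen at the point of interconnection (POI) is rate-limited,
# with the difference supplied or absorbed by an in-path energy buffer.
# All parameters below are assumed values for illustration only.

def ramp_limited_poi(load_profile_mw, rated_mw, max_ramp_pct_per_min=20.0, dt_s=1.0):
    """Clamp a facility load profile so the POI never ramps faster than
    max_ramp_pct_per_min of rated capacity; return (poi_mw, buffer_mw)."""
    max_step = rated_mw * (max_ramp_pct_per_min / 100.0) * (dt_s / 60.0)
    poi_mw, buffer_mw = [], []
    poi_now = load_profile_mw[0]
    for demand in load_profile_mw:
        # Move the POI toward the facility demand, but no faster than allowed.
        delta = max(-max_step, min(max_step, demand - poi_now))
        poi_now += delta
        poi_mw.append(poi_now)
        buffer_mw.append(demand - poi_now)  # +ve: buffer discharges, -ve: charges
    return poi_mw, buffer_mw

# Example: an assumed 300 MW campus stepping a 100 MW training cluster on and off,
# sampled once per second.
profile = [150.0] * 30 + [250.0] * 120 + [150.0] * 30
poi_mw, buffer_mw = ramp_limited_poi(profile, rated_mw=300.0)

print(f"Largest 1 s POI step: {max(abs(b - a) for a, b in zip(poi_mw, poi_mw[1:])):.2f} MW")
print(f"Peak buffer contribution: {max(buffer_mw):.1f} MW")
```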

These expectations are bringing rigour to how large loads connect, but they’re also exposing the weak points of today’s standard data centre and industrial power designs.

THE WEAK LINK: TRADITIONAL UPS AND FACILITY POWER DESIGN

At the heart of the issue lies the traditional uninterruptible power supply (UPS). Originally designed to protect IT systems, legacy UPS systems can react poorly to grid events. When upstream voltage sags, they often transfer off the grid, instantly dropping load at the POI. To ERCOT, that looks like a sudden load drop, exactly the opposite of the desired ride-through requirements and controlled ramps.

Attempts to compensate with parallel batteries or microgrids don’t actually address the problem. A parallel-connected BESS doesn’t sit in the current path to the facility, so it can’t filter upstream transients or guarantee ride-through for downstream equipment.

Microgrids and hybrid generator systems face similar limitations. Turbines and engine-gensets have mechanical ramp constraints; they cannot follow the fast load steps common in AI and compute clusters. Unbuffered load swings cause frequency deviations, torque oscillations, and trips.

THE INTERCONNECTION BOTTLENECK

Beyond technical performance, the interconnection process has become a roadblock. Every new large load arrives with a unique electrical design, forcing utilities and regulators to repeat the same protection studies and dynamic modelling from scratch. PSCAD and PSSE reviews pile up in queues already stretched thin by staff shortages and record application volumes.

The absence of a standard, verifiable behaviour envelope is slowing interconnection down. What’s missing? A common technical definition for ‘grid-safe’ operation that sets clear ramp limits, voltage and frequency ride-through windows, harmonic boundaries aligned with IEEE 519, and auditable model evidence. Without that, system operators are left to treat each load as a one-off.

Whether interconnecting under ERCOT’s LEL rules or operating off-grid, the underlying physics are the same. Fast, deep load swings will continue to collide with systems designed for gradual change. A practical design must stay connected through voltage events, rate-limit what the POI or generator sees, filter transients and harmonics at the facility boundary, and prove all of this with credible, validated models.

THE ON.ENERGY APPROACH

ON.energy’s AI UPS is built to meet these challenges. Installed in series at medium voltage, it forms a controlled electrical buffer between the grid or generator and the whole facility. It maintains ride-through under HVRT and LVRT conditions, enforces the 20%-per-minute ramp profile, filters transients and harmonics before they propagate either way, and provides validated PSCAD models that are designed to meet ERCOT’s Model Quality Tests.

By addressing the behaviour at the electrical interface rather than relying on protection workarounds, this approach creates a predictable, verifiable, and repeatable way to make large electronic loads grid-safe and generator-stable.

ON.energy, on.energy

Powering data centers sustainably in an AI world

Artificial Intelligence (AI) - Friend or Foe of Sustainable Data Centers?

Data centers are getting bigger, denser, and more power-hungry than ever. The rapid emergence and growth of artificial intelligence (AI) only accelerates this process. However, AI could also be an enormously powerful tool to improve the energy efficiency of data centers, enabling them to operate far more sustainably than they do today. This creates a kind of AI energy infrastructure paradox, posing the question: Is AI a friend or foe of data centers’ sustainability?

In this Technical Brief, Hitachi Energy explores:

• The factors that are driving the rapid growth in data center energy demand,

• Steps taken to mitigate fast-growing power consumption trends, and

• The role that AI could play in the future evolution of both data center management and the clean energy transition.

HOW CRITICAL POWER SYSTEMS ARE ADAPTING TO AI

Domagoj Talapko, Business Development Manager at ABB Electrification, explains how new UPS strategies are helping meet growing power and reliability requirements.

The growth of artificial intelligence (AI) workloads is changing data centre power requirements. Industry forecasts indicate that generative AI could require more than 100 GW of power by 2028, up from around 10 GW in 2024. This shift reflects the fundamental difference between AI and traditional computing workloads.

AI workloads using GPUs consume more power than traditional CPU-based computing. Where a large data centre might have operated at 30 MW, current AI-focused builds target 150 MW, with future facilities planned at 500 MW. Some facilities now exceed one gigawatt in total capacity.

Rack density is also higher. While traditional data centres operate at power densities of 5–10 kW per rack, AI facilities can operate at 30–50 kW per rack. Some high-performance computing clusters already exceed 100 kW per rack, and one megawatt per rack is on the horizon.

Workloads are different too. AI can generate sudden, unpredictable power spikes, which require fast, resilient UPS systems and flexible distribution.

THREE KEY AI PRIORITIES

The top priority remains reliability. AI training runs can take weeks or months, and a single power interruption can result in massive computational losses and costs.

Power density management comes a close second. Concentrating so much power in small footprints creates thermal and electrical challenges that traditional solutions were not designed to handle.

Scalability is the X factor. With AI infrastructure growing exponentially, power protection systems must be able to scale while maintaining the availability of ongoing operations.

Recent deployments demonstrate how closer collaboration is driving innovation that addresses these priorities.

In the US, for example, Applied Digital has expanded its partnership with ABB for its Polaris Forge 2 campus in North Dakota, a 300-megawatt facility phased across two 150-megawatt buildings scheduled to come online in 2026 and 2027. The collaboration differs from conventional projects in structure: Rather than completing design work before engaging with equipment suppliers, ABB and Applied Digital teams collaborated from the start alongside architects and other engineers. This not only enabled them to introduce a medium-voltage architecture with MV UPS and power distribution, it also gave them the opportunity to optimise technical requirements from the outset.

EFFICIENCY DELIVERS COST AND CARBON SAVINGS

AI data centres consume significant amounts of energy, so every efficiency improvement translates into valuable cost and carbon reductions. Modern UPS systems can achieve efficiency levels of 96–99% in eco-mode or online mode, compared with 85–92% for older technologies.

Beyond the UPS itself, intelligent power distribution systems minimise conversion losses, while advanced cooling integration optimises the entire thermal management system. Predictive analytics identify inefficiencies before they impact operations, allowing operators to maintain peak performance across the infrastructure.

SCALABILITY AND FLEXIBILITY NOW DRIVE DESIGN DECISIONS

Organisations planning new AI data centres should start with scalability and flexibility as core design principles. AI technology is evolving and power infrastructure must be able to adapt.

Modular, scalable solutions allow growth without major overhauls. Systems with proven reliability in high-density applications reduce risk during critical deployments. The technical and strategic decisions that shape long-term performance require understanding both electrical engineering and AI application requirements. Our team at ABB provides frameworks and case studies from successful deployments to support this planning process.

ABB, abb.com

ON THE GROUND AT SCHNEIDER ELECTRIC’S INNOVATION SUMMIT IN COPENHAGEN

Joe from DCNN recently attended Schneider’s flagship event in Denmark, where the company hosted a series of keynote sessions, press briefings, and technical deep-dives. As part of his trip, Joe also interviewed Steven Carlini, Chief Advocate, Data Center and AI at Schneider Electric. Here’s our round-up of how it all unfolded.

Last October, I was invited to Copenhagen for Schneider Electric’s annual Innovation Summit, a two-day gathering held at the heart of Denmark’s energy ecosystem. The event brought together more than 5,000 professionals, policymakers, and C-suite leaders to explore the future of energy and digital infrastructure, and this scale was certainly felt within the bustling, lively atmosphere around the Bella Center.

In his opening keynote, CEO Olivier Blum (pictured above) took to the stage to outline a vision for energy that is “intelligent, resilient, and sustainable.” The emphasis was clear, setting the tone for the Summit as a whole: Electrification, automation, AI, and digitalisation are reshaping how we build and operate data centres and wider industrial ecosystems.

KEY THEMES AND CONVERSATIONS FROM THE EVENT

Across the main stage, breakout sessions, and exhibition hall, speakers explored how Schneider’s EcoStruxure and partner-driven solutions are enabling smarter, more efficient power and cooling infrastructures for today’s most demanding workloads. A highlight was the discussion around AI-ready data centre architectures and the concept of so-called “AI factories” that couple massive GPU clusters with advanced thermal management.

Beyond just data centres, Copenhagen was alive with conversations on decarbonisation, digital twins, grid edge resilience, and the power of ecosystems in accelerating sustainable transformation. The innovation hub and exhibition gave us hands-on access to the latest electrification platforms, automation tools, and AI-enabled operational technologies, making complex tech tangible for all attendees from Schneider’s various markets.

A SIT-DOWN WITH STEVEN CARLINI

As part of my visit, I was able to gain some time with Schneider’s Steven Carlini to unpack what the prevalent themes from the event mean for data centre operators, as well as discussing the technologies shaping the industry’s future. Here are the key moments from our conversation:

Joe: Thank you for taking the time to speak with us today, Steven. How are you finding the Innovation Summit so far?

Steven: It’s been great! It’s been interesting to get not just the data centre perspective, but the ‘entire perspective’ of all the different businesses [that Schneider works with]. I think Schneider goes out of its way to try to balance it all.

But regarding the event overall, I’m just shocked at how many people are here and the engagement of them all. It’s not like they’re ‘just here’; they’re very, very engaged. There’s lots of good discussion going on.

Joe: Have there been any standout discussions or points that you’ve picked up on?

Steven: I’ve had a few customer meetings and one of the customers was talking about different architectures for data centres: adding batteries, eliminating generators, and moving workloads around. That was one of the more interesting things.

The NVIDIA people that were here are trying to get a handle on the different CDUs and how to deploy the different versions. They were wondering about the one-and-a-half-megawatt CDU and what CDU they’ll use for their next generation – developing larger CDUs to run the bigger pods, or multiple pods.

It’s also amazing how many people already know about this 800 volts power that we announced. We’ve been talking about liquid cooling for a long time and that’s finally starting to happen, but 800 volts is kind of new, and a lot of people already knew that was going on.

The model data centre in Schneider’s ‘innovation hub’

Joe: I noticed sustainability has come up a lot throughout this event. Do you think there’s pressure to decarbonise as quickly as possible, or is it more about thinking long term?

Steven: Two years ago, I would say that was the focus. When we had this conference back then, we were talking more about sustainability. This time, we’re talking more about the AI race. There is still a need to decarbonise, and a lot of companies are focused on that, but there are a lot of companies that are not like they were before. There were a lot of data centres which were actually holding up construction because they didn’t have the approved green solutions. Now you don’t see that anymore. It’s “how do we deploy as fast as possible?” We’re seeing things like bridge power, which is power we can deploy now that may not be as sustainable or as carbon-free as we want, but we can use it for now until we get other things in place. You’re starting to see this kind of ‘downplaying’ of net zero commitments.

Joe: Looking to the future, what excites you the most about where we’re heading in the data centre world?

Steven: The move to SMRs – a solution that runs on recycled uranium – is really exciting. The thing that was always going to hold it up was there wasn’t a refurbishment plant for this uranium, and you must refurbish it before you can use it. But OKLO is now building one of those, spending $1.6 billion (£1.2 billion). I think we’re going to have at least 50 SMRs in operation by 2030; Google has an agreement with NuScale [and] the data centre companies are dying for it. In data centre alley, they just can’t get any more power into the locations they want. To build a substation or a high voltage line [with traditional power plants] could take up to 15 years, but SMRs could be the golden ticket.

It’s an exciting time. We went through decades of almost nothing, and now it all comes at once!

Schneider Electric, se.com

An ‘expert learning session’ on powering DeepL’s AI models

MITIGATING MALICIOUS DATA CENTRE ATTACKS THROUGH A TRUSTED COMPUTING APPROACH

Dennis Mattoon, Co-Chair of the Trusted Computing Group’s (TCG) Data Centre Work Group, explores how trusted computing standards aim to protect facilities against increasingly sophisticated hardware and firmware attacks.

Across the globe, data centre buildouts continue to increase in both pace and scale. Within the United States, regulations are being de-emphasised in order to expand the use of federal and brownfield sites for data centre projects across the country. At the same time, projects such as ‘Stargate’ – a private-sector-led venture which will invest approximately $500 billion (£378 billion) in capital expenditure to develop new data centres and create 100,000 jobs over the next four years – continue to be announced and actioned.

A similar picture is being painted across Europe, with cities such as London and Frankfurt emerging as major data centre hubs. In fact, the number of data centres within the UK is set to increase by almost a fifth in the coming years, in part due to the surging demand from artificial intelligence and cloud computing. As data centre projects continue to be rolled out, it’s essential that security doesn’t become an afterthought in the race to get these facilities up and running.

THE GROWING THREATS

It’s clear that data centres will continue to play a crucial role for business operations across the globe, but it’s for this reason that they remain prime targets for cybercriminals. Essentially, as the number of data centres across the world grows, so too does the threat landscape. There are a range of attacks hackers can level against data centres. These include Distributed Denial of Service (DDoS) attacks, where a hacker floods servers with massive amounts of irrelevant traffic in order to overwhelm resources and render services unavailable to authorised users. If they are able to gain access to critical data centre infrastructure, attackers can also encrypt files and hold an operator to ransom in exchange for the decryption key, while application-layer attacks that exploit vulnerabilities such as SQL injections are also commonplace.

In June 2025, internet security provider Cloudflare announced it had blocked the largest DDoS attack in recorded history, with one of its clients hit by a massive cyberattack that saw its IP address flooded with 7.3 Tbps of junk traffic. This was followed by an even larger 22.2 Tbps attack in September. The attackers exploited the User Datagram Protocol (UDP), which let them direct traffic at every port of their target and overwhelm its resources. While these were not explicitly attacks against data centres, they illustrate how the network infrastructure associated with co-location and data centre services can be significantly hampered by this attack type.

To overcome these threats and mitigate any downtime, it’s essential that operators are given the tools to enhance the security measures found within their facilities. This is where the latest computing standards and specifications that embrace trusted computing can make a difference.

THE NEED FOR TRUSTED COMPUTING

For example, operators can leverage a Trusted Platform Module (TPM), which provides a hardware-based Root-of-Trust (RoT) for data centre servers. A TPM integrated within their infrastructure can be used to prevent malicious actors from compromising systems at both the hardware and firmware level – something software alone cannot protect against.

During start-up, the TPM will measure and record the boot code, then compare these measurements to a known, trusted configuration. If unauthorised changes are discovered, it can then halt the boot process to prevent malicious software from loading.
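The logic applied at start-up can be pictured with a short, purely conceptual sketch. It uses a plain SHA-256 hash in Python rather than a real TPM or PCR interface, and the trusted hash value and firmware path are hypothetical placeholders.

```python
import hashlib
import sys

# Placeholder for the trusted "golden" measurement; a real deployment keeps this
# in protected TPM storage or a signed reference manifest, not in source code.
KNOWN_GOOD_SHA256 = "0" * 64

def measure(path: str) -> str:
    """Hash the boot code image, mimicking the 'measure and record' step."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_boot(path: str) -> None:
    measurement = measure(path)
    if measurement != KNOWN_GOOD_SHA256:
        # Unauthorised change detected: halt before malicious code can load.
        sys.exit(f"Boot halted: measurement {measurement[:12]}... does not match the trusted value")
    print("Measurement matches trusted configuration - continuing boot")

if __name__ == "__main__":
    verify_boot("/boot/firmware.img")  # hypothetical boot image path
```

In a real platform the measurements are extended into PCRs and attested to a remote verifier rather than compared in application code, but the halt-on-mismatch principle is the same.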

The TPM also provides a secure environment for the generation, storage, and management of cryptographic keys used for data encryption. However, this RoT isn’t a one-size-fits-all solution to data centre security: An interposer may still be able to position itself between the Central Processing Unit (CPU) and the TPM, causing significant damage by intercepting the legitimate control signalling between the two.

In fact, interposers can even inject their own boot code into the CPU and wield an authorisation key to fool a remote verifier and make the TPM attest the integrity of fraudulent information. This allows them to snoop, suppress, and modify vital signals and measurements in order to access and exploit secrets and information from within the data centre and weaponise it against the operator.

ENSURING GREATER SECURITY

To this end, organisations such as the Trusted Computing Group (TCG) are actively working to establish additional trust within systems and components found in data centres, focusing primarily on developing protective measures against potential interposers within a system. A number of different attacks are being assessed in order to devise ways to avoid or mitigate them. These include the feeding of compromised boot code to a CPU, impersonations of a CPU to a TPM, the suppression and injection of false measurements to a legitimate TPM, and the redirection of legitimate measurements to an attacker-controlled TPM.

As a result, a TPM will be empowered to protect the resources and communication of the CPU to which it’s bound through the use of precise measurements. It will also be able to attest both the measurements and the correct CPU instance of a given object to a verifier.

Protecting against attackers who clear Platform Configuration Registers (PCRs) in the TPM through false assertions is also in scope going forward. These measures will enable operators to trust that the components and hardware so vital to their operations are behaving as they should, without the fear that they may be weaponised by an attacker.

TCG, trustedcomputinggroup.org

KEEPING AI SAFE: WHY TRUST BEGINS IN THE DATA CENTRE

Cory Hawkvelt, Chief Product & Technology Officer at NexGen Cloud, outlines how sovereignty, regulation, and hardware-level security are reshaping the safeguards required for mission-critical AI workloads.

AI has become one of the most demanding and sensitive workloads to run inside modern data centres. What started as experimental pilots has quickly evolved into serious national infrastructure, from healthcare diagnostics and fraud detection to defence planning and secure communications. The UK Ministry of Defence’s recent investment in sovereign cloud services reflects this shift. AI is no longer a novelty; it is an operational dependency, and with that comes an entirely different set of security expectations.

Many organisations now recognise that the risks attached to AI do not resemble those of traditional IT systems. It is not just about preventing a server breach or limiting access to sensitive files; AI systems can be tampered with in more subtle ways. Inputs can be manipulated. Models can be influenced to behave in unpredictable ways. Logs and prompts can reveal far more about an organisation’s operations than teams often realise. As a result, the data centre is becoming a critical line of defence, not an afterthought.

The four principal risks to AI security stem from users, a lack of sovereignty, regulatory complexity, and the failure to secure the infrastructure at a hardware level.

THE RISK OF SHADOW AI

One of the most overlooked challenges is how easily AI can expose an organisation’s internal logic. A prompt might seem harmless, but the model’s output can unintentionally reveal strategies, processes, or decision rules that would never be written down in a document. This is why shadow AI has become such a growing problem.

Employees use public tools without understanding the exposure risk. This includes acts such as pasting proprietary source code or error logs into AI coding assistants, which can inadvertently leak intellectual property to external servers, or uploading sensitive data such as customer records, financial reports, and internal strategy decks to unapproved AI analytics or data visualisation tools for pattern detection or summarisation. Each of these can fatally compromise an organisation’s IT integrity and lead to major and expensive problems.

According to IBM, these user errors are now adding hundreds of thousands of pounds to the average cost of a breach. It is a reminder that AI security is as much about context as it is about data.

SOVEREIGNTY IS KEY TO AI SECURITY

A second major driver behind the renewed focus on security is the question of who ultimately has access to AI workloads. The term ‘sovereignty’ is frequently used, but it is often misunderstood. It is easy to assume that if data sits on a server in the UK or in the EU, then it is protected by the laws of that region. In reality, the legal jurisdiction of the cloud provider matters as much as the location of the hardware. If the operator is owned by a parent company based overseas – particularly in a country with far-reaching data access legislation – that creates a potential exposure point. For sectors such as defence, healthcare, financial services, and critical infrastructure, this is becoming a major procurement concern rather than a theoretical risk.

Data centre operators are now being asked questions that did not arise a few years ago. Customers want clarity on where operational control resides, who can access the management plane, and whether the provider is subject to foreign legal obligations. They want to know that sensitive AI workloads are not only physically local, but are also governed locally. This shift is creating new expectations for the industry and reshaping how data centres position themselves in national and regional digital strategies.

GOVERNMENT ACTION IS NEEDED

A related challenge is the accelerating pace of AI regulation. The UK is exploring a flexible, principles-based model, whilst the EU is taking a stricter, prescriptive approach. Other regions are somewhere in the middle. For global organisations, this creates a confusing and sometimes contradictory landscape.

The data centre layer can remove much of that complexity. When an infrastructure provider builds regulatory alignment into the environment itself, it gives customers a level of confidence that would be difficult to achieve independently. Rather than bolting governance onto an AI system after it is deployed, teams can work within clear, compliant boundaries from the start.

Governments need to take action to ensure compliance with the most stringent standards for AI infrastructure development, as well as its use.

SECURE HARDWARE LEADS TO SECURE AI

Security is the final piece of this puzzle, and it demands a broader view than traditional data protection. AI models do not simply store information; they also process it. They process intent, assumptions, and internal business logic. If an attacker can see those prompts or influence them, they can shape decisions in ways that are not always obvious until damage is done. This is why secure AI infrastructure must offer more than encryption, firewalls, and monitoring; it must provide isolation at the hardware level, strict access control, continuous behavioural monitoring, and clear auditability. The goal is to protect not only the data, but also the decision-making process that relies on it.

Industry standards, such as SOC 2 and ISO 27001, provide organisations with a helpful baseline. However, in environments where AI plays a mission-critical role, the requirements often extend far beyond these benchmarks. Many of the organisations we work with require infrastructure that aligns directly with their regulatory frameworks, whether that is GDPR, HIPAA, or defence-grade guidelines. The ability to design environments around these expectations from the outset has become one of the most critical factors in the safe adoption of AI.

ADAPTING TO AI SECURITY

What becomes clear from speaking with enterprises across Europe is that the most significant barrier to secure AI is rarely the model itself; it is uncertainty about the platform on which it runs. The organisations that move fastest are usually the ones that have the most confidence in their infrastructure. Unresolved questions about sovereignty, compliance, and control often hold back those who move slowest.

As AI becomes more deeply embedded in national and enterprise systems, the data centre community has a central role to play.

Safe AI will depend on environments that are transparent, sovereign, resilient, and built with security as a design principle rather than an optional extra.

The direction of travel is obvious. The future of AI will be defined by trust, not just innovation, and trust begins long before a model is trained; it starts with the infrastructure it relies on and the people responsible for keeping that infrastructure safe.

NexGen Cloud, nexgencloud.com

UNCOVER THE HIDDEN RISKS IN DATA CENTRE RESILIENCE

In July 2024, a lightning arrester failure in Northern Virginia, USA, triggered a massive 1,500MW load transfer across 70 data centres – handling over 70% of global internet traffic.

The result? No customer outages, but a cascade of grid instability and unexpected generator behaviour that exposed critical vulnerabilities in power resilience.

Powerside’s latest whitepaper – entitled Data Centre Load Transfer Event – Critical Insights from Power Quality Monitoring – delivers a technical case study from this unprecedented event, revealing:

• Why identical voltage disturbances led to vastly different data centre responses

• How power quality monitoring helped decode complex grid interactions

• What this means for future-proofing infrastructure in ‘Data Centre Alley’ and beyond

Whether you are managing mission-critical infrastructure or advising on grid stability, this is essential reading.

REGISTER HERE TO DOWNLOAD THE WHITEPAPER

WHY DATA CENTRES NEED TO RETHINK PHYSICAL SAFETY

Brian Higgins, Founder of Group 77 – a security company that provides a full range of security assessment, planning, training, and consulting services – examines how AI, cyber-physical convergence, and rising environmental pressures are reshaping safety and security across modern data centres.

Data centres have quietly become some of the most critical – and most risk-exposed – facilities on the planet. As AI and cloud workloads surge, operators are being pushed to rethink how they protect people, infrastructure, and the surrounding environment. Safety and security are no longer “support functions”; they’re central to uptime, compliance, and brand reputation.

Modern data centres are moving away from siloed cameras and badge readers towards integrated, analytics-rich platforms. Recent industry reports highlight how AI, biometrics, and advanced video analytics are being utilised to transition from reactive incident response to predictive risk management, enabling the identification of unusual behaviour, tailgating, or abnormal patterns in real time.

ADVANCED ACCESS CONTROL

Among the key trends in data centre security is the use of biometric and multifactor access control at every layer (perimeter, building, data hall, cage) to combat credential theft and social engineering. Security cameras now leverage AI-enabled video analytics that can spot loitering, piggybacking through secure doors, or vehicles stopping where they shouldn’t be, and automatically notify security operations.

These tools also contribute to enterprise-wide risk management, generating operational data used for capacity planning, SLA reporting, and incident forensics. However, the underlying AI models require tuning, governance, and strong privacy controls, especially where biometrics are in play.

A DISAPPEARING DIVIDE

Historically, facilities teams managed locks and doors, while IT oversaw cameras, access control, networks, and servers. That division of responsibility is disappearing.

Cloud-connected HVAC, power management, BMS, and physical security systems have expanded the attack surface, making tight coordination between cyber and physical security essential. Organisations now distribute responsibilities across multiple roles, reflecting the shift from a physical-security focus to one centred on cyber and network security. Over the past decade, these domains have continued to converge towards unified, enterprise-wide security risk management.

Yet, cyber and physical security remain distinct disciplines requiring different expertise. A coordinated, cross-functional team with clearly defined complementary roles is the most effective model.

A BLENDED THREAT RESPONSE

Joint cyber-physical security operations centres (SOCs), where building alarms, OT events, and cyber alerts are monitored together, allow security personnel to correlate anomalies quickly. For example, if a door-forced-open alarm in a remote data hall correlates with network anomalies on the same VLAN, the SOC can escalate the event as a potential blended intrusion rather than treating them as unrelated alerts.
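As a rough illustration of that correlation logic, the sketch below joins a physical alarm feed with network alerts using a shared zone-to-VLAN mapping and a time window. All field names, values, and thresholds are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical alert records from the physical and network monitoring feeds.
physical_alerts = [
    {"type": "door_forced_open", "zone": "data_hall_7", "time": datetime(2025, 11, 3, 2, 14)},
]
network_alerts = [
    {"type": "anomalous_traffic", "vlan": 117, "time": datetime(2025, 11, 3, 2, 16)},
]

# Hypothetical mapping of physical zones to the VLANs that serve them.
ZONE_TO_VLANS = {"data_hall_7": {117, 118}}
WINDOW = timedelta(minutes=10)

def correlate(physical, network):
    """Escalate when a physical and a network alert share a zone/VLAN and time window."""
    incidents = []
    for p in physical:
        vlans = ZONE_TO_VLANS.get(p["zone"], set())
        for n in network:
            if n["vlan"] in vlans and abs(n["time"] - p["time"]) <= WINDOW:
                incidents.append({"severity": "potential_blended_intrusion",
                                  "physical": p, "network": n})
    return incidents

print(correlate(physical_alerts, network_alerts))
```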

Security teams should treat converged threats – such as a physical intruder planting a hardware implant or a cyberattack disabling cooling systems – as a single domain. Zero-trust principles should also extend into the physical realm, including network segmentation for security devices, strong identity controls for contractors, and strict governance over remote access to OT and BMS systems. Integrating all aspects of security into a unified programme eliminates duplicated spending, improves response times, and closes exploitable gaps.

BATTERY RISK MANAGEMENT

To support 24/7 AI and cloud workloads, operators are aggressively adopting lithium-ion (Li-ion) battery systems and large-scale BESS that, while being more efficient, also introduce fire and explosion hazards. Updated fire and electrical codes now require enhanced gas detection, ventilation, sprinkler coverage, blast mitigation, and risk assessments tailored to specific chemistries. High-profile Li-ion incidents have pushed regulators and insurers to scrutinise battery safety strategies.

Specialist fire engineers and risk consultants design performance-based fire protection using advanced modelling for gas release and deflagration scenarios. Vendors that can certify Li-ion solutions to the latest codes and provide integrated gas detection, off-gassing management, and remote monitoring should be prioritised. Data centres face challenges navigating evolving standards, managing permitting delays, and ensuring first responders understand the unique behaviour of Li-ion fires.

HEALTH, SAFETY, AND ERGONOMIC RISKS

Inside the white space, risks extend beyond tripping hazards. Technicians face high noise levels, ergonomic strain, chemical exposures, and electrical hazards. Industry guidance stresses the need for robust ‘Environmental, Health, and Safety’ programmes tailored to data centres.

Best practices include formalised lockout/tagout and arc-flash programmes, especially as power densities climb and more facilities adopt medium-voltage distribution. Design projects should prioritise ergonomics, incorporating lifting aids and workflow-friendly layouts.

Private security recognises the need to address psychological safety and fatigue in 24/7 operations. With AI tools emerging, integrating EHS data into the same dashboards used for operational monitoring ensures near-misses and unsafe conditions are treated as critical signals. Mobile apps for incident reporting, sensor-based monitoring of environmental conditions, and AI-assisted analysis of near-miss data can help predict where injuries are likely.

ESG, ENVIRONMENTAL IMPACT, AND COMMUNITY SAFETY PRESSURES

Data centres consume roughly 4% of US electricity, and AI growth may double that by 2030. This brings climate and reputational risk as well as direct safety implications for surrounding communities through air pollution, water use, and diesel emissions.

Regulators, investors, and NGOs are increasingly linking ESG performance to safety expectations. Environmental reports now scrutinise generator emissions, cooling water discharge, and fire-suppression chemicals. Health-impact studies have quantified billions in public-health costs associated with data centre-related power generation, pushing operators towards cleaner power, advanced filtration, and better community engagement.

Some hyperscalers position renewable energy and high-efficiency hardware as both sustainability and resilience measures.

Security professionals can leverage this focus by participating directly in site selection and environmental impact assessments, ensuring resilience and community-risk factors are built into early design. Data centre designs should consider low-global-warming-potential suppression agents, better containment of runoff, and more efficient cooling methods such as liquid cooling.

All of this complexity amplifies a familiar problem: a shortage of skilled people. Research shows persistent labour gaps, particularly in roles requiring both IT and physical security expertise.

Security programmes can respond by creating cross-functional career paths that rotate staff through facilities, OT security, and cybersecurity roles. Tabletop exercises and full-scale drills that simulate blended incidents – cyberattacks during a fire or a simultaneous battery failure and power outage – are increasingly essential. Emergency planning technology can now model evacuation routes, fire scenarios, and physical-security design before construction.

The security landscape for data centres is being reshaped by three forces: AI-driven demand, rapid technology change in power and cooling, and rising expectations from regulators and communities. Done well, data centre safety and security becomes not a cost centre, but a source of resilience, competitive differentiation, and trust.

Group 77, group77.com

Data centres are getting bigger, denser, and more power-hungry than ever before. The rapid rise of artificial intelligence (AI) is accelerating this expansion, driving one of the largest capital build-outs of our time. Left unchecked, hyperscale growth could deepen strains on energy, water, and land — while concentrating economic benefits in just a few regions. But this trajectory isn’t inevitable.

In this whitepaper, the author explores how shifting from training-centric hyperscale facilities to inference-first, modular, and distributed data centres can align AI’s growth with climate resilience and community prosperity.

The paper examines:

• How right-sized, locally integrated data centres can anchor clean energy projects and strengthen grids through flexible demand,

• Opportunities to embed circularity by reusing waste heat and water, and to drive demand for low-carbon materials and carbon removal, and

• The need for transparency, contextual siting, and community accountability to ensure measurable, lasting benefits.

Decentralised compute decentralises power. By embracing modular, inference-first design, AI infrastructure can become a force for both planetary sustainability and shared prosperity.

REDEFINING THE EDGE: BUILDING DATA CENTRES FOR THE AI AGE

Niklas Lindqvist, Nordic General Manager at Onnec, outlines the design principles that ensure edge facilities can meet rising AI-driven performance and efficiency demands.

Artificial intelligence is no longer an idea for the future; it’s changing how businesses operate right now. As generative and agent-based AI systems grow in capability, organisations are rethinking how they store and process data. But while software is evolving fast, infrastructure is under pressure to keep up. The demand for compute power is soaring, putting strain on existing data centres. Edge computing is becoming a key solution to handling this growing need.

Yet, building more edge data centres isn’t enough. To deliver the low latency, high bandwidth, and consistent performance that AI requires, edge data centres must be carefully designed from the start. Power, cooling, and cabling need to work together as part of one holistic system. Getting that balance right is what will make these facilities reliable, efficient, and ready for the future.

AI’S GROWING IMPACT

AI is driving huge increases in compute demand across industries such as healthcare, logistics, and financial services. McKinsey expects global data centre capacity to grow by around 22% a year until 2030, with AI responsible for about 70% of that demand.

Edge data centres are at the heart of this shift. Their physical proximity to users and devices helps reduce latency and improve bandwidth, making them ideal for supporting AI applications. The global edge market is projected to reach over $300 billion (£227.7 billion) by 2026, more than double its 2020 value. But this growth can only be sustained with solid design principles focused on three essential areas: power, cooling, and cabling.

These three areas are tightly interconnected. As AI workloads drive up energy consumption, managing heat becomes a critical challenge. High-density cabling and heavy power demands can create hotspots that compromise both performance and reliability. Liquid cooling is increasingly favoured, providing greater efficiency than conventional air systems. At the same time, optimising power delivery – including integrating renewable sources and reusing energy – helps lower costs, supports sustainability targets, and reduces strain on local grids.

DESIGNING FOR THE AI EDGE

Building an AI-ready edge data centre starts with understanding the IT load. Different AI models and applications place very different demands on processing, data throughput, and latency. Getting this right early informs key decisions around power capacity, cooling systems, and network architecture.

Location also matters. The major European hubs – Frankfurt, London, Amsterdam, Paris, and Dublin (FLAPD) – still lead in connectivity and infrastructure. However, rising power constraints in these regions are driving investment to new areas. Countries such as Spain and the Nordic nations are attracting attention with abundant renewable energy, as well as more flexible planning and permitting environments.

Regulation is also increasingly shaping design choices. Schemes like the UK’s Climate Change Agreement (CCA) for Data Centres and new policies on water and energy efficiency set clear expectations, with these rules influencing everything from cooling methods to how facilities link to local power grids.

Sustainability has moved from being optional to a business priority. Operators are adopting circular design ideas, such as reusing heat for nearby buildings and recycling water in cooling systems. These measures help meet environmental targets and lower long-term costs.

Standardisation is another key factor. Using modular, repeatable designs for power, cooling, and cabling allows faster deployment and more consistent performance. Industry frameworks such as the Open Compute Project (OCP) are helping make this approach more accessible and scalable.

Finally, cabling needs attention from the very beginning. AI infrastructure generates high power and heat, requiring cabling systems that are both robust and carefully planned. Neglecting this can lead to signal loss, overheating, and expensive retrofits. To mitigate these risks, operators should collaborate closely with suppliers, follow established standards, and integrate cabling design with power and cooling systems. When done correctly, this upfront investment ensures long-term performance, reliability, and scalability.

BUILDING FOR THE LONG TERM

An edge deployment is only as strong as the partners behind it. Choosing the right design, construction, and technology partners ensures systems are efficient, compliant, and ready to grow. AI itself may even support design processes in the future, helping to model layouts and optimise energy use. However, lasting success depends on planning, not shortcuts.

As AI continues to transform how data is processed, infrastructure becomes the true enabler. Smart, efficient edge data centres are not just part of the solution; they are the foundation for the next phase of digital innovation.

Onnec, onnecgroup.com

STRUCTURED CABLING AS AN ENABLER FOR AI-OPTIMISED DATA CENTRES

Michael Akinla, Manager, North Europe at Panduit, outlines how standards-aligned fibre design helps AI SuperPods manage rising MPO density and prepare for rapid transitions in ethernet speeds.

AI training and inference fabrics are driving data centre networks to increase line rates from 400 Gb/s to 800 Gb/s, with 1.6 Tb/s the next step on the standards roadmap. At these speeds, the physical layer becomes a first-order design constraint rather than a passive utility. Structured cabling architectures provide the deterministic fibre topology, repeatable connectivity performance, and modularity needed to scale GPU clusters without continual recabling. Panduit’s structured-cabling validation with NVIDIA 800G optics shows that, when channel specifications are respected, structured cross-connects preserve link performance while materially improving deployment control and lifecycle operability.

The move to 800G ethernet in AI back-end networks is driving a shift from duplex LC to parallel-optic MPO connectivity. Current GPU servers and leaf switching platforms commonly present MPO-08 interfaces operating as eight duplex lanes (16 fibres) to deliver 800G aggregate throughput under IEEE 802.3df, using either 800GBASE-SR8 over multimode fibre or 800GBASE-DR8 over single-mode fibre.

Relative to LC duplex architectures, this increases fibre count per link by at least four times. When multiplied across a pod-scale deployment containing large numbers of GPU servers, total installed fibre density typically climbs towards around eight times that of a conventional data centre network. This is a direct consequence of lane-based optics and is not optional if bandwidth scaling is to be maintained.

FIBRE DENSITY IS EXPLODING

The rapid adoption of 400 Gb/s and 800 Gb/s network speeds requires significantly more fibre links. AI clusters rely on APC multi-fibre MPO connections for server-to-leaf links and more traditional single-mode MPO connections for leaf-to-spine links, which means fibre volumes have increased exponentially. Without a structured approach, data centres risk excessive cable congestion, more difficult maintenance, and compromised airflow.

LATENCY CONCERNS ARE LARGELY A MYTH IN AI SUPERPODS

AI back-end networks are typically deployed as tightly clustered SuperPods with short optical reaches, often less than 50 metres. Propagation delay across this distance is well below 250 nanoseconds, so any incremental length in a structured topology is negligible compared with the dominant latency contributors: switch serialisation, buffering, and FEC decoding. Panduit’s measurements confirm that structured cabling adds optical interfaces but does not create meaningful latency overhead at SuperPod distances, and that controlling slack in structured pathways can actually reduce the avoidable propagation length that accumulates in direct-connect builds.
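The propagation figure is easy to sanity-check. The sketch below assumes a typical silica-fibre group index of about 1.47 (roughly 4.9 ns per metre); the exact value varies slightly by fibre type.

```python
# Rough propagation-delay check for a short SuperPod link.
# Assumes a typical fibre group index of ~1.47; actual values vary slightly by fibre type.
C_VACUUM_M_PER_S = 299_792_458
GROUP_INDEX = 1.47

def propagation_delay_ns(length_m: float) -> float:
    """Return one-way propagation delay in nanoseconds for a given fibre length."""
    speed_in_fibre = C_VACUUM_M_PER_S / GROUP_INDEX  # ~2.04e8 m/s
    return length_m / speed_in_fibre * 1e9

print(f"50 m link -> {propagation_delay_ns(50):.0f} ns")      # ~245 ns, under the ~250 ns quoted
print(f"+5 m of structured slack -> +{propagation_delay_ns(5):.0f} ns")
```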

Structured cabling adds discrete connector interfaces, so the relevant engineering question is optical loss budget rather than latency. IEEE channel models explicitly allocate connectivity loss headroom, approximately 1.5 dB for multimode channels and ~2.5 dB for single-mode channels in standards-compliant designs.

Panduit’s testing with NVIDIA 800G SR8 and DR8 transceivers shows that when connector losses are maintained within these allocations and standard cleanliness and inspection regimes are followed, structured cabling links remain within Bit Error Rate (BER) and power-margin limits required by IEEE 802.3df.
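A simple way to apply those allocations in practice is to tally expected connector losses against the budget before deployment. In the sketch below, the ~1.5 dB and ~2.5 dB figures come from the article; the per-connector insertion-loss values and channel layout are illustrative assumptions, not Panduit specifications.

```python
# Illustrative check of a structured-cabling channel against the connector-loss
# allocations quoted above (~1.5 dB multimode, ~2.5 dB single-mode).
ALLOCATION_DB = {"multimode": 1.5, "single_mode": 2.5}

def within_budget(fibre_type: str, connector_losses_db: list[float]) -> bool:
    """Sum the planned connector losses and compare against the channel allocation."""
    total = sum(connector_losses_db)
    budget = ALLOCATION_DB[fibre_type]
    print(f"{fibre_type}: {total:.2f} dB of {budget} dB allocation used")
    return total <= budget

# Hypothetical channel: two MPO cross-connect interfaces plus two equipment-cord
# connections, each assumed at 0.35 dB insertion loss.
print(within_budget("multimode", [0.35, 0.35, 0.35, 0.35]))  # 1.40 dB -> within budget
```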

A rail-optimised topology configuration. The yellow traces show the signal path used to compare latency between (a) the leaf and spine switches and (b) the NVLink and leaf switches.

The result is straightforward: Structured cabling is channel-safe for AI workloads when engineered to spec, with performance governed by loss control, connector quality, and disciplined maintenance, not by topology choice.

IMPLEMENTING STRUCTURED CABLING FOR AI WORKLOADS

To maximise ROI and ensure reliable operation through multiple speed generations, structured cabling for AI fabrics should be designed around five practical requirements:

1. Scalability and modularity

A modular cross-connect architecture using high-density MPO trunks and patching enables deterministic scaling without re-pulling backbone fibres. This approach supports iterative pod growth and incremental speed adoption while maintaining stable fibre routing and polarity schemes.

2. Optimised pathways and fibre management

High fibre density is a mechanical and thermal issue as much as a connectivity issue. Structured cabling consolidates multiple links into high-count trunks and terminates into patch fields, sharply reducing pathway fill, improving cable-tray serviceability, and preserving airflow.

3. Reliability and reduced downtime

Structured systems enforce bend-radius protection, separate fixed trunks from equipment cords, and anchor slack in controlled storage zones. When combined with rigorous labelling and documentation in line with TIA practices, this produces traceable circuits, lowers handling-related fault rates, and reduces MTTR by speeding physical-layer isolation.

4. Future-ready, high-density connectivity

Using 16-fibre/8-lane MPO channels for 800G establishes a standards-aligned physical baseline compatible with 400G while enabling a clean uplift path to higher lane rates. Base-8 structured topologies preserve this forward compatibility and minimise stranded assets during generational upgrades.

5. Energy efficiency and sustainability

Because AI SuperPods are short-reach environments, multimode fibre is often a viable physical layer and supports lower-power multimode optics (up to 15% lower) compared with equivalent single-mode transceivers. Panduit positions structured, reusable fibre backbones as part of long-term efficiency strategy by reducing recabling churn and enabling more controlled lifecycle upgrades.

Panduit, panduit.com

23-24 SEPTEMBER 2026 NEC BIRMINGHAM, UK

A GAME CHANGING EVENT FOR THE ELECTRICAL CONTRACTING INDUSTRY

Covering everything from Lighting and Wiring Accessories to EV and Renewables.

ECN Live! 2026 is a game-changing trade show for electrical contractors, electricians, buyers, specifiers, and key stakeholders in the electrical contracting sector. Gain valuable insights into the key aspects of electrical contracting, such as lighting, circuit protection, cable management, HVAC, fire safety, EV, renewables, solar and more! ECN Live! will be co-located with EI Live!, the UK’s only dedicated smart home show, an established event now in its 15th year. Given the synergy between these two markets, electrical contractors and electricians will gain valuable knowledge about the world of smart home integration and add another string to their bow as an installer. Don’t miss this innovative co-located event that defines the standard for connection, knowledge sharing, and solutions tailored for today’s electrical professionals.

HOW IMDCs REDEFINE ENERGY MANAGEMENT FOR AI

Carsten Ludwig, Market Manager Data Center at R&M, suggests how IMDCs can deliver the energy efficiency needed to support high-density AI workloads.

Integrated modular data centres (IMDCs) offer an energy-efficient foundation for AI workloads by combining smart cooling and power distribution with compact designs.

An ‘integrated’ approach, where all units fit perfectly with each other in terms of capacity and performance, reduces PUE, supports high-density edge deployment for AI and machine learning, and enables rapid scaling. By unifying infrastructure – from cooling to power and software – IMDCs maximise performance, lower costs, and meet demanding AI requirements sustainably.

AI workloads challenge data centre energy efficiency due to their high power density, unpredictable usage patterns, and intensive cooling demands. GPUs and AI accelerators generate concentrated heat, requiring advanced liquid or hybrid cooling, while large models run for extended periods, keeping systems at full power. Bursty compute cycles and built-in redundancy further inflate energy use. What’s more, edge AI deployments often lack efficient infrastructure. Legacy systems struggling to support modern workloads compound the problem. Traditional data centres often take 18–24 months to build and are burdened by rising construction costs and supply delays. Together, these factors increase PUE, strain resources, and undermine sustainability goals. This is driving the need for integrated, modular, and AI-optimised data centre solutions that can scale efficiently.

HOW IMDCs PROVIDE A SOLUTION

What distinguishes an IMDC is the orchestration of all components – cooling units, power modules, racks, containment, and management software – to operate together as a coherent yet flexible system. Standardised interfaces keep capacity matched across modules, while monitoring and orchestration platforms provide centralised dashboards for PUE, thermal conditions, and energy use. Predictive maintenance uses AI to identify inefficiencies, adjust parameters, or trigger interventions before issues arise.

This orchestration enables holistic energy optimisation, shifting the focus from component-level gains to system-level performance. As workloads are optimised within a hybrid data centre topology, overall energy consumption can be reduced. Leveraging liquid and hybrid cooling systems alongside AI-based operations can slash cooling-related energy use by up to 40%.

It’s possible to have prefabricated, highly engineered units up and running in weeks or months while using modular design to achieve remarkable energy efficiency and system integration. Pre-engineered components ensure consistent quality, whilst also reducing site construction needs, supply chain complexity, labour requirements, and exposure to fluctuating material costs. These factors converge to yield rapid ROI and a smaller carbon footprint compared to ‘traditional’ data centres. IMDC adoption reshapes TCO in several dimensions:

1. CapEx – There are lower upfront costs thanks to factory build and modular scaling.

2. OpEx – AI-controlled cooling and power trim energy bills and predictive maintenance cuts downtime and manual oversight.

3. Deployment speed – ‘Planning-to-live’ shrinks from years to months or weeks.

4. Supply resilience – Pre-approved suppliers and standardised modules reduce risk.

5. Future-proofing – Integrated with edge and AI-ready frameworks, IMDCs are optimised for evolving computational strategies.

COOLING INNOVATIONS AND POWER OPTIMISATION

Cooling accounts for up to 40% of a data centre’s total energy consumption. IMDCs can dramatically reduce cooling energy in several ways, with liquid cooling and hybrid systems playing a key part in this:

Direct-to-chip liquid cooling handles high-density racks (100–200 kW) efficiently, syphoning heat away where air-based methods struggle. Smart airflow management – relying on containment, intelligent venting, and real-time adjustments – minimises recirculation and overcooling. AI-driven systems equipped with sensors and analytics adjust cooling parameters dynamically, continually optimising temperature and energy use. These innovations can cut cooling energy consumption by up to 40% compared to conventional data halls, pushing PUE closer to the 1.1–1.2 range.

IMDCs also integrate modular UPS systems that align capacity with workload demands. Traditional, monolithic UPS installations often operate inefficiently below peak output. In contrast, IMDCs scale UPS modules dynamically, ensuring power conversion operates near optimal load. This modularity streamlines capital investment and trims energy losses. Furthermore, by designing the entire infrastructure – cooling, power, racks – in a unified, capacity-matched ecosystem, IMDCs eliminate overprovisioned assets and associated ‘energy leaks’ that plague conventional designs.
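The capacity-matching idea can be pictured as keeping only as many UPS modules online as the load requires, so each active module sits near an efficient operating point. The module rating, target load fraction, and N+1 assumption in the sketch below are illustrative figures, not R&M specifications.

```python
import math

MODULE_KW = 250              # illustrative module rating
TARGET_LOAD_FRACTION = 0.8   # keep each active module near an efficient operating band
REDUNDANT_MODULES = 1        # simple N+1 redundancy

def modules_online(it_load_kw: float) -> int:
    """Number of UPS modules to keep active for a given IT load."""
    needed = math.ceil(it_load_kw / (MODULE_KW * TARGET_LOAD_FRACTION))
    return needed + REDUNDANT_MODULES

for load in (300, 900, 1800):
    n = modules_online(load)
    per_module = load / (n - REDUNDANT_MODULES) / MODULE_KW
    print(f"{load} kW load -> {n} modules online (~{per_module:.0%} load on each duty module)")
```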

SUPPORTING HIGH-DENSITY AI DEPLOYMENTS AND SUSTAINABILITY

By decentralising compute, organisations can support time-sensitive AI and ML tasks closer to where data is generated and consumed. AI workloads are increasingly distributed – from hyperscale clouds to the network edge. IMDCs are ideal for deploying edge AI infrastructure due to their compact and rugged design, making them suitable for less-controlled environments with a minimised footprint. Rapid, plug-and-play setups support localised, low-latency AI inference and training without reliance on distant hubs. Built-in monitoring and AI-based control systems enable real-time operations across multiple distributed units.

Modularity also accelerates exponential scaling. Prefabricated IMDC units can be deployed sequentially as demand grows, avoiding wasted space and upfront CapEx. This modular growth path is complemented by sustainable features: IMDC sites can be equipped with solar, wind, or hybrid remote power systems. Zero-water or low-water cooling can be implemented, which is particularly important in regions facing water scarcity. Prefabricated recyclable or reusable modules can be disassembled, upgraded, and redeployed, aligning with circular economy principles.

BUILDING BLOCKS FOR THE FUTURE

Of course, adoption does not come without hurdles. Some locations may restrict shipping container deployment, for example, and clear interoperability standards need to be in place. Centralised dashboards must be able to scale with distributed deployments for monitoring consistency.

However, we can safely say that IMDCs are pivotal for sustainable AI infrastructure, offering energy-efficient cooling, power-smart distribution, modularity, and AI-driven operations. The global IMDC market is projected to reach $47.7 billion (£35.3 billion) by 2033, with North America leading at around 33.1% of the market in 2024.

For organisations scaling edge AI workloads, IMDCs can help reduce PUE, control costs, cut carbon footprints, and support flexible deployments. As AI proliferates, prefabricated, intelligent units emerge as foundational building blocks for a greener, smarter digital future.

R&M, rdm.com

DEBOTTLENECKING THE GRID: DESIGNING POWER TRAINS THAT KEEP AI MOVING

Arturo Di Filippi, Offering Director, Global Large Power at Vertiv, explains how modular, grid-interactive power systems are redefining efficiency and resilience in an era of constrained energy supply.

In many regions, power is becoming one of the biggest constraints on data centre growth. Regions that once welcomed new facilities are now running into capacity limits, and grid infrastructure cannot be built fast enough to meet the pace of digital expansion, particularly in markets driving artificial intelligence.

The question for data centre managers is no longer only how to deliver reliable power inside the facility, but how to manage energy availability beyond its walls. The answer lies in how we design and control the data centre power train: the complete sequence from the energy source to the processor.

THE NEW GRID REALITY

In many regions, the grid is reaching its practical limits. In the UK, Ireland, the Netherlands, and parts of the United States, connection queues can run into years. At the same time, demand continues to grow: AI training clusters, digital twins, and high-performance computing are all increasing peak load density and variability.

Grid strain is most acute for a few dozen hours a year, yet those hours can delay new builds for months or years. The challenge is to maintain reliability without waiting for major infrastructure upgrades.

BUILDING FOR FLEXIBILITY

The first step towards greater flexibility is to treat the power train as a dynamic system, not a fixed design. Traditional topologies were built around predictable, relatively uniform workloads. Today, workloads change by the second and power systems must adapt accordingly.

A modular, scalable architecture allows operators to expand capacity or redundancy as needed. Compact uninterruptible power supply (UPS) modules that can operate in parallel are one example. They allow incremental growth without downtime and avoid the inefficiencies of over-provisioned systems.

Prefabricated and integrated power modules go further. By combining switchgear, UPS, batteries, and cooling within a single skid, they can reduce deployment time by up to 50% and make it possible to locate power capacity wherever the site layout demands.

GRID-INTERACTIVE POWER

The most significant advancement in recent years has been the emergence of grid-interactive UPS systems. These units are designed to communicate directly with the grid, responding to frequency and voltage signals in real time. When demand spikes, the UPS can temporarily supply part of the load from its batteries, reducing the draw on the grid. When demand drops, it can recharge or even return power to stabilise voltage. This kind of interaction transforms the data centre from a passive consumer into an active participant in grid management.

The control electronics behind these systems are increasingly sophisticated, enabling seamless transitions between grid and battery power without affecting IT load. They can also work alongside local renewable generation or battery energy storage systems (BESS) to form a microgrid.
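In simplified terms, a grid-interactive UPS behaves like a control loop that watches grid frequency and decides whether to discharge, recharge, or stay passive. The thresholds and battery limits in the sketch below are illustrative assumptions, not any vendor’s actual control parameters.

```python
NOMINAL_HZ = 50.0        # European grid nominal frequency
LOW_TRIGGER_HZ = 49.8    # illustrative under-frequency threshold
HIGH_TRIGGER_HZ = 50.2   # illustrative over-frequency threshold

def ups_response(grid_hz: float, battery_soc: float) -> str:
    """Decide how a grid-interactive UPS reacts to a frequency measurement.

    battery_soc is the battery state of charge as a fraction (0.0 - 1.0).
    """
    if grid_hz < LOW_TRIGGER_HZ and battery_soc > 0.3:
        return "discharge: supply part of the IT load from batteries to relieve the grid"
    if grid_hz > HIGH_TRIGGER_HZ and battery_soc < 0.95:
        return "charge: absorb surplus energy and help stabilise the grid"
    return "idle: pass grid power through and maintain float charge"

for hz in (49.75, 50.0, 50.25):
    print(hz, "->", ups_response(hz, battery_soc=0.8))
```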

INTEGRATING RENEWABLES AND LOCAL GENERATION

Renewable energy integration has become another important strategy for debottlenecking. On-site solar, wind, or combined heat and power (CHP) can reduce the load on the grid and provide additional resilience.

However, renewables are intermittent by nature. Without careful integration into the power train, they can introduce voltage fluctuations or harmonics that affect sensitive equipment. Advanced power conversion and control systems are essential to manage these interactions.

An electrical power management system (EPMS) provides the necessary intelligence; it monitors consumption and quality across the critical infrastructure and coordinates switching, load balancing, and fault response automatically. When properly configured, an EPMS can help prevent overloads, maintain efficiency, and provide the visibility operators need to plan long-term improvements.

EFFICIENCY UNDER SCRUTINY

Efficiency is no longer just a measure of good engineering; it is becoming a regulatory requirement. Power usage effectiveness (PUE) targets are tightening worldwide, and many jurisdictions are moving towards reporting obligations for energy performance and waste heat reuse.

Reducing losses within the power train is one of the most effective ways to meet these goals. Higher voltage distribution, low-loss cabling, and intelligent load balancing all contribute. Even small improvements in conversion efficiency can yield substantial reductions in total energy consumption when scaled across hundreds of racks.

In parallel, the environmental impact of standby generation is being reconsidered. As more data centres adopt grid-interactive systems and battery storage, dependence on diesel can be reduced. In many cases, it is now possible to avoid running generators entirely during short grid interruptions, using stored energy instead.

ENGINEERING COLLABORATION

Designing a power train that can operate effectively under these conditions is a multidisciplinary task. Electrical engineers, mechanical specialists, and energy managers need to collaborate from the outset.

Every decision (from busway layout to control strategy) influences efficiency and scalability. The most successful projects start with clear performance objectives: required uptime, target PUE, and carbon reduction goals. From there, the design process becomes one of balancing those parameters against cost and time to deploy.

For operators managing global portfolios, a vendor-agnostic approach is valuable. It allows the same design principles to be applied in different markets while tailoring equipment choices to local standards and grid conditions.

LOOKING AHEAD

As critical digital infrastructure expands, power management is set to become one of the defining engineering challenges of the decade. The data centres that thrive will be those that treat power not as a constraint, but as a controllable resource.

Grid-interactive systems, modular designs, and intelligent controls are creating a new kind of power train, one that’s efficient, flexible, and capable of supporting the next generation of AI-driven workloads.

The race to expand digital capacity will continue, but so will the pressure on the grid. The only efficient path forward is smarter integration between the two. Data centre engineers who can design that link will not only help their organisations grow; they will help keep the entire digital ecosystem running.

Vertiv, vertiv.com

THE HIDDEN ENERGY COST OF CONTAMINATION IN DATA CENTRES

Stephen Yates, Managing Director of IT Cleaning, outlines the relationship between cleanliness, airflow dynamics, and energy efficiency in data centres.

Data centres are the backbone of global digital infrastructure and amongst the most energy-intensive building types in operation today. In the race to achieve energy-efficient, sustainable data centres, much of the focus has been on power usage effectiveness (PUE). As demand for artificial intelligence (AI), cloud computing, and high-performance processing grows, operators face increasing pressure to deliver computational capacity, reduced PUE, and sustainable performance.

While advances in cooling system design, airflow optimisation, and renewable integration have improved overall efficiency, one often-overlooked factor continues to influence operational energy consumption: environmental contamination.

From microscopic dust particles to fibre residues, contamination within technical spaces can significantly affect airflow dynamics, cooling system efficiency, and component-level thermal performance, all of which have measurable energy management impacts.

AIRFLOW EFFICIENCY AND THE PHYSICS OF CONTAMINATION

Data centre cooling systems rely on highly controlled airflow. When particulates accumulate within filters, coils, or underfloor plenums, they alter the pressure differentials designed to deliver uniform cooling.

Research from the ASHRAE Technical Committee 9.9 (2021) shows that even modest particulate buildup can increase fan energy demand by 5–10% to maintain consistent rack inlet temperatures. This increase in fan power also worsens PUE, the key metric for operational efficiency.

Moreover, dust accumulation acts as a thermal insulator on heat exchange surfaces. A fine layer of dust as thin as 0.5mm can reduce heat transfer efficiency by 10–15%, forcing compressors to cycle more frequently and increasing cooling load. These incremental inefficiencies compound across large-scale facilities, increasing total energy consumption considerably.
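One way to see why a modest loss of airflow shows up so clearly in fan energy is the standard fan affinity relationship, under which fan power scales roughly with the cube of fan speed. The baseline figure in the sketch below is hypothetical and purely illustrative.

```python
# Fan affinity relationship: power scales roughly with the cube of fan speed.
# The baseline figure is hypothetical, used only to illustrate the compounding effect.
BASELINE_FAN_KW = 100.0

def fan_power_kw(speed_increase_pct: float) -> float:
    """Estimated fan power after a given percentage increase in fan speed."""
    return BASELINE_FAN_KW * (1 + speed_increase_pct / 100) ** 3

for pct in (2, 3, 5):
    extra = fan_power_kw(pct) - BASELINE_FAN_KW
    print(f"{pct}% more fan speed -> ~{extra:.1f} kW extra ({extra / BASELINE_FAN_KW:.1%} more power)")
```

Even a 2–3% increase in fan speed to compensate for clogged filters lands in the mid-single-digit percentage range of extra fan power, consistent with the scale of penalty cited above.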

THERMAL IMPACTS ON IT EQUIPMENT

Contamination affects not only the mechanical systems, but also the IT hardware itself. Microparticulate matter (dust) deposited on heat sinks, circuit boards, and connectors restricts local airflow and traps heat within enclosures.

Empirical studies (IEEE Transactions on Components, Packaging and Manufacturing Technology, 2020) demonstrate that contaminated components can experience 3–5°C higher junction temperatures, reducing expected service life and increasing cooling demand.

For GPU-accelerated AI workloads, where chips may operate at power densities exceeding 1,000 W per unit, even a small increase in temperature can degrade performance or cause thermal throttling. This is a direct energy inefficiency that increases power demand within the compute layer.

CLEANLINESS AS A MEASURABLE ENERGY VARIABLE

Cleanliness is increasingly being recognised as an operational performance factor. The ISO 14644-1 standard provides a framework for measuring and controlling airborne particulate concentrations in controlled environments.

This standard is now being referenced in data centre contamination control guidance (Uptime Institute, 2023) and by OEMs (like NVIDIA) in their operating manuals. Facilities maintaining ISO Class 8 conditions – which limits particles at ≥0.5 µm to 3.52 million per cubic metre – report measurable stability in airflow performance with fewer contamination-related failures.

Adherence to these cleanliness levels can reduce filter pressure drops, enhance cooling uniformity, and improve fan efficiency. In energy terms, this translates to a 1–3% improvement in cooling power demand, depending on system configuration and load.

ENERGY, RELIABILITY, AND LIFECYCLE EFFICIENCY

Cooling infrastructure typically accounts for 30–40% of total facility energy consumption (Uptime Institute, 2022). Any loss of efficiency due to contamination increases this proportion, leading to both operational and environmental consequences.

In a 10 MW data centre, a conservative 2% efficiency loss from dust accumulation could result in an additional 1.75 GWh per year in cooling energy, equivalent to approximately 500 tonnes of CO2 emissions under the UK’s 2024 energy mix.
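The arithmetic behind that estimate is straightforward and can be reproduced as a quick sketch; the carbon-intensity figure shown is simply the value implied by the article’s own numbers rather than an official emissions factor.

```python
# Reproducing the article's estimate for a 10 MW facility with a 2% efficiency loss.
facility_mw = 10
hours_per_year = 8_760
loss_fraction = 0.02

extra_mwh = facility_mw * hours_per_year * loss_fraction   # ~1,752 MWh, i.e. ~1.75 GWh
implied_intensity = 500 / extra_mwh                        # tCO2 per MWh implied by the article
print(f"Extra energy: {extra_mwh:,.0f} MWh per year (~{extra_mwh / 1000:.2f} GWh)")
print(f"Implied grid intensity: {implied_intensity:.2f} tCO2 per MWh")
```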

Beyond direct power costs, contamination accelerates mechanical and electronic wear. The European Data Centre Association (EUDCA) notes that particulate accumulation can increase hardware failure rates by up to 15% over a three-year cycle. This not only drives maintenance costs, but also increases embodied carbon from equipment replacement.

DATA-DRIVEN CONTAMINATION MANAGEMENT

The convergence of environmental monitoring and energy management offers a new pathway to efficiency. Integrating airborne particle counters with building management systems (BMS) allows operators to correlate cleanliness metrics with fan speeds, cooling loads, and energy consumption in real time.

This approach aligns with ISO 50001 (Energy Management Systems), which emphasises measurable operational control. By quantifying the relationship between contamination and energy demand, facilities can predict when cleaning or filtration interventions will deliver maximum efficiency benefit.
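As a simple illustration of that idea, the sketch below correlates airborne particle counts with CRAH fan power; the sample readings and field layout are invented for illustration and do not represent any specific monitoring platform.

```python
# Hypothetical sketch: correlate airborne particle counts with CRAH fan
# power pulled from a BMS. The sample readings below are synthetic and the
# data layout is an assumption, not a reference to any particular system.
from statistics import mean

# (particles >=0.5 µm per m³, fan power in kW) — synthetic hourly samples
readings = [
    (1_200_000, 41.0), (1_450_000, 42.3), (2_100_000, 44.8),
    (2_600_000, 46.1), (3_100_000, 47.9), (3_400_000, 49.2),
]

def pearson(xs, ys):
    # Pearson correlation coefficient, computed directly for clarity
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

particles, fan_kw = zip(*readings)
print(f"Particle load vs fan power: r = {pearson(particles, fan_kw):.2f}")
# A persistently strong correlation flags when cleaning or filtration
# interventions are likely to deliver measurable fan-energy savings.
```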

CLEANLINESS AS A SUSTAINABILITY METRIC

In sustainability terms, contamination control contributes to multiple objectives:

• Energy reduction — through improved airflow and cooling efficiency

• Asset longevity — by maintaining optimal thermal conditions and reducing component stress

• Carbon efficiency — by extending equipment lifespan and reducing embodied emissions

Cleanliness, once treated as a housekeeping concern, is now emerging as a scientific metric of operational sustainability.

As AI and HPC workloads push the physical limits of power density, maintaining clean, controlled environments will be critical to achieving both energy optimisation and performance stability (uptime).

IT Cleaning, itcleaning.co.uk

DATA CENTRE CLEANING AND CONTAMINATION CONTROL SPECIALISTS

IT Cleaning Ltd delivers specialist cleaning and contamination control services for data centres and technical environments.

Our expert teams provide:

• ISO 14644 Class 8 aligned technical cleaning — current best practice

• Full contamination control programmes and protocol implementation

• Subfloor, high-level, rack-level and equipment cleaning

• GPU & AI-environment preparation and maintenance

• Protection of equipment during refurbishment

• 24/7 nationwide service

• Comprehensive ISO 14644 Class 8 certification and reporting

ULTRA-THIN FIBRE FOR DATA CENTRES

Amanda Springmann, Datacenter Business Development Director at Prysmian, explains how reduced-diameter single-mode fibre and high-density cable designs are helping operators keep pace with rapidly escalating optical demands.

AI training clusters have pushed data centre fabrics from 400G to 800G today, with 1.6T on the horizon, meaning far more optical ports and much denser cabling per rack and per row. Data centre infrastructure – including cabling – needs to evolve rapidly to address issues related to power efficiency, heat and cooling, real estate, security, and data.

Vendors are boosting port speeds and counts for AI fabrics (for example, 64×800G on 51.2T-class switches) and are even preparing co-packaged optics and silicon photonics to tame power, further concentrating optical I/O. High-speed optics also consume more fibres per link when using parallel lanes: a 400G DR4 uses four fibre pairs on an MPO-12, while an 800G DR8 uses eight pairs on an MPO-16. Multiply that by high-radix switches and leaf–spine/fat-tree topologies and it’s clear that tray, duct, and panel space are soon exhausted.
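To put rough numbers on that, the sketch below estimates fibre demand for a fully populated 51.2T-class switch under the parallel-optic options mentioned above; the figures are indicative only, not vendor specifications.

```python
# Illustrative fibre-count estimate for a 51.2T-class switch, using the
# parallel optics mentioned above. Port counts and optics are indicative.
SWITCH_CAPACITY_G = 51_200

configs = {
    "400G DR4 on MPO-12": {"port_speed_g": 400, "fibre_pairs": 4},
    "800G DR8 on MPO-16": {"port_speed_g": 800, "fibre_pairs": 8},
}

for name, cfg in configs.items():
    ports = SWITCH_CAPACITY_G // cfg["port_speed_g"]
    fibres = ports * cfg["fibre_pairs"] * 2   # each pair = Tx + Rx fibre
    print(f"{name}: {ports} ports -> {fibres} fibres per switch")
# Either way, one fully populated switch already demands over a thousand
# fibres, before spine uplinks or additional rows are counted.
```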

As a result, operators are adopting smaller-diameter fibres and ultra-dense cable constructions to get thousands of fibres through limited trays, innerducts, and patching fields while keeping bend radii and routing manageable. AI’s east–west traffic and port evolution are forcing more fibres through the same physical pathways, so more data centres are standardising on “lower-µm” fibre, high-count FlexRibbon trunks, MPO-16/MTP-16 connectivity, and high-density panels to stay within space, power, and deployment windows.

Moving from 250 µm to slimmer single-mode allows manufacturers to build smaller, lighter, higher-count cables without changing the 125 µm glass. Today, single-mode fibre as slim as 160 µm is available. Flexible ribbon designs can pack thousands of fibres into compact trunks for spine/leaf and DCI runs.

For DCI and campus interconnects, high fibre counts within standard pathways enable faster, cost-efficient build-outs and scalable capacity. In hyperscale data centres, pathway real estate and speed of deployment are especially critical.

Duct and tray space are extremely scarce. To boost inter-rack and inter-building capacity without adding new pathways, operators require cables that pack more fibres into smaller envelopes and can be installed quickly, enabling rapid turn-ups and minimal disruption.

Shrinking the coating around standard 125-µm fibres can deliver big gains where space is scarce, enabling higher fibre counts and tighter bends. When paired with bend-insensitive glass, modern coatings, and well-engineered cable designs that control micro- and macrobends, density can rise without sacrificing loss, budgets, or long-term reliability.

“Lower-µm” fibre refers to the outside diameter of the polymer coating on a standard single-mode fibre. The glass itself – the 125-µm cladding with a ~9-µm mode field diameter – doesn’t change. What has changed is the thickness of the protective coating that surrounds it. A thinner coating yields far higher fibre counts in the same space, or the same count in a slimmer tube. In practice, that means more capacity and more chances to reuse existing infrastructure instead of trenching or adding new pathways.
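The space saving follows directly from the coating geometry. As a rough comparison (assuming circular cross-sections and ignoring buffer tubes and sheath):

```python
# Cross-sectional area of a coated fibre scales with the square of its
# outside diameter; buffer tubes and sheath are ignored in this comparison.
import math

def area_mm2(diameter_um):
    radius_mm = diameter_um / 2 / 1000
    return math.pi * radius_mm ** 2

for od in (250, 200, 160):
    ratio = area_mm2(od) / area_mm2(250)
    print(f"{od} µm coating: {ratio:.0%} of the footprint of a 250 µm fibre")
# 160 µm works out to roughly 41% of the 250 µm footprint, so more than
# twice as many fibres can, in principle, share the same cross-section.
```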

Thinner, lighter cabling also improves handling. Because minimum bend radius is usually specified as a multiple of the cable’s outside diameter, reducing that diameter often permits a tighter allowed bend. There are knock-on cost benefits too: Smaller cables for the same fibre count typically use less sheath and filling compound, ship on smaller reels, and are easier to transport and install. The line-item savings may look modest, but across a large rollout they compound – and even if the fibre carries a premium, duct reuse and reduced construction frequently outweigh it.
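To illustrate the bend-radius point, the following sketch uses hypothetical cable diameters and a 15× bend-radius multiplier; neither figure comes from a specific product datasheet.

```python
# Illustrative only: minimum bend radius is commonly specified as a
# multiple of cable outside diameter (the 15x multiplier and the cable
# diameters below are assumptions, not product figures).
def min_bend_radius_mm(cable_od_mm, multiplier=15):
    return cable_od_mm * multiplier

for od in (12.0, 9.0):   # hypothetical conventional vs reduced-coating cable ODs
    print(f"{od} mm OD cable -> {min_bend_radius_mm(od):.0f} mm minimum bend radius")
# A slimmer cable of the same fibre count can therefore be routed around
# tighter corners without violating its specified bend limit.
```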

OPTIMISING DESIGNS

Thinning the coating reduces the mechanical ‘cushion’ that protects the glass from surface roughness inside buffer tubes, pressure from neighbouring fibres, thermal expansion and contraction, and accidental bends during handling. A key challenge is microbend sensitivity. The coating is the first layer that spreads localised stress before it reaches the glass; when that layer is thinner, stresses can couple more readily into the cladding, adding loss – particularly at longer wavelengths. Dual-layer coatings with tuned moduli help, as does specifying bend-insensitive glass, but cable designs must also be optimised.

Tiny, irregular pressures along the fibre cause microbends that couple light out of the guided mode. A soft, low-modulus inner primary coating acts as a cushion, spreading those point loads so the glass doesn’t follow every microscopic corrugation, while a tougher outer coating provides handling strength. Keeping the inner coating soft across temperatures is critical: If it stiffens in the cold, microbend loss rises. Tight bends (macrobends) change the mode’s effective index and allow light to leak into the cladding; a depressed-index ‘trench’ in the cladding increases confinement, pulling the field back towards the core and sharply reducing bend-induced loss. Performance depends on the trench’s depth, width, and distance from the core.

Long-term durability is another concern. The coating shields against abrasion during handling and installation; with less material between the environment and the glass, there is less margin for ageing mechanisms. That’s why serious qualification – including temperature cycling, mechanical impact, crush, and torsion testing – is essential when adopting reduced-diameter fibres. Manufacturing tolerances tighten as coatings shrink. Diameter uniformity, coating concentricity, and strip-force variability that are inconsequential at 250 µm are not inconsequential at 160 µm, because the same absolute deviation is a bigger fraction of the thickness.

CHALLENGES AND BEST PRACTICES

Data centre operators need to handle increasing volumes of fibre, often within legacy infrastructure. As cabling density increases, tray, ladder, and duct fill, cable weight, and minimum-bend-radius requirements all become harder to manage, and the risks of compression and microbends rise. Deploying hundreds or thousands of fibres also makes cable management difficult: installation takes time, and scaling up or down can be particularly challenging.

When specifying, implementing, and maintaining a data centre cabling solution, prioritise designs that fit standard ducts while increasing fibre counts. Use bend-insensitive fibres to keep attenuation low on tight routes. Prefer reduced-OD, lightweight cables to simplify handling and speed up installation in congested pathways. For DCI corridors and campus links, choose miniaturised OSP options.

Verify compatibility with mass-fusion splicing, common duct envelopes, high-density panels, and tight routing. Require demonstrably sustainable production and shipping. Finally, lock the scope to all relevant specifications and acceptance tests and maintain clear documentation to support ongoing operations.

Pair reduced coating with proven bend-insensitive performance so you can shrink cable size and boost fibre density while staying within standard loss and compatibility limits – ideal for space-constrained deployments and high-count runs.

Any good lower-µm design should meet or exceed the reliability standards of its 250-µm predecessor. In practice, that means G.657-class bend performance for low macrobend loss at small radii, well-tuned dual-layer coatings to tame microbends across temperature, and rigorously qualified cable constructions that withstand real-world stresses, backed by tight manufacturing tolerances and real data.

In short, smaller-diameter (lower-µm) coated fibres unlock capacity where space is tight – just make sure the slimmer coating comes with bend-insensitive glass, the right coating stack, and a cable design that’s been proven to last. It’s a good idea to use mature, proven high-density solutions and ask for statistical data, not just nominal values.

Prysmian, prysmian.com

23-24 SEPTEMBER 2026

EI Live!, the UK’s top trade show, teams up with ECN Live! for an all-new event experience. This unique collaboration promises unparalleled insights and opportunities in a unified venue.

The residential AV and home automation industries have relied on EI Live! as a primary event for 14 years, but as the market grows and overlaps with electrical contracting, there is an opportunity to bridge these industries. To address this, EI Live! is re-locating to the NEC Birmingham for its 2026 event and will run alongside ECN Live! This joint exhibition creates a collaborative space for AV and smart home professionals to engage directly with electricians and electrical contractors, fostering synergy across these interconnected markets.

THE END-OF-LIFE BLIND SPOT THAT SLOWS DATA CENTRES DOWN

Ian Lovell, VP of Strategy and Technical Solutions at Reconext, examines how shifting risk perceptions, tighter power constraints, and proven recovery methods are reshaping end-of-life policy inside modern data centres.

Data centres stay current by staying in motion. Hardware moves through a steady rhythm of deployment, refresh, and retirement. The process is familiar; the assumptions inside it are older than the infrastructure they manage. That is where the gap lives. Most operators already know their hardware has more life in it. They see the test results. They trust the engineering. The friction sits in parts of the organisation that are slower to update than the equipment on the floor.

Teams rarely retire drives or servers because the hardware looks risky; they retire them because older internal rules make it the simplest option. Risk departments lean on destruction receipts because they trust what they know. Internal audit follows the frameworks they inherited. Legal stays with the safest language in the policy binder. None of this reflects how far the tooling has come or how confident operators now feel about verified reuse.

Recovery work has become precise. At Reconext sites, more than 1.4 million drives have passed through complete erasure and validation with a reuse yield of 97%. NAND chips that once went straight to shredding now pass through forensic erasure, reballing, and functional test. A single recovered module avoids roughly 500 kilograms of carbon and almost 200,000 litres of water compared with new manufacture. These are consistent numbers, not projections.

Hardware itself often outlives its refresh slot. When Microsoft extended server lifespans from four to six years, the company saved $3 billion (£2.2 billion). Amazon shows similar outcomes with its six-year cycle. Performance gains have slowed across several generations. For many workloads, new equipment only brings small improvements. Once operators take a clear look, the economics of recovery become obvious.

A CHANGE IN PACE

AI workloads have changed the tempo. Power is the first constraint that hits. Cooling follows, then procurement. Racks stay full. Supply chains tighten. Teams try to keep expansion schedules steady, even when the grid gives them little room. In that environment, ending the life of good hardware early creates risks that did not exist before. Every reusable board, module, and drive becomes part of how a site stays on track.

Recovery is now part of how modern data centres hold their shape. In regions where grid pressure rises, refurbished components fill the gaps. A site without spare power still has work to process. Newly recovered SSDs and boards keep that work moving. Operators use them to stabilise older clusters and bridge procurement delays. The focus is uptime, not sustainability. People inside data centres already understand this. They know when a drive tests clean. They know when a board still has life in it. They see the histories inside systems like Proteus and the audit-ready records inside an advanced analytics platform. The capability is in place.

The challenge is bringing the rest of the organisation along. Many companies have not revised their end-of-life policies in a decade or more; some policies still assume erasure cannot be verified, while others were written when the cost of new equipment was lower and power was easier to secure. Internal teams often support reuse; they just lack the authority to move past established habits.

This creates a blind spot at the exact moment the industry can least afford one. Hardware cycles are accelerating. AI demand shapes design choices across the entire lifecycle. Power availability is unstable in several regions. Regulatory expectations around energy efficiency and circularity continue to rise. In this environment, the cost of keeping old assumptions in place adds up fast.

ADAPT OR FALL BEHIND

A quiet shift is underway in the companies that adapt earlier. Their policies now reflect what their engineers already trust. They treat second-life provisioning as a standard part of planning. They audit reused components with the same discipline applied to production hardware. They rely less on long supply chains and avoid delays that slow critical projects. They uncover value that once moved straight to disposal.

None of this requires a major reset; it requires alignment. Engineers, operations, and infrastructure teams already understand the technical side; the data supports them. The CO2 savings are measurable. The financial upside is clear. The traceability satisfies the same controls that destruction once did. The slowdown lives in the distance between what the tools can now prove and what the policies still assume.

Modern infrastructure is built on tight margins. Capacity, power, and timelines all pull against each other. In that world, letting recoverable hardware fall out of circulation is not neutral; it affects budgets, schedules, and resilience. The teams that move past old assumptions are not chasing trends; they are reducing drag.

The blind spot at end-of-life closes once the organisation updates its view of risk. A clear path for verified reuse opens options that were not available before: It keeps hardware working. It keeps projects moving. It gives operators more control in a landscape that keeps shifting.

Reconext, reconext.com

HOW FIBRE AND CABLE DESIGN ARE ADVANCING TO MEET GLOBAL CONNECTIVITY DEMANDS

Carson Joye, Product Line Manager at AFL, examines the ultra-high-density fibre innovations reshaping hyperscale and long-haul networks.

Global data consumption continues to accelerate, creating rising demand for denser and more efficient fibre solutions. AI workloads, cloud expansion, and the growth of edge computing are driving a forecasted sixfold increase in data centre interconnect bandwidth over the next five years. These trends are setting new expectations for fibre performance and density.

This article highlights the key innovations influencing high-density fibre development and the cable designs enabling the next generation of high-performance, low-latency network architectures.

HIGH-DENSITY HORIZONS

The introduction of the 13,824-fibre cable represents a major advancement in ultra-high-density cable engineering. Designed for hyperscale environments, this construction delivers exceptional capacity within a compact, 40mm outer diameter footprint. The form factor supports installation through standard two-inch conduits, allowing significant capacity upgrades without expanding existing pathways. For perspective, a 10,000-foot reel of this cable contains enough fibre length to encircle Earth’s equator (with margin remaining).
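That comparison holds up on the back of an envelope; the short sketch below verifies it using the stated reel length and fibre count (nominal figures only).

```python
# Rough check of the reel-length comparison above (figures are nominal).
REEL_LENGTH_FT = 10_000
FIBRE_COUNT = 13_824
EQUATOR_KM = 40_075   # Earth's equatorial circumference in km

total_fibre_km = REEL_LENGTH_FT * 0.3048 / 1000 * FIBRE_COUNT
print(f"Total fibre on one reel: {total_fibre_km:,.0f} km")
print(f"Margin beyond the equator: {total_fibre_km - EQUATOR_KM:,.0f} km")
# ~42,100 km of fibre, leaving roughly 2,000 km of margin.
```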

Additional innovation is reflected in the 16-fibre ribbon, designed to support next-generation, high-density, low-loss connectivity through Very Small Form Factor (VSFF) connectors. These constructions are optimised for both splicing and connectorised applications.

Fibre miniaturisation also plays a critical role in advancing high-density cable design. Reducing individual fibre diameters from 250 microns to 200 microns decreases overall cable dimensions by roughly 20%. Emerging 160-micron fibres introduce the potential for even greater packing density, creating new opportunities for high-capacity builds across metro, long-haul, and hyperscale environments.

LONG-HAUL DEPLOYMENT TRENDS INFLUENCING TOMORROW’S NETWORKS

Rapidly evolving data centre ecosystems (ranging from building-to-building interconnects to campus-to-campus links) require more than high-capacity fibre pathways. Long-haul networks must also expand quickly to support high-bandwidth, low-latency connectivity for hyperscale facilities supporting AI and cloud workloads.

For long-haul construction, air-blown and jetting installation techniques deliver notable advantages. Air-blown installation enables fibre to be propelled through microducts using compressed air, reducing engineering complexity and accelerating deployment timelines. AFL supports these requirements with Air-Blown Wrapping Tube Cable (AB-WTC) designs tailored for high-density applications. Recent advancements include an 11.3mm, 864-fibre AB-WTC optimised for jetting into 16/13mm microducts, as well as a 16.3mm, 1728-fibre AB-WTC designed for installation in 27/20mm microducts.

These designs rely on SpiderWeb Ribbon (SWR), a flexible ribbon structure that enables efficient mass fusion splicing while still allowing single-fibre access. SWR collapses without creating unused space inside the cable core, making the architecture highly suitable for ultra-dense environments where space and speed are essential.

MICROBEND RESISTANCE, COATING TECHNOLOGY, AND MODE-FIELD OPTIMISATION

Rising fibre densities introduce greater sensitivity to microbending, making microbend resistance a critical performance requirement. AFL’s microbending white paper describes dual-layer coating systems engineered to absorb mechanical stress while maintaining long-term durability. This coating structure reduces signal loss caused by microscopic pressure points frequently found in compact, high-fibre-count cable constructions.

Mode Field Diameter (MFD) also strongly influences network performance. Smaller MFD values reduce bend sensitivity at key wavelengths such as 1310 nm, improving attenuation stability across hyperscale, metro, and AI-driven environments. AFL’s MFD white paper explores how fibre geometry shapes bandwidth performance, signal integrity, and overall system resilience.

Next-generation fibre technologies will continue to emphasise density, higher throughput, and latency reduction. Emerging advancements such as multi-core fibre (designed to route multiple optical pathways through a single strand) are positioned to increase fibre density. AFL, supported by leadership from Fujikura, remains committed to developing solutions that anticipate global connectivity requirements and strengthen the performance of increasingly demanding digital ecosystems.

AFL

GLESYS ACQUIRES VERNE’S FINNISH CLOUD OPERATIONS

Verne has announced that Glesys will acquire its managed private cloud operations in Finland, including data centre facilities in Pori and Tampere.

For Glesys, the acquisition expands its cloud and IaaS offerings in the Nordic region. For Verne, it allows the company to focus on colocation for AI and enterprise workloads, including its expansion plans in Mäntsälä, announced earlier this year.

Glesys and Verne say they will work together to ensure service continuity for customers, employees, and vendors during the transition.

Glenn Johansson, CEO of Glesys, comments, “It expands our data centre footprint in the region and supports our ambition to grow cloud and colocation capabilities across the Nordic region.”

Dominic Ward, CEO of Verne, adds, “Our strategic decision to sell our managed private cloud operations enables us to sharpen our focus on what we do best: providing low-carbon, high-performance colocation services.”

Verne, verneglobal.com

SPARKLE’S BLUEMED SUBMARINE CABLE LANDS IN CYPRUS

Sparkle and Cyta have announced the arrival of the BlueMed submarine cable at Cyta’s Yeroskipos landing station in Cyprus.

BlueMed is Sparkle’s new cable connecting Italy with several countries bordering the Mediterranean and up to Jordan. It is part of the Blue & Raman Submarine Cable Systems, built in partnership with Google and other operators, that stretch further into the Middle East up to Mumbai, India.

With four fibre pairs and an initial design capacity of more than 25Tbps per pair, the system delivers high-speed, low-latency, and scalable connectivity across Europe, the Middle East, and Africa.

With the branch to Yeroskipos station, Sparkle secures a key point of presence in Cyprus, whilst Cyta gains access to the BlueMed submarine cable system, enhancing connectivity between Cyprus, Greece, and other Mediterranean countries.

BlueMed has received funding from the European Commission under the Connecting Europe Facility programme.

Sparkle, tisparkle.com

VERTIV EXPANDS IMMERSION LIQUID COOLING PORTFOLIO

Vertiv has introduced the CoolCenter Immersion cooling system, expanding its liquid cooling portfolio to support AI and high-performance computing (HPC) environments. The system is now available in Europe, the Middle East, and Africa.

Immersion cooling submerges entire servers in a dielectric liquid, providing efficient and uniform heat removal across all components. This is particularly effective for systems where power densities and thermal loads exceed the limits of traditional air-cooling methods.

Vertiv has designed its CoolCenter Immersion product as a complete liquid-cooling architecture, enabling reliable heat removal for dense compute ranging from 25kW to 240kW per system.

Sam Bainborough, EMEA Vice President of Thermal Business at Vertiv, says, “Immersion cooling is playing an increasingly important role as AI and HPC deployments push thermal limits far beyond what conventional systems can handle.”

The system is available in multiple configurations, including self-contained and multi-tank options. Each system includes an internal or external liquid tank, coolant distribution unit, temperature sensors, variable-speed pumps, and fluid piping.

Vertiv, vertiv.com

SALUTE LAUNCHES DEDICATED DTC COOLING SERVICE

Salute has announced what it describes as the data centre industry’s first dedicated service for direct-to-chip (DTC) liquid cooling operations, launched at NVIDIA GTC in Washington, D.C.

The service is aimed at supporting the growing number of data centres built for AI and high-performance computing workloads. Several data centre operators, including Applied Digital, Compass Datacenters, and SDC, have adopted Salute’s operational model for DTC liquid cooling across new and existing sites.

AI and HPC facilities operate at power densities considerably higher than those of traditional enterprise or cloud environments. In these facilities, heat must be managed directly at the chip level using liquid cooling technologies.

Erich Sanchack, Chief Executive Officer at Salute, says, “This first-of-its-kind DTC liquid cooling service is a major new milestone for our industry that solves complex operational challenges for [companies] making major investments in AI/HPC.”

Salute, salute.com

MAYFLEX INTRODUCES RETURNABLE FIBRE REEL SYSTEM

Mayflex has announced the launch of ReelCycle, a returnable reel system for cut-to-length fibre cables. Martin Eccleston, Customer Experience Manager, says, “ReelCycle is about making sustainability simple, practical, and rewarding for everyone.”

Made from 100% recycled composite materials, ReelCycle reels are also 100% returnable, helping to reduce waste and lower carbon footprints.

Customers pay a £10 deposit per reel at order. When the reel is empty, they scan the QR code to arrange a collection, with Mayflex covering the cost. Once returned, the account is credited with the deposit.

Martin continues, “Whether you’re working to ISO 14001 standards or simply want to do the right thing, ReelCycle helps you meet sustainability goals and win more projects.”

Once the composite reels have reached the end of their usable life, they will be returned to the manufacturer for recycling. The materials will then be repurposed into new products, supporting a fully circular and sustainable lifecycle.

Mayflex, mayflex.com

STL SHOWCASES MULTI-CORE FIBRE AT CONNECTED BRITAIN

STL has demonstrated its Unitube Single Jacket Indoor Optical Fibre Cable with four-core multi-core fibre (MCF) at Connected Britain 2025.

The technology places four cores within the same cladding diameter as standard single-mode fibre, maintaining a coating size of 250/200 micrometres. The cable has been designed specifically for indoor environments such as data centres, campus networks, and commercial buildings.

The cable is certified under the Construction Products Regulation EuroClass Cca-s2, d1, a1 standard, providing a high level of fire resistance for critical infrastructure. STL has also developed optical distribution units and connectivity products to complement the cable.

Key features include support for quantum key distribution to enable tamper-evident encryption, four times the throughput of legacy fibres, and higher fibre counts within a smaller footprint.

Dr Badri Gomatam, CTO at STL, says, “Our Unitube Single Jacket Indoor Optical Fibre Cable with MCF is engineered to meet the growing demands of high-capacity, secure, and future-ready networks.”

STL, stl.tech

CARRIER UPGRADES AQUAFORCE CHILLER WITH FREEBOOST

Carrier has announced the enhanced AquaForce 30XF chiller featuring its new FreeBoost technology at Data Centre World Paris 2025. The updated chiller is engineered for data centres requiring high energy efficiency, reduced noise levels, and fast capacity recovery under high ambient temperatures.

The updated AquaForce 30XF features FreeBoost technology to improve mechanical cooling performance. This results in seasonal efficiency of up to 25 under data centre conditions, enabling PUE levels below 1.2. The performance figures are both PEP ecopassport and Eurovent certified.

Aritz Calvo, Product Manager for Screw Chillers and Heat Pumps at Carrier Europe, notes, “The 30XF’s combination of higher energy efficiency and reduced power input lowers the total cost of ownership.”

The chiller provides capacity recovery within 60 to 120 seconds whilst maintaining performance up to 55°C and operating at 94dB(A). The design uses Carrier’s compressor technology, which reduces reliance on third-party components for maintenance.

Carrier, carrier.com

INFINIDAT EXPANDS INFINIBOX G4 STORAGE RANGE

Infinidat has announced the expansion of its InfiniBox G4 family of enterprise storage systems with a series of enhancements and a new, smaller form-factor model.

Eric Herzog, CMO at Infinidat, says, “We continue to expand and enhance our InfiniBox G4 family, enabling enterprise customers and service providers to store larger quantities of data more efficiently, have easier access to advanced storage capabilities, benefit from flexible capacity management, free up rack space and floorspace, and reduce energy consumption for a greener storage infrastructure at a better power cost-efficiency per terabyte of storage.”

The new InfiniBox SSA G4 F24 all-flash family features a 31% smaller physical configuration. The entry price point is also 29% lower than the original small form-factor of the InfiniBox SSA G4 and the system features a 45% reduction in power per petabyte.

Infinidat, infinidat.com

Is your Data centre ready for the growth of AI?

Partner with Schneider Electric for your AI-Ready Data Centres. Our solution covers grid to chip and chip to chiller infrastructure, monitoring and management software, and services for optimization.

Explore our end-to-end physical and digital AI-ready infrastructure scaled to your needs.

se.com/datacentres
