DCNN Autumn 2025



DCNN is the total solution at the heart of the data centre

Infinidat’s continued RAG innovation increases AI accuracy

Bill Basinas, Senior Director of Product Marketing at Infinidat explains why Retrieval-Augmented Generation (RAG) innovation is so beneficial.

Read more on page 10

NETWORKING AND CABLING

CRITICAL DATA

QUANTUM COMPUTING

Integrated Modular Data Centres: Strategic solutions for pressing issues!

DCs are facing growing challenges like rising power demands, labour shortages, rapid growth of AI workloads... Traditional approaches are often too slow, costly, and unsustainable where speed, efficiency, and scalability are required.

R&M addresses this with modular, ready-to-use solutions. These support key areas including servers and storage, computing rooms, meet-me rooms, and interconnects.

Scan to contact us.

Reichle & De-Massari (R&M)

WELCOME TO THE AUTUMN ISSUE OF DCNN!

It’s my great privilege to welcome you to our latest issue of DCNN as the newest member of the editorial team. Coming from a background in teaching, I will certainly be a fresh face to many of you, so I look forward to getting out and about, meeting you all – our wonderful readers – and continuing my dive into this fascinating and increasingly critical industry. If you haven’t already seen it, we’ve recently launched our brand-new LinkedIn newsletter, DCNN Dispatch, where you can keep up to date with all the latest in the world of data centres and networks, every single week. To join in, you can easily subscribe via our LinkedIn page.

CONTACT US

EDITOR: SIMON ROWLEY

T: 01634 673163

E: simon@allthingsmedialtd.com

ASSISTANT EDITOR: JOE PECK

T: 01634 673163

E: joe@allthingsmedialtd.com

ADVERTISEMENT MANAGER: NEIL COSHAN

T: 01634 673163

E: neil@allthingsmedialtd.com

SALES DIRECTOR: KELLY BYNE

T: 01634 673163

E: kelly@allthingsmedialtd.com

I find myself getting involved in this industry as it enters a dynamic period of both extraordinary development and existential challenges. As you will see in this issue, we find ourselves navigating environmental issues, rapid technological advancements, ever-increasing demand for data centre services, and much more. It couldn’t be a more dramatic time to follow all the latest developments, and it’s a great pleasure to be bringing it all directly to you through our coverage.

I do hope you enjoy the magazine!

Joe

STUDIO: MARK WELLER

T: 01634 673163

E: mark@allthingsmedialtd.com

MANAGING DIRECTOR: IAN KITCHENER

T: 01634 673163

E: ian@allthingsmedialtd.com

CEO: DAVID KITCHENER

T: 01634 673163

E: david@allthingsmedialtd.com

10 DCNN speaks to Bill Basinas of Infinidat to understand how the company’s RAG system can improve AI’s accuracy

14 Key Issue Paul Mongan of Davenham Switchgear explains how operators can optimise their data centres to safely and sustainably meet power demands

18 Carsten Ludwig of R&M argues the case for an IMDC approach in balancing the diversifying demands of modern data centres

22 Jad Jebara of Hyperview explores how intelligent infrastructure management can cut water use, improve efficiency, and boost resilience

26 With AI data centres growing more complex, Craig Eadie of Straightline explains why sophisticated DCIM systems can only succeed if commissioning is watertight

33 Excel’s Elevate platform brings together fibre connectivity, racks, power, cooling, and DCIM to help operators scale capacity, cut waste, and meet sustainability goals

36 Dmitry Tsyplakov of HUBER+SUHNER explains how ribbon-based cabling systems are reshaping the physical layer for speed, density, and long-term efficiency

40 Silvia Boiardi of SITA and Phil Oultram of Alkira outline why the aviation industry’s ‘digital nervous system’ demands a new networking model built for global scale

44 Warren Aw of Epsilon highlights why agile, high-capacity connectivity is the critical ingredient for resilience in an era of relentless digital demand

CRITICAL DATA

48 Sean Tilley of 11:11 Systems outlines why businesses must treat data protection and cyber resilience as inseparable priorities in the fight for operational continuity

51 Mary Hartwell of Syniti explains how governance transforms critical data from a liability into a trusted driver of business outcomes

54 Michael Vallas of Goldilock Secure argues that data resilience cannot stop at software

QUANTUM COMPUTING

57 Paul Holt of DigiCert explores why data centres must adopt post-quantum cryptography to safeguard trust, compliance, and resilience

60 Daniel Thorpe of JLL examines how quantum’s rise is creating a new class of real estate

62 Owen Thomas of Red Oak Consulting explains why quantum computing won’t replace classical systems, but will integrate with them instead

SPECIAL FEATURES

68 Show Preview
DTX London returns to ExCeL on 1–2 October 2025, marking its 20th anniversary with the theme, ‘Innovation with integrity; driving value, delivering purpose’

70 Show Preview
Reuters’ Energy LIVE returns to Houston on 9–10 December, uniting more than 3,000 executives to tackle power demand, grid modernisation, and the future of energy

72 Show Preview
The team at DataCentres Ireland gives a preview of what’s on show this 19–20 November at the RDS, Dublin

74 Bonus Feature
Hans Obermillacher of Panduit looks at how aisle containment and active cooling systems are evolving to keep pace with the rise of AI-driven rack densities

78 Bonus Feature
Robin Earl of DEHN explains why lightning and surges are on par with cyberattacks in terms of their risk for data centres

AUTUMN 2025

Show Previews: DCE Preview, TOP Conference

ASSETHUB AND ITS TO SPEED UP FIBRE ROLLOUTS IN UK CITIES

AssetHUB has partnered with full fibre provider ITS to accelerate high-speed connectivity rollouts in UK cities.

The collaboration enables altnets, enterprises, local authorities, and carriers to purchase ITS’s dark fibre assets via AssetHUB’s secure marketplace, supporting smarter planning and infrastructure reuse. The platform aims to reduce roadworks, disruption, and unnecessary overbuild, while helping network builders cut costs and speed up deployments.

“Through collaboration and sharing of assets in big cities, fibre builders can avoid unnecessary dig costs and accelerate connectivity upgrades,” says Rob Leenderts, CEO of AssetHUB.

Kevin McNulty, Strategy Director at ITS, adds that making ITS’s infrastructure discoverable at the planning stage will “support faster rollouts, reduced disruption, and greater visibility of critical fibre routes.”

ITS will also use AssetHUB’s platform as a buyer to source infrastructure for its own expansion plans, reflecting a shared commitment to collaboration and efficient fibre deployment in dense urban areas.

AssetHUB, asset-hub.co.uk | ITS, itstechnologygroup.com

CRESA LAUNCHES DATA CENTRE CAPITAL MARKETS PLATFORM

Cresa has launched a new Data Centre Capital Markets and Advisory platform, led by Michael Morris, Sumner Putnam, and Matt Deutsch, formerly of Newmark.

The team has overseen data centre transactions in more than 50 global markets, expanding Cresa’s services into advisory, transaction structuring, and capital markets for major projects. Michael, appointed President of the platform, has worked on more than 1,000 data centre real estate deals and will be based in New York.

“Michael and his team are true data centre leaders,” comments Cresa CEO Tod Lickerman, adding that the expansion comes as data centre infrastructure growth represents “one of the most important technological challenges of our time.”

Sumner Putnam joins as Managing Principal, bringing experience in site selection, lease negotiation, and colocation. The team will work with Knight Frank and support a range of clients across landlords, tenants, buyers, and sellers, alongside broader office sector work.

Cresa, cresa.com

AI BOOM TRIGGERS 160% DATA CENTRE POWER SURGE

A survey by CFP Energy warns that the AI boom is driving unsustainable growth in data centre energy use, with demand projected to rise 160% by 2030.

While most operators in Europe’s largest economies have adopted net zero strategies, many are failing to meet targets: 94% in the UK have a strategy but 22% are off-track, 90% in Germany with 30% falling short, and 86% in France with 14% missing goals.

The surge in AI power demand has already pushed Meta, Google, and Microsoft to secure nuclear energy deals, highlighting the scale of the challenge. With renewables lagging, reliance on fossil fuels remains high, undermining climate targets.

CFP Energy stresses that sustainable construction, advanced cooling, carbon offsetting, and collaboration with governments and utilities are vital. “AI must align with environmental imperatives,” says George Brown of CFP Energy, warning that companies risk falling behind without stronger efficiency and renewable commitments.

CFP Energy, cfp.energy

PLANNING APPROVED FOR NEW UK DATA CENTRE

Outline planning approval has been granted for a new 5,000 m² data centre at 45 Maylands Avenue in Hemel Hempstead, Hertfordshire.

Designed by Scott Brownrigg for Northtree Investment Management, the three-storey facility will replace a two-storey warehouse and office building, providing digital infrastructure alongside office space, a substation, car parking, and servicing areas.

The design reflects the scale of neighbouring industrial buildings, using contemporary architecture and high-quality materials to enhance the Maylands Avenue frontage. Landscaping, seating areas, and cycle-friendly access aim to improve the public realm, while access routes will separate visitors from HGV and staff traffic.

Sustainability measures include a fabric-first approach, naturally ventilated offices, and the planting of native trees and shrubs to create a buffer and wildlife habitat. Scott Brownrigg says the project will densify land use while contributing positively to the streetscape and supporting Hemel Hempstead’s growing digital economy.

Scott Brownrigg, scottbrownrigg.com

EUDCA ANNOUNCES BOARD FOR 2025/27

The European Data Centre Association (EUDCA) has announced its new Board of Directors for 2025–27, following its annual general meeting.

Lex Coors, Digital Realty, has been unanimously re-elected as President and Policy Committee Chair, with Michael Winterson continuing as Secretary General, and Laurens van Reijen of LCL Data Centres as Treasurer. Four new Vice Presidents have been appointed: Bruce Owen, Equinix; Marie Chabanon, Data4; Isabelle Kemlin, Swedish Datacenter Industry Association; and Dick Theunissen, EdgeConneX.

Matt Pullen of CyrusOne remains the EUDCA’s representative to the Climate Neutral Data Center Pact, where he continues as Chair. Committees were also renewed, with Lex Coors reappointed to lead Policy, and Marie Chabanon appointed Chair of the Technical Committee.

Michael Winterson says the EUDCA will remain “the independent voice for Europe’s data centre community,” shaping regulation, promoting efficiency and sustainability, and supporting the growth of digital infrastructure and AI across Europe.

EUDCA, eudca.org

GNM COMPLETES 400G INFRASTRUCTURE UPGRADE IN SOFIA

GNM (Global Network Management) has upgraded its point of presence in Sofia, Bulgaria, deploying the Arista 7800R3 platform with native 400G capability.

The modernisation is part of GNM’s strategy to expand its optical backbone and meet growing interconnection needs across south-eastern Europe. The Sofia node now supports high-throughput transit traffic from the Balkans, Turkey, the Middle East, and the Caucasus, with two fully independent DWDM paths via Belgrade and Romania providing diversity, automated failover, and low-latency performance.

Integrated into GNM’s meshed backbone, the site connects to hubs including Frankfurt, Amsterdam, Vienna, Warsaw, and Stockholm, and offers 100G and 400G DWDM transport, GNM-IX access, IP Transit with policy control, and Layer 2 services.

Head of Development Alex Surkov says the upgrade has already cut latency to Frankfurt by 18% for one European operator, highlighting how infrastructure investment directly improves resilience and performance for clients.

GNM, gnm.net

THE SHIFT FROM STANDBY TO STRATEGIC ENERGY MANAGEMENT

Laura Maciosek, Director of Key Accounts at Cat Electric Power Division, on why shifting backup assets into primary power is becoming essential as grid constraints intensify.

It’s safe to say the energy landscape is changing, with many prominent and significant changes having taken place in the last 24 months. The data-driven society we live in, from streaming devices and smart appliances to AI processing, continues to move demand for data centres in just one direction: up.

As data centres experience this growth, utility power is no longer a given. Today, there’s no guarantee the local electrical grid can meet these increased power needs. In fact, many utilities I’ve talked with say it’ll be three to five years (or longer) before they can bring the required amount of power online.

That puts data centre customers in a tricky position. How can they continue to expand and grow if there isn’t enough power and moving sites isn’t an option?

The answer includes rethinking power options, and that means considering the transition from using power assets for largely backup purposes to employing them as a primary power source.

That’s a big change from the status quo. If you’re in a similar position, you can read our advice on how to navigate the transition on our blog.

Whether you’re ready to make the switch from standby to prime power at your data centre today – or simply weighing options for your next development or expansion – we’re here to help. We’ll work with you to find the right combination of assets and asset management software that fulfils your power requirements reliably and cost-effectively. Connect with one of our experts to get the process started.

Caterpillar, cat.com

INFINIDAT’S CONTINUED RAG INNOVATION INCREASES AI ACCURACY

AI has advanced rapidly into the mainstream of business and society over the past few years. These are exciting times, but AI adoption comes with many challenges, and enterprise storage plays a central role in making AI truly powerful. DCNN speaks to Bill Basinas, Senior Director of Product Marketing at Infinidat, to understand how the company’s Retrieval-Augmented Generation (RAG) innovation can improve AI’s accuracy.

DCNN: Nice to be speaking with you, Bill. What is the problem with AI currently?

BB: People are embracing and benefitting from AI-powered productivity, but they are also rapidly encountering its limitations. One big limitation is accuracy, because AI models are only ever as good as the data they are fed with. This is how these models ‘learn’ and assemble the resulting responses. There are countless stories circulating about users needing to correct mistakes detected in the responses provided by AI. People are submitting data for a query and getting nonsense back – often without realising it.

The issue is that these AI models sound incredibly convincing. And many of the people relying on AI are doing so because they don’t know the answer themselves. For AI to deliver on its potential, everyone must be able to trust the results provided – simply taking an answer at face value could have consequences.

DCNN: How are these mistakes occurring?

BB: These mistakes – known as ‘hallucinations’ – occur for three main reasons. Firstly, and most importantly, the technology likely didn’t have access to the exact information that the user needed to base a correct answer on. Secondly, the volume of information it had available to learn from was inadequate. Finally, there were likely biases present in the data used to train the system in the first place.

About Bill Basinas

Bill Basinas is Senior Director of Product Marketing at Infinidat and has been in the storage industry since 1994. He is experienced and knowledgeable in all aspects of primary and secondary storage platforms and associated data protection and cyber storage protection technologies. Having worked for EMC, HPE, Legato and Avamar (both acquired by EMC), and a number of other startup companies in technical/engineering, sales and marketing, manager, and individual contributor roles, he brings a unique understanding of technology and marketing to the storage industry.

When there are limitations in the data provided, AI will effectively go on a virtual scavenger hunt and put disparate pieces of data together without proper context. However, the answer always sounds totally credible. We need to remember the old saying, ‘GIGO’ (garbage in equals garbage out), and appreciate that AI models can consume vast amounts of good and/or questionable data. Getting erroneous information back from a user query is not acceptable in an AI-infused world, especially when the data is needed to make strategic business decisions. This could seriously affect how a company adopts this technology, and it could negatively affect their ‘customers’ – whether they are internal or external. The AI industry is quickly developing solutions to these problems, and we believe Infinidat’s AI Retrieval-Augmented Generation (RAG) solutions provide the foundational element to deliver far more accurate results.

DCNN: How much of a problem is this? How much do ‘hallucinations’ affect enterprises who want to utilise AI?

BB: Not everything you read on the internet is 100% true. When you do an internet search and are shown results, you have to determine whether the sources presented provide accurate or inaccurate data – that evaluation lands on the user, who must decide whether each source can be trusted or not. AI takes that to another level, because it is simultaneously learning from all those sources without knowing what is trusted and what is not. This means results will vary depending on the sources.

Analysts have estimated that in chatbots, as an example, hallucinations occur 27% of the time, with factual errors reportedly present in 46% of generated texts. That’s staggering, and it introduces a high risk into AI projects.

In certain vertical markets – such as healthcare, financial services, manufacturing, and education – AI hallucinations could be severely detrimental. We wouldn’t want a doctor to be fed information that may lead them to conclude an inaccurate diagnosis, or a financial advisor providing information that could lead to financial losses for users managing their money.

In manufacturing, supply chains could be disrupted when orders for parts are missed, wrong parts are requested, or incorrect quantities are ordered. Education has equivalent challenges, with students leveraging AI to support their studies. Getting erroneous data and incorporating it into their work or, worse still, plagiarising it in their coursework are serious risks. The list of potential dire implications goes on.

DCNN: What does this have to do with Infinidat and enterprise storage?

BB: The key to making AI more accurate and relevant lies in the proprietary and validated, up-to-date information that an enterprise leverages – not only in learning models, but in the real-time data path of the query. The data that is learned via Large Language Models (LLMs) and Small Language Models (SLMs) exists in vector databases sitting in a storage system, and Infinidat’s solutions are well suited for this.

One challenge with vector databases is that unless the data is constantly updated, the information will be static and/or only refreshed when more data is fed into it. This information has typically been the source of all the knowledge that can be accessed by the AI model, and it’s used to refine and validate responses to any query. That’s not enough.

DCNN: What has Infinidat done to make AI adoption easier?

BB: Infinidat is right at the centre of these solutions and innovative developments, living in the heart of the data centre and providing our customers with a highly scalable platform that delivers over 17PB of enterprise-class capacity in a single rack. We already hold tremendous amounts of critical data that can be leveraged in AI solutions. Our proven platform is powered by InfuzeOS and Infinidat’s patented Neural Cache technology, delivering industry-leading low-latency performance and scale.

We were quick to identify the need for a solution to the problem of AI hallucinations, and Infinidat has developed a Retrieval-Augmented Generation (RAG) workflow deployment architecture. Leveraging a RAG / Agentic RAG solution helps overcome many of the limitations of relying only on what a vector database has learned at a particular point in time. An effectively implemented Agentic RAG solution will leverage many trusted data sources and will constantly update and refresh the data.

RAG / Agentic RAG architectures have been shown to dramatically improve the accuracy and relevancy of AI models, with up-to-date, private data from multiple company-trusted data sources. Infinidat’s solutions can host any number of workloads and thus data sets that can be leveraged. This includes unstructured and structured data, as well as vector databases themselves. More importantly, we have designed our solution to be easy to deploy and manage.
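To make the idea of a continuously refreshed vector index more concrete, here is a minimal, self-contained Python sketch. It is not Infinidat’s implementation: the embed function, the in-memory VectorIndex, and the sample sources are placeholders standing in for a real embedding model, a real vector database, and an enterprise’s trusted data feeds.

```python
# Minimal illustrative sketch only - NOT Infinidat's implementation.
# "embed", VectorIndex, and the source list are placeholders for a real
# embedding model, a real vector database, and trusted enterprise feeds.
import hashlib
import time
from typing import Dict, List, Tuple


def embed(text: str, dim: int = 8) -> List[float]:
    """Placeholder embedding: hash the text into a small numeric vector."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]


class VectorIndex:
    """Toy in-memory stand-in for a vector database."""

    def __init__(self) -> None:
        self.entries: Dict[str, Tuple[List[float], str]] = {}

    def upsert(self, doc_id: str, text: str) -> None:
        # Re-embedding on every upsert keeps the index aligned with the
        # latest version of each trusted source document.
        self.entries[doc_id] = (embed(text), text)


def refresh(index: VectorIndex, sources: Dict[str, str]) -> int:
    """Push the current state of each trusted source into the index."""
    for doc_id, text in sources.items():
        index.upsert(doc_id, text)
    return len(sources)


if __name__ == "__main__":
    index = VectorIndex()
    trusted_sources = {
        "policy.txt": "Latest HR policy text...",
        "pricing.csv": "Current price list...",
    }
    # In a real deployment this would run on a schedule or on change events,
    # so query-time retrieval always sees up-to-date private data.
    updated = refresh(index, trusted_sources)
    print(f"refreshed {updated} documents at {time.ctime()}")
```

The point of the sketch is simply that retrieval accuracy tracks how recently the index was refreshed, which is the gap an Agentic RAG workflow is meant to close.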

Many people believe that an AI infrastructure needs ‘specialised everything’ in order to work correctly, but this is just not the case. Industry analysts agree that for the average corporate environment, much of the existing infrastructure can be leveraged to meet the needs of most AI projects. I agree, and when the dust settles and enterprises look at their expenditure on expensive infrastructure, the ROI will be difficult to quantify.

Let’s use the 80/20 rule. In 20% of use cases, it may be justified, and large GPU server farms with specific networking and storage architectures will be of benefit. However, when accuracy is the most important factor, leveraging a well-architected RAG solution may negate any need to invest in new and costly enterprise infrastructure. In today’s economic climate, this is an extremely beneficial consideration.

RAG solutions can be implemented with existing Infinidat platforms, which means that enterprises can optimise the output of their AI models without the requirement to purchase specialised equipment. Our RAG solution is also flexible enough to be used in a hybrid, multicloud environment, making it an incredibly powerful strategic asset for unlocking the business value of GenAI applications.

DCNN: What is powering Infinidat’s RAG Workflow Deployment Architecture?

BB: We used a Kubernetes cluster as the foundation for running a RAG pipeline, which makes it portable, scalable, resource efficient, and highly available. We used Terraform to simplify the process of setting up a RAG system, enabling just one command to run the entire automation. Within 10 minutes, a fully functioning RAG system, hosted in the cloud, was ready to work with the data replicated from on-premises to InfuzeOS Cloud Edition (AWS and Azure).
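For readers who want a feel for what ‘one command’ automation can look like, the following is a purely illustrative Python wrapper around a Terraform run. It assumes a hypothetical ./rag-deploy directory containing a Terraform configuration for the RAG stack; it is not Infinidat’s actual automation.

```python
# Purely illustrative wrapper, not Infinidat's automation. It assumes a
# hypothetical Terraform configuration already exists in ./rag-deploy.
import subprocess
import sys


def deploy(workdir: str = "./rag-deploy") -> int:
    """Run the whole provisioning step as a single command."""
    # `terraform apply -auto-approve` provisions everything the configuration
    # describes (cluster, storage, pipeline) without interactive prompts.
    result = subprocess.run(["terraform", "apply", "-auto-approve"], cwd=workdir)
    return result.returncode


if __name__ == "__main__":
    sys.exit(deploy())
```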

In addition, the design architecture is extremely flexible. The whole structure of a RAG architecture, and the development of a RAG pipeline, is an inherently iterative process. By continuously refining a RAG pipeline with new data, enterprises can significantly enhance the accuracy and practicality of AI-model-driven insights, maximising the benefits of generative AI technology. We can also adjust the overall implementation to take advantage of elements that may already exist in the enterprises’ infrastructure, such as vector databases.

DCNN: Why is RAG so strategically important?

BB: RAG workflows have emerged as a key tool for the continued refinement of data queries. The ability to continually update these environments augments generative AI models with an enterprise’s active, private data, helping to produce correctly informed responses to live queries in tools like ChatGPT.

What this means in practice is that RAG enables AI learning models – the LLM or SLM – to reference information and knowledge that extends beyond the data on which they were trained. This approach not only customises general models with a business’s most updated information, but it also eliminates the need to continually re-train AI models, which is resource intensive.
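As a rough illustration of the retrieval step described here, the sketch below ranks stored documents against a query and builds an augmented prompt. It is a toy example, not Infinidat’s code: the hash-based embed function, the in-memory document store, and the stubbed model call are all stand-ins for a real embedding model, vector database, and LLM or SLM.

```python
# Toy query-time RAG sketch - not Infinidat's code. The hash-based "embed"
# mirrors the refresh sketch above; the language model call is stubbed out.
import hashlib
import math
from typing import Dict, List


def embed(text: str, dim: int = 8) -> List[float]:
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]


def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: Dict[str, str], k: int = 2) -> List[str]:
    """Rank stored documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(docs.items(), key=lambda kv: cosine(q, embed(kv[1])), reverse=True)
    return [text for _, text in ranked[:k]]


def answer(query: str, docs: Dict[str, str]) -> str:
    context = "\n".join(retrieve(query, docs))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # A real system would send `prompt` to an LLM or SLM here; the model sees
    # current private data without having been retrained on it.
    return prompt


if __name__ == "__main__":
    docs = {
        "policy.txt": "Annual leave is 28 days.",
        "pricing.csv": "Tier 1 costs 100 GBP.",
    }
    print(answer("How many days of annual leave do staff get?", docs))
```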

DCNN: What does this innovation mean for enterprises?

BB: Organisations are accumulating vast repositories of data, and extracting actionable insights is a persistent challenge. The technology of LLMs has improved tremendously, but it remains resource intensive and requires substantial amounts of computational resources and energy (electricity and cooling) to run.

Most enterprises lack the budget or resources to deploy AI systems at the scale required to create LLMs and are adopting the more compact SLMs instead. Regardless of the model, training workloads should not be confused with RAG; they are different, and training is resource intensive. Hyperscalers have bulked up tremendously in this area to provide those types of training services, but once trained and operating within an enterprise, RAG is hugely important to produce accurate results.

This means that every AI project within an enterprise can adopt RAG as a standard part of its IT strategy.

DCNN: What are the three most important takeaways for enterprises?

BB: Enterprises should leverage RAG using their existing storage infrastructure. In most cases, they do not need to invest in a lot of specialised resources.

RAG gives an enterprise the flexibility to leverage and build out their environment to be agentic and provides the ability to leverage the cloud as a fast and convenient deployment solution.

Infinidat’s solution can encompass any number of InfiniBox platforms and enables extensibility to third-party storage solutions via file-based protocols such as NFS. Our RAG / Agentic RAG solution makes it easy to leverage your most important data to produce the best and most accurate results from your GenAI.

DCNN: Looking to the future, what final comments do you have?

BB: We believe that there is still a lack of clarity in the AI industry about the important role that enterprise storage infrastructure plays in the wider adoption of AI. Our vision is to raise awareness of this important topic and to deliver solutions that are practical and offer technological and cost-saving benefits.

A RAG workflow can be easily created from existing open-source products and data already in an enterprise’s on-premises data centre. Infinidat’s developers have created a RAG workflow architecture outlining the process.

Infinidat RAG Workflow Architecture 1

Thanks to the approach taken by Infinidat, there is some good news for CIOs and for CTOs of data centre providers tasked with maximising efficiency. They will be happy to learn that it’s possible to utilise existing storage systems as the basis to optimise the output of AI models, without the need to purchase specialised equipment. Our solutions provide access to data in file and block.


Infinidat’s RAG workflow architecture runs on a Kubernetes cluster. Users who want to run RAG using data on-premises but without available GPU resources have a fast and convenient solution leveraging the cloud. Our approach uses a Kubernetes cluster as the foundation for running the RAG pipeline, enabling high availability, scalability, and resource efficiency. With AWS Terraform, we significantly simplify setting up a RAG system to just one command to run the entire automation. Meanwhile, the same core code running between InfiniBox on-premises and InfuzeOS™ Cloud Edition makes replication a breeze. Within 10 minutes, a fully functioning RAG system is ready to work with your data on InfuzeOS Cloud Edition.


KEEPING THE DIGITAL WORLD CONNECTED

Paul Mongan, Engineering Manager at Davenham Switchgear, explains how operators can optimise their data centres to safely and sustainably meet power demands.

Data centres are a crucial yet often overlooked element of the modern world. Last year, they were classed as ‘critical national infrastructure’ in the UK – placed alongside water and energy – reflecting their importance in the digital age.

With the potential to contribute £44 billion to the UK economy by 2035, the sector’s value is clear. Yet, how we’ll safeguard the power supply for current and future facilities remains unresolved.

MANAGING THE CURRENT AND FUTURE DEMAND

Thanks to the GenAI boom, infrastructure demand has never been greater. Servers are becoming more powerful to meet increasingly complex queries, but this means greater energy needs, with new racks expected to use almost 40% more power by 2027. The National Grid predicts AI power consumption could surge 500% in the next decade.

Since July 2024, nearly £40 billion of new UK data centre investment has been announced. However, opposition over power and water usage, plus limited grid availability, are stalling projects. West London’s grid, for example, is at capacity until 2035. Operators can’t control national infrastructure, but optimising power use improves efficiency, reliability, and costs.

SMARTER STRATEGIES

Cooling consumes up to 40% of a facility’s power. Identifying underused servers and using AI-powered optimisation can cut costs and energy needs significantly.

With 47% of data centres using ageing equipment, modern switchgear is also vital for efficiency, reliability, and adaptability.

As southern grids saturate, operators are turning to regions like the East Midlands, where grid access is freer, costs are lower, and tech ecosystems are growing.

With the right approach, the industry has a real opportunity not just to meet demand, but to do so in a way that’s sustainable, scalable, and secure.

Davenham Switchgear, davenham.com

DCIM SUPPORTED BY

BAUDOUIN POWERS THE FUTURE

From hyperscale growth to sustainability targets, Baudouin presents its high-capacity, HVO-ready backup for data centres.

The data centre industry is entering a new era of demand. Hyperscale and colocation facilities are expanding at unprecedented rates, driven by cloud adoption and the rise of AI workloads. With grid constraints and tightening sustainability regulations, the role of backup power has never been more critical. Operators need solutions that combine uncompromising reliability with lower environmental impact and rapid deployment capabilities.

Baudouin has built its reputation in power generation by delivering high-performance solutions for the most demanding standby applications. Today, the company designs and manufactures complete gensets tailored to the stringent technical and operational requirements of data centres, with a focus on scalability, efficiency, and sustainability.

At the core of this offering is the 20M55 genset, delivering up to 5250 kVA of dependable standby power for hyperscale and colocation projects. Meeting the ISO 8528-G3 load performance and pre-approved by the Uptime Institute, it is ready for Tier certification and integrates seamlessly into complex backup architectures.

In an outage scenario, the 20M55 delivers a fast, 10-second start, ensuring critical systems remain online. Its dual starting motors provide built-in redundancy for maximum reliability, while eight high-efficiency turbochargers deliver stable power output even under heavy block loads. Optimised fuel consumption keeps operating costs low without compromising performance.

HVO: DECARBONISING STANDBY POWER WITHOUT COMPROMISE

Decarbonisation is a top priority across the sector, and fuel choice is one of the fastest ways to make meaningful progress. Baudouin’s entire genset range is fully compatible with HVO100 (Hydrotreated Vegetable Oil), a renewable paraffinic fuel made from waste oils and fats. HVO can reduce lifecycle CO2 emissions by up to 90% compared to diesel without impacting engine performance, load acceptance, or start-up response times.

Because HVO is a drop-in replacement for EN 15940 compliant diesel, data centres can make the switch immediately using existing storage and distribution infrastructure. For backup systems that may only run a few hours each year but must start instantly in an emergency, the environmental gains are significant without any compromise on availability or compliance. Baudouin’s HVO-ready gensets are pre-certified to ensure seamless alignment with Tier standards and sustainability reporting.

SCALING UP TO HYPERSCALE

Baudouin’s portfolio now includes the 20M61 genset, delivering up to 6 MVA, making it one of the highest single set outputs available. Its compact footprint allows fewer units to be installed in large-scale projects, reducing space requirements, installation complexity, and total cost of ownership. Combined with one of the fastest lead times in the industry (under six months), it provides a decisive advantage for operators facing aggressive project timelines. Meet Baudouin at Data Centre Dynamics, taking place 16–17 September at the Business Design Centre in London.

Baudouin, baudouin.com

COMBINING IMDC WITH DCIM: UNIFIED VISIBILITY AND CONTROL

Carsten Ludwig, Market Manager Data Center at R&M, argues the case for an IMDC approach in balancing the diversifying demands of modern data centres.

As digital transformation redefines business processes, data centre operators are facing pressure to balance capacity, sustainability, and speed. An Integrated Modular Data Centre (IMDC) approach – underpinned by granular DCIM and digital twins – enables unparalleled visibility, rapid deployment, and agile scalability. However, this requires smart planning across multiple domains with IT and OT (Operational Technology) working in sync.

A fully integrated, modular approach to data centre design, including modelling, simulations, AI-driven intelligence, and digital twins, offers unprecedented flexibility and vast scope for ongoing optimisation. DCIM platforms enhance this with granular, real-time insight, mapping the status of every single port, cable, PDU, and connection in a unified framework. This combination enables detailed capacity planning, predictive maintenance, and rapid expansion.

Traditionally, data centre architectures were designed for steadily growing demand. However, this approach can’t keep up with the rapid shifts driven by AI, 5G, edge computing, and immersive applications such as AR/VR. To keep pace and adapt swiftly and sustainably, data centres require integrated, intelligent infrastructure.

IMDCs support this by providing a flexible, prefabricated framework designed to accommodate nodes ranging from high-density racks to entire micro-modules. In this way, edge, colocation, or hyperscale facilities can deploy at scale and be upgraded rapidly, thanks to the use of standardised components. Digital twin and simulation technology are critical to this.

Coupling the modular integrated approach with 3D modelling, accurate asset documentation, and scenario-based simulation helps operators evaluate power, cooling, connectivity, and spatial constraints before committing to hardware decisions. These capabilities make precise planning possible, drive out inefficiencies, and support fast iterations.

By simulating modular frameworks before they are deployed, the IMDC approach allows operators to validate designs against real-world metrics. Space constraints, airflow patterns, power distribution, and more are considered before concrete is poured or racks installed. This holistic digitalisation helps data centres understand and manage the different interactions; it enables future-proof architectures that evolve with rising demands, whether AI workloads or edge applications. This integrated approach can also mitigate supply chain delays. Preconfigured, delivery-ready modules can bypass material shortages and labour constraints while maintaining consistent standards and compatibility.

GRANULAR INSIGHT DOWN TO PORT LEVEL

Granular DCIM lies at the heart of successful integrated modular design, creating an inventory of every port, cable, PDU, and rack; monitoring this in real time also helps optimise operations and prepare for future expansions.

Inline and rack-side cooling can adjust dynamically to actual loads, avoiding inefficiencies and reducing wear. AI-enhanced digital twins run what-if analyses on capacity, failure scenarios, and thermal dynamics, proactively alerting operators before issues arise.

Once live, the integrated platform continues to deliver value, feeding live data (occupancy, bandwidths, energy usage) back into predictive models. This sets the stage for AI-based optimisation, where management systems learn over time and deliver actionable insights and advice. Implementation of status overviews driven across KPIs is the first step for adding AI-based analysis and advice management at a later stage.
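As a simple illustration of what port-level granularity can mean in practice, the sketch below models racks, PDUs, and ports and turns live readings into early warnings. It is hypothetical and not R&M’s DCIM: the classes, thresholds, and sample values are invented purely for illustration.

```python
# Hypothetical sketch of port-level DCIM inventory - not R&M's product.
# Classes, thresholds, and sample data are invented for illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Port:
    port_id: str
    connected: bool
    utilisation_pct: float  # live telemetry, e.g. link utilisation


@dataclass
class PDU:
    pdu_id: str
    load_kw: float   # live measured load
    rated_kw: float


@dataclass
class Rack:
    rack_id: str
    pdus: List[PDU] = field(default_factory=list)
    ports: List[Port] = field(default_factory=list)


def capacity_warnings(rack: Rack, pdu_limit: float = 0.8, port_limit: float = 0.9) -> List[str]:
    """Turn live telemetry into proactive alerts instead of reactive fixes."""
    warnings = []
    for pdu in rack.pdus:
        if pdu.load_kw / pdu.rated_kw > pdu_limit:
            warnings.append(f"{rack.rack_id}/{pdu.pdu_id}: load above {pdu_limit:.0%} of rating")
    for port in rack.ports:
        if port.connected and port.utilisation_pct / 100 > port_limit:
            warnings.append(f"{rack.rack_id}/{port.port_id}: utilisation above {port_limit:.0%}")
    return warnings


if __name__ == "__main__":
    rack = Rack(
        "R01",
        pdus=[PDU("PDU-A", load_kw=9.2, rated_kw=11.0)],
        ports=[Port("P-001", True, 95.0), Port("P-002", False, 0.0)],
    )
    for warning in capacity_warnings(rack):
        print(warning)
```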

SEAMLESS SCALABILITY WITHOUT SILOS

Modularity entails more than joining up a set of prefabricated boxes; it’s a philosophy of unified management. IMDCs break down silos by combining racks, connectivity, power, cooling, and DCIM into one, modularised system – a holistic infrastructure solution built to handle growing complexity. Each module is a self-contained node, ensuring scaling does not require the complex and costly migration of legacy hardware or rearchitecting control systems.

This visibility spans every layer of the infrastructure and allows operators to report on all relevant KPIs in real time.

In an integrated system, DCIM offers a unified dashboard of live and forecasted metrics across all domains, such as power, thermal, connectivity, spatial capacity, PDU load, and cable saturation. The control centre becomes intelligent and proactive instead of reactive.

SUSTAINABILITY AND EFFICIENCY GAINS

Real-time, granular control also empowers sustainability. Introducing high-density computing typically increases pressure on cooling and power systems. By matching cooling and power to actual system load and dynamically adjusting airflow, IMDCs avoid over-provisioning while reducing energy and carbon footprints.

Prefabricated, standardised modules also reduce onsite waste and resource usage. Moreover, dense rack layouts minimise real estate consumption, supporting more compute per square metre without a proportional increase in structural, HVAC, or power capacity.

RAPID DEPLOYMENT AND EDGE-READY AGILITY

One of IMDC’s core strengths is speed: modules are pre-engineered, tested, and shipped ready-to-run, slashing deployment timelines by up to 75% compared with custom builds. This agility perfectly suits edge rollouts near end users (5G, IoT, micro cloud), disaster recovery deployments, and hyperscale environments where speed and standardisation matter. Digital twins and DCIM ensure that each edge node, regardless of size, is fully integrated into the control plane.

CONCLUSION

IMDCs, combined with next gen DCIM and digital twins, provide operators with:

• Full-stack visibility (port-to-rooms)

• Rigorous, simulation-based planning

• Plug-and-play deployment and scaling

• Energy and cost-efficient sustainability

• AI-ready foundations for continuous optimisation

There’s even greater potential ahead. The foundational elements (comprehensive telemetry, 3D models, integration of OT/IT metrics) set the stage for several game-changing advantages. AI-driven optimisation allows automatic adjustment of infrastructure in real time, predictive maintenance eliminates unplanned outages, and integrating historical data with live trends supports continuous capacity and cost forecasting.

By adopting this approach, operators can design, deploy, and scale data centres with precision, confidence, and agility – fully aligned with tomorrow’s demands from AI, edge computing, sustainability goals, and enterprise digitalisation.

R&M, rdm.com

5250 kVA POWER BUILT TO

RELIABLE BACKUP POWER SOLUTIONS FOR DATA CENTRES

The 20M55 Generator Set boasts an industry-leading output of 5250 kVA, making it one of the highest-rated generator sets available worldwide. Engineered for optimal performance in demanding data centre environments.

PRE-APPROVED UPTIME INSTITUTE RATINGS

ISO 8528-5 LOAD ACCEPTANCE PERFORMANCE | BEST LEAD TIME

DATA CENTRES AND DROUGHT: LIQUID COOLING IS A RECIPE FOR DISASTER

Jad Jebara, CEO and co-founder of Hyperview, explores how intelligent infrastructure management can cut water use, improve efficiency, and boost resilience.

The UK is facing a growing water crisis. Despite its reputation for dreary weather, parts of the country are experiencing drought. The Environment Agency has warned that England could face a daily shortfall of five billion litres of water by 2055, which is more than a third of what the country currently uses. Amidst this growing concern, the role of the UK’s data centres in worsening water stress is coming under mounting scrutiny.

According to research conducted by The Times, Britain’s data centres consume nearly 10 billion litres of water annually – equivalent to the yearly usage of around 190,000 people. This figure only represents a snapshot, since most water companies don’t track how many data centres they supply, let alone how much water is being used by each. The report showed that industry watchdogs are warning that water consumption could rise sharply in the coming years, with new data centre builds potentially requiring as much water as 500,000 people combined.

As generative AI models drive huge demand for computing power, and with more facilities planned for already water-stressed regions such as Southeast England, the current model of high-consumption, reactive cooling is becoming unsustainable. The issue is not just the volume of water used; it’s the lack of oversight, data, and accountability.

COOLING IS THE CULPRIT

A significant amount of data centre water consumption is linked to cooling infrastructure. To prevent servers from overheating, many facilities use water or liquid-based cooling systems which can require large, daily water volumes to maintain safe operational temperatures.

In hot spells or peak load conditions, this cooling often becomes reactive: large volumes of water are pumped through systems in response to spikes in the temperature, with minimal optimisation or forecasting. The result is an inefficient model that pours water onto the problem rather than addressing the underlying environmental imbalances or infrastructure gaps.

This lack of visibility and predictability no longer cuts it – neither environmentally nor operationally. It’s increasingly misaligned with national climate targets and sustainability expectations.

THE CASE FOR INTELLIGENT MANAGEMENT

Addressing this challenge doesn’t mean abandoning liquid cooling altogether, but it does entail a rethink of how infrastructure is monitored, managed, and optimised.

One of the most powerful levers available today is Data Centre Infrastructure Management (DCIM). By running systems that offer real-time insight into environmental conditions, power usage, airflow, and equipment performance, operators can shift from reacting when it’s already urgent to proactive cooling strategies, drastically reducing the need for emergency water-based interventions.

For example, DCIM platforms can detect hotspots or thermal imbalances before they spiral into critical events. Rather than flooding the system with chilled water, operators can redistribute workloads, rebalance power, or adjust airflows to stabilise temperatures – all without consuming additional water.

This kind of pre-emptive decision-making is not only more sustainable, but also far more efficient. The right energy management tools can lower operational expenses by as much as 30%, as well as reducing the likelihood of overcooling (where facilities run cooling systems harder than necessary “just in case”).
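As an illustration of this kind of pre-emptive logic, the short sketch below prefers airflow and workload adjustments and only escalates to extra chilled water at a critical threshold. It is a hypothetical example, not Hyperview’s software; the sensor readings and thresholds are invented for the purpose of the sketch.

```python
# Hypothetical sketch of proactive cooling decisions - not Hyperview's
# software. Rack names, readings, and thresholds are invented examples.
from dataclasses import dataclass
from typing import List


@dataclass
class RackReading:
    rack_id: str
    inlet_temp_c: float  # live inlet temperature from a rack sensor


def plan_actions(readings: List[RackReading],
                 warn_at: float = 27.0,
                 critical_at: float = 32.0) -> List[str]:
    """Prefer airflow/workload adjustments; escalate to chilled water last."""
    actions = []
    for reading in readings:
        if reading.inlet_temp_c >= critical_at:
            actions.append(f"{reading.rack_id}: increase chilled-water cooling (last resort)")
        elif reading.inlet_temp_c >= warn_at:
            actions.append(f"{reading.rack_id}: rebalance workload and adjust airflow first")
    return actions


if __name__ == "__main__":
    readings = [RackReading("R12", 28.4), RackReading("R13", 24.1), RackReading("R14", 33.0)]
    for action in plan_actions(readings):
        print(action)
```

The design choice the sketch captures is the ordering of responses: water-intensive cooling is the escalation path, not the default reaction to a warm reading.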

REDUCING WASTE, AVOIDING DOWNTIME

Beyond reducing water consumption, DCIM tools also allow data centres to recover stranded power and cooling capacity. Much like an underutilised piece of office space, parts of a data centre may be running below their potential simply because operators lack visibility. By uncovering these inefficiencies, facilities can offset or even completely avoid costly expansions while maintaining reliability.

Cooling-related issues are among the most common causes of unplanned data centre downtime, and a significant portion of these incidents – as many as 70% – are the result of human error. Intelligent automation also reduces this risk by removing manual processes, standardising responses, and alerting teams to potential issues before they require intervention.

In addition, compliance with industry thermal guidelines, such as those set by CEN and ISO standards in Europe, can be enforced more effectively through automation. Rack-level temperature thresholds can be continuously monitored, with alerts triggered the moment conditions begin to drift beyond acceptable operating ranges.

INDUSTRY RESPONSIBILITY AND THE PATH FORWARD

The environmental cost of data centres is no longer limited to electricity and carbon emissions. Water must now be considered a critical resource – one that is finite, under strain, and tightly linked to regional stability.

Yet, as The Times’ investigation shows, the UK has virtually no system in place for monitoring or regulating how much water is being consumed by its 450–500 data centres. With demand set to grow, and water supplies already stretched thin, the lack of data and accountability poses a great risk not just to the environment, but to public confidence in the tech sector’s ability to manage its impact responsibly.

A new approach is needed: one grounded in transparency, intelligent resource management, and a clear understanding that sustainability is now central to operational resilience. There must be better systems in place to track how much water is being used by data centres across the UK – not just to inform regulation, but to help operators take responsibility for their consumption.

However, the goal shouldn’t just be to monitor high usage; it should be to eliminate the need for it. With the right infrastructure in place, the UK’s limited water supply should no longer be used to support inefficient, reactive cooling.

By investing in real-time monitoring, predictive automation, and smarter decision-making, data centre operators can pre-empt thermal issues, optimise energy use, and avoid emergency cooling interventions altogether. Water should no longer be treated as an endless input, but as a resource that is only put into use when absolutely necessary.

While data centres are vital to our connected lives, their importance doesn’t excuse inefficiency. When water is increasingly scarce, intelligent management must be the standard, not the exception.

Hyperview, hyperviewhq.com

Powering data centers sustainably in an AI world

Artificial Intelligence (AI) - Friend or Foe of Sustainable Data Centers?

Data centers are getting bigger, denser, and more power-hungry than ever. The rapid emergence and growth of AI only accelerates this process. However, AI could also be an enormously powerful tool to improve the energy efficiency of data centers, enabling them to operate far more sustainably than they do today. This creates a kind of AI energy infrastructure paradox, posing the question: Is AI a friend or foe of data centers’ sustainability?

In this Technical Brief, Hitachi Energy explores:

• The factors that are driving the rapid growth in data center energy demand,

• Steps taken to mitigate fast-growing power consumption trends, and

• The role that AI could play in the future evolution of both data center management and the clean energy transition.

DON’T LET POOR COMMISSIONING UNDERMINE YOUR DCIM STRATEGY

With AI data centres growing more complex, Craig Eadie, Managing Director at Straightline, explains why sophisticated DCIM systems can only succeed if commissioning is watertight.

A new generation of data centre designs aiming to support the AI boom requires more sophisticated orchestration and monitoring tools than ever before. However, a poorly executed commissioning process can set your Data Centre Infrastructure Management (DCIM) up for failure before you even spin up your first AI workload.

NEW OPPORTUNITIES AND NEW INFRASTRUCTURE CHALLENGES

From customer service to coding, the AI boom is sweeping through virtually every industry in every market, promising revolutionary new capabilities and efficiency gains. GenAI attracted more than $56 billion (£41.9 billion) in funding last year, almost doubling figures from 2023, when GenAI companies attracted approximately $29 billion (£21.7 billion). In particular, it was the companies in the GenAI infrastructure layer that saw some of the most robust growth in 2024. Huge funding rounds for companies like Databricks and GPU cloud provider CoreWeave drove a multi-billion dollar wave of capital investment, making AI infrastructure providers the clear winners in the GenAI space so far.

Delivering the infrastructure necessary to capitalise on this opportunity, however, is easier said than done. While other sectors focus on solving the challenges of generating value through new efficiencies, building large language models (LLMs) to solve business pain points, and creating the necessary cultural buy-in to hit the ground running in an AI-first environment, the data centre industry is focused on other, more technical hurdles.

AI infrastructure demand is growing at an unprecedented rate. Data from Goldman Sachs paints a picture of soaring power demand outpacing existing grid capacity. Demand from data centres is likely to increase 50% by 2027 and by as much as 165% by 2030. This rising hunger for capacity is tied to the fact that AI workloads work very differently compared to traditional colocation and cloud activities.

AI data centres rely on GPUs and TPUs which consume more power and process tasks in different ways to traditional CPU-focused workloads that support more conventional computing. GPUs can run thousands of tasks simultaneously, which makes them ideal for high-intensity workloads like training a new LLM. The result, however, is that server racks in AI data centres need around 50 times the power of traditional digital infrastructure. More computations mean more electricity and therefore more heat.

To support this new generation of increasingly demanding data centre infrastructure, data centre builders are developing specialised electrical systems and embracing hyper-efficient liquid cooling technologies that increase the cost and complexity of their facilities. Complexity, heat, intense workload and the designs behind new generations of chips also mean the hardware in AI data centres runs under more stressful conditions, resulting in a shorter lifespan.

The hyperscale cloud industry is built around three-year upgrade cycles – something that’s factored into construction timelines of multiphase campus projects. AI hardware, by comparison, starts fraying around the edges much earlier. Data gathered from Meta’s new Llama 3 405B model revealed that the estimated annual failure rate of the latest generation of GPUs sits at around 9% and may reach about 27% after three years of use, with the failure rate increasing as time goes on. Brighter flames burn half as long, and a large-scale AI facility burns very bright indeed. What this means is that, even from day one, AI server racks are worryingly prone to throttling down to prevent dangerous overheating or just going offline altogether.

MEETING THE CHALLENGES OF AI WITH SOPHISTICATED DCIM SOLUTIONS

The rise of AI is doing more than ramping up data centre workloads; it’s changing how those data centres are built. AI data centres demand more sophisticated infrastructure, real-time processing capabilities, and advanced storage solutions to keep pace with the changing demands of the technology.

Modern DCIM solutions are also evolving to meet this challenge, leveraging AI to deliver more precise, impactful insights in real time. Deployed correctly, DCIM software can monitor and orchestrate the complex, high-stakes operations of an AI data centre — ensuring power and cooling, workload distribution, and other auxiliary systems work to maximise performance and minimise the hardware pain points that all too often afflict this new generation of infrastructure.

The challenge, however, is that DCIM solutions can’t oversee the running of dozens of interconnected data centre systems if those systems aren’t running as intended. This is where the risk of poor commissioning rears its head.

Modern data centres are built to exacting standards that cover everything from power and cooling to fire suppression systems, security, and fibre optic connections. A fault anywhere in the system can throw the whole facility into jeopardy.

Commissioning isn’t a simple box-ticking exercise. A rigorous, independent commissioning process can make the difference between a working facility and a project plagued by maintenance and performance issues for years down the line. Given the skyrocketing complexity – and, subsequently, the price – of AI data centres, the margins for error are razor thin.

Poor documentation can leave gaps in asset registers and inspection records. Misalignment between design and construction delivery can throw off maintenance and build-out schedules. Mechanical, electrical, and fire protection systems can clash and interfere with one another if not coordinated from the beginning, undermining a DCIM system’s ability to oversee the facility effectively.

This is why independent commissioning is so important. It’s a chance for data centre builders to interrogate the increasingly complex systems that not only keep AI facilities running, but ensure their eye-wateringly expensive hardware doesn’t suffer catastrophic failures down the road. Poor commissioning means errors get missed that can potentially lead to massive, cascading failures. AI data centres are more complex and interdependent than any generation of digital infrastructure before them, so meticulous inspection, testing, and evaluation is key.

In a world where facilities need to be designed to new specifications, built to tighter deadlines, and filled with increasingly complex, costly equipment, the stakes have never been higher. In the AI age, commissioning has never been a more important piece of the puzzle.

Environmental monitoring experts and the AKCP partner for the UK & Eire.

How hot is your Server Room?

Contact us for a FREE site survey or online demo to learn more about our industry leading environmental monitoring solutions with Ethernet and WiFi connectivity, over 20 sensor options for temperature, humidity, water leakage, airflow, AC and DC power, a 5 year warranty and automated email and SMS text alerts.

projects@serverroomenvironments.co.uk

Keep IT cool in the era of AI

EcoStruxure IT Design CFD by Schneider Electric helps you design efficient, optimally-cooled data centers

Optimising cooling and energy consumption requires an understanding of airflow patterns in the data center whitespace, which can only be predicted by and visualised with the science of computational fluid dynamics (CFD).

Now, for the first time, the technology Schneider Electric uses to design robust and efficient data centers is available to everyone.

• Physics-based predictive analyses

• Follows ASHRAE guidelines for data center modeling

• Designed for any skill level, from salespeople to consulting engineers

• Browser-based technology, with no special hardware requirements

• High-accuracy analyses delivered in seconds or minutes, not hours

• Supported by 20+ years of research and dozens of patents and technical publications

Equipment Models – Easily choose from a range of data center equipment models, from racks to coolers to floor tiles.

CFD Analysis – The fastest full-physics solver in the industry, delivering results in seconds or minutes, not hours.

Cooling Check – At-a-glance performance of all IT racks and coolers.

Visualisation Planes and Streamlines – Visualise airflow patterns, temperatures and more.

Reference Designs – Quickly start your design from pre-built templates.

Cooling Analysis Report – Generate a comprehensive report of your data center with one click.

IT Airflow Effectiveness and Cooler Airflow Efficiency – Industry-leading metrics guide you to optimise airflow.

Room and Equipment Attributes – Intuitive settings for key room and equipment properties.

MAYFLEX PARTNERS WITH SCHLEIFENBAUER TO STRENGTHEN ELEVATE BRAND

Mayflex, a leading distributor of converged IP solutions, has signed a new distribution agreement with Netherlands-based Schleifenbauer, a manufacturer of intelligent power distribution units (PDUs) and energy management solutions. The partnership enhances Mayflex’s Elevate brand, offering advanced, scalable technologies for the high-performance computing and data centre markets.

Schleifenbauer’s European-made PDUs deliver consistent quality, rapid lead times, and compliance with EU standards. Customers benefit from flexible production without minimum order quantities, free energy management software, and hot-swappable control modules for maximum uptime.

Simon Jacobs, Product Manager at Mayflex, says, “Schleifenbauer’s intelligent power solutions are a perfect fit for our growing data centre portfolio.”

Stuart Edmonds, UK & Ireland Sales Manager at Schleifenbauer, adds, “Mayflex’s reputation for technical excellence and customer service aligns perfectly with our values, enabling us to meet the evolving needs of the HPC and data centre markets.”

Mayflex, mayflex.com

JOULE, CATERPILLAR, AND WHEELER PARTNER TO DELIVER GIGAWATT-SCALE POWER

Joule Capital Partners, Caterpillar, and Wheeler Machinery have announced an agreement to power Joule’s High Performance Compute Data Centre Campus in Utah. Designed to be the largest single campus in the state, the project will provide four gigawatts of capacity to the Intermountain West.

The initiative combines Caterpillar’s latest G3520K generator sets with integrated cooling, heat recovery, and liquid cooling systems. It also includes 1.1 gigawatt hours of grid-forming battery storage, aiming to ensure resilient, on-demand energy for next-generation high-density servers.

David Gray, President of Joule Capital Partners, says, “By pairing Caterpillar’s advanced energy systems with Wheeler’s expertise, we can bring gigawatt-scale capacity to market faster and more efficiently than ever before.”

Melissa Busen, Senior Vice President of Electric Power at Caterpillar, adds, “This project shows how Caterpillar can deliver reliable, fast power for AI-ready infrastructure.”

Caterpillar, caterpillar.com

NETWORKING AND CABLING

FUTURE FASTER WITH ELEVATE

Excel’s Elevate platform brings together fibre connectivity, racks, power, cooling, and DCIM to help operators scale capacity, cut waste, and meet sustainability goals.

Data centres are under pressure to deliver more capacity, efficiency, and sustainability. Elevate – Future Faster, an Excel solution, was created to help operators meet these challenges with confidence.

Designed for the white and grey space of mission-critical facilities, Elevate delivers a comprehensive portfolio of passive infrastructure for ultra-dense, scalable deployments. At its core, Elevate provides high-density fibre connectivity (MPO, VSFF), racks and containment, intelligent power and DCIM, advanced cooling, and fibre ducting for scalable growth.

“Elevate is about delivering speed, precision, and innovation, but also trust. Customers know they’re investing in solutions that are future-ready and fully supported,” says Andrew Percival, Managing Director.

POWER AND INTELLIGENCE: FROM INSIGHT TO ACTION

Visibility and control are the lifeblood of modern operations. Without them, managing density and sustainability targets becomes impossible. Elevate addresses this by integrating intelligent iPDUs from partners Schleifenbauer and nVent with Sunbird’s DCIM platform, creating a digital twin of the data centre.

This combination delivers rack-level insight into power and environmental conditions, enabling operators to optimise capacity, reduce waste, and meet ESG reporting requirements with confidence.
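
To make “rack-level insight” concrete, here is a minimal, hypothetical sketch of how outlet readings from intelligent PDUs might be rolled up into per-rack load and headroom figures of the kind a DCIM platform reports on. The data structures, rack names, and power budgets are illustrative assumptions only, not Sunbird’s, Schleifenbauer’s, or nVent’s actual software or APIs.

```python
# Hypothetical sketch: rolling up iPDU outlet readings into rack-level
# capacity figures, as a DCIM platform might. Names and numbers are
# illustrative only, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class OutletReading:
    rack: str
    outlet: int
    watts: float          # instantaneous draw reported by the iPDU
    inlet_temp_c: float   # environmental sensor reading

RACK_BUDGET_W = {"R01": 12_000, "R02": 12_000}  # provisioned power per rack

def rack_summary(readings):
    """Aggregate outlet readings into per-rack load, headroom and max inlet temp."""
    summary = {}
    for r in readings:
        s = summary.setdefault(r.rack, {"watts": 0.0, "max_temp_c": 0.0})
        s["watts"] += r.watts
        s["max_temp_c"] = max(s["max_temp_c"], r.inlet_temp_c)
    for rack, s in summary.items():
        budget = RACK_BUDGET_W.get(rack, 0)
        s["headroom_w"] = budget - s["watts"]
        s["utilisation_pct"] = round(100 * s["watts"] / budget, 1) if budget else None
    return summary

if __name__ == "__main__":
    sample = [
        OutletReading("R01", 1, 4200.0, 24.5),
        OutletReading("R01", 2, 3900.0, 25.1),
        OutletReading("R02", 1, 6100.0, 27.0),
    ]
    for rack, stats in rack_summary(sample).items():
        print(rack, stats)
```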

Through Elevate’s RePower Trade-In for Tomorrow programme, customers can receive up to £35 per legacy PDU when upgrading to new, energy-efficient models. Retired units are removed and responsibly recycled, cutting e-waste and lowering carbon emissions. By pairing modern power hardware with advanced DCIM analytics, operators gain richer data, smarter reporting, and a clear path towards sustainable growth.

“DCIM is no longer a ‘nice to have’ – it’s mission critical. By linking intelligent power hardware with analytics and automation, Elevate helps customers manage density, reduce waste, and meet regulatory requirements head-on,” argues Simon Jacobs, Product Manager.

RACKS AND CONTAINMENT: BUILDING BLOCKS OF SCALABILITY

The physical infrastructure of the data centre sets the stage for performance. Elevate’s Data Centre Rack (DCR) Series is built for ultra-high-density and HPC environments, supporting static loads up to 2000kg.

With vented doors (up to 80% perforation), airflow baffles that move with the rails, and integrated cable management, the DCR directs cold air where it’s needed while protecting airflow integrity. When combined with hot and cold aisle containment, thermal performance improves across the white and grey space, cutting bypass airflow and lowering cooling costs. Deployment flexibility is enhanced by quick-mount PDU trays, extended roof options, and 64A Commando plug clearance.

“The response from the market has been fantastic. Customers tell us that Elevate feels different – it’s not just another vendor; it’s a true partner, helping them deliver more, faster,” notes Ross McLetchie, UK Sales Director.

COLLABORATION AT THE CORE

Elevate’s strength lies in its ecosystem of trusted partners, ensuring customers benefit from joined-up innovation. These include:

• Sunbird – actionable DCIM intelligence

• nVent – advanced cooling and power distribution

• Schleifenbauer – intelligent power distribution

• Senko – precision fibre connectivity

• Axis, Avigilon, and Suprema – security from perimeter to rack

ENGAGING WITH THE MARKET

Elevate connects with customers not just through technology, but through industry engagement. This October it will host its flagship end-user event:

‘Pole Position: Data Centres in the Fast Lane’

Location: F1 Arcade, St Paul’s, London

Date: Thursday, 9 October 2025, 12.00–6.00pm

The event combines partner-led presentations on cooling, DCIM, and connectivity with F1 simulator racing, networking, and prizes.

Register your interest here.

Elevate will also showcase its portfolio at DCD Connect (London), DCW Madrid, and DCW Paris.

With integrated DCIM and power, advanced racks and containment, and a partner ecosystem spanning cooling, connectivity, and security, Elevate provides the platform for performance that tomorrow’s data centres demand.

Discover Elevate – Future Faster.

Elevate, elevate.excel-networking.com

ACCELERATING FIBRE INFRASTRUCTURE WITH RIBBON-BASED CABLING SYSTEMS

Dmitry Tsyplakov, Data Centre Solution Manager at HUBER+SUHNER, explains how ribbon-based cabling systems are reshaping the physical layer for speed, density, and long-term efficiency.

In an era where data centre infrastructures are scaling to accommodate soaring bandwidth demands and increasingly dense compute environments, the foundation of any successful operation lies in the efficiency, scalability, and manageability of its physical layer. However, many existing fibre infrastructure approaches are beginning to show their age as they struggle to keep pace with the speed demanded by today’s data centre environments. Fibre counts continue to climb in support of a range of next-generation technologies, and infrastructure built around these legacy approaches creates significant deployment and operational challenges for technicians.

Thankfully, a modern solution has emerged to cope with this increased demand: ribbon-based fibre optic cabling systems. Drastically improving installation efficiency and minimising operational complexities, these innovative cabling systems offer a future-proof path for evolving network topologies by delivering the speed and operational simplicity that modern data centres require.

MEETING THE DEMAND FOR SPEED AND SIMPLICITY

The benefits of installing ribbon cables are significant in comparison to traditional methods. Earlier cabling was either loosely or individually bundled in bulky distribution systems, bringing inefficiencies in space and airflow management and extending deployment times. These constraints become increasingly problematic in hyperscale environments and high-performance computing (HPC) facilities where agility and uptime are paramount.

Ribbon-based solutions flip this model. With features such as mass-fusion splicing, dry cable designs (eliminating the need for messy gel), and cassette-based distribution, they offer a cleaner, faster, and more scalable alternative. By simplifying splicing and routing through pre-sorted fibre bundles and integrated modules, ribbon systems minimise installation errors and optimise both capital and operational expenditure.

Ribbon-based solutions can offer a comprehensive architecture that streamlines fibre management from the building entry point to the white space of any data centre. With fewer touchpoints and integrated design principles, operators can establish a cohesive infrastructure that reduces complexity and enhances long-term operational flexibility.

However, the sheer amount of data traffic processed through these data centres requires infrastructure that can be deployed almost instantaneously with minimal disruption to ongoing operations. The pressure to expand capacity without sacrificing uptime weighs heavily on the shoulders of decision-makers. Fortunately, directly reducing installation time is another box that’s ticked by integrating ribbon-based systems – but how is this actually achieved?

INSIDE RIBBON ARCHITECTURES

The foundation of this reduced installation time is formed through four crucial elements:

The cables themselves are specifically engineered for mass-fusion splicing – a technique that joins multiple optical fibres simultaneously, using a specialised fusion splicer which enables splicing speeds up to 60% faster than traditional single-fibre methods. The use of pre-sorted fibre bundles and optimised stripping characteristics streamlines processes and reduces labour workloads, allowing for quicker deployments and easier maintenance.

High-capacity Optical Distribution Frames (ODFs) support the cables by managing tens of thousands of fibres within a compact footprint. A front-access layout and modular cassette designs simplify patch cord management while improving flexibility for future network expansions or reconfigurations. These enhancements specifically reduce operational disruption during changes or upgrades.

For top-of-rack, backbone, and cross-connect scenarios, modular high-density connectivity systems enable flexible scaling and efficient space utilisation. These platforms can support several hundred fibres per rack unit and are designed to adapt to growing bandwidth needs without compromising performance.

Integration of mass-fusion splicing modules into these systems helps to minimise installation times while maintaining high performance and reliability.

Completing the ecosystem are ribbon splice boxes. Offering structured splicing environments within standard rack configurations, they are capable of housing over a thousand fibres in just a few rack units, enabling a smooth transition from outdoor to indoor cabling. Features such as drawer-based layouts and centralised splice management make them ideal for complex environments requiring high fibre counts and organised cable handling.

Each of these elements works in tandem with the others to reach the end goal: not only an enhanced, future-proof data centre, but a cost-effective one in the process.

LOWERING TOTAL COST OF OWNERSHIP

By reducing deployment time and simplifying cable management, ribbon systems dramatically optimise total cost of ownership (TCO). The integration of mass-fusion splicing and dry cable technology reduces the number of technicians required on site, speeds up installations, and minimises errors. With higher fibre density and reduced rack space consumption, airflow is improved – an essential factor in managing thermal efficiency within data centre environments.

With better airflow, energy efficiency is also boosted. Cooling systems can operate with less strain, directly lowering the energy required to maintain optimal operating temperatures. This translates to substantial cost savings on power consumption, which is one of the largest operational expenses in data centres.

Moreover, the modular design also enhances serviceability and futureproofing. Operators can scale the infrastructure incrementally, expanding it as needed without replacing entire systems or undergoing disruptive overhauls. With bandwidth and energy-usage demands continuing to surge as a result of machine learning and IoT, this is another welcome addition to the offering of ribbon cable systems.

SHAPING THE FUTURE OF FIBRE CONNECTIVITY

As data centres prepare for the next wave of connectivity, from 400G/800G deployments to the ever-expanding role of AI, the importance of an adaptable, high-performance cabling system cannot be overlooked. Combining speed, density, and operational simplicity into a modular, cost-effective framework, ribbon-based architectures offer a compelling foundation for the future.

By rethinking the physical layer from the ground up, solutions like the HUBER+SUHNER Ribbon End-to-End system are not only helping operators keep pace with industry demands; they are also setting new standards for performance, scalability, and efficiency.

HUBER+SUHNER, hubersuhner.com

SOLVING THE AVIATION INDUSTRY’S NETWORKING CHALLENGES

Silvia Boiardi of SITA and Phil Oultram of Alkira outline why the aviation industry’s ‘digital nervous system’ demands a new networking model built for a global scale.

For most businesses, cloud computing is an easy choice, offering clear advantages like scalability, flexibility, and cost-effectiveness. However, for the aviation industry, it presents a set of unique and significant challenges. The core problem is the sheer global scale of the aviation industry.

With planes, passengers, and cargo constantly on the move, everything depends on a “digital nervous system” that is not just reliable, but also continuously secure. A single hiccup in this network can cause a domino effect of operational failures, financial penalties, and negative experiences for passengers. This makes having a resilient and adaptable network crucial.

The network is the aviation industry’s central nervous system. It’s a vital, real-time tool that allows flights to dynamically change routes and airport staff to instantly reassign resources when unexpected delays occur.

THE FRAGMENTATION PROBLEM

Airlines and airports often use a mix of on-premises infrastructure and multiple regional public cloud services to manage their global business functions. What was intended to simplify operations has instead created a logistical nightmare.

Airlines operate across many countries and use various cloud providers. Each provider has its own logic, processes, and schedules – all of which require specific skills and knowledge to manage. This complexity can quickly turn a multi-vendor cloud strategy into a disorganised “spaghetti junction,” with each cloud vendor acting like a self-contained ‘island,’ lacking seamless integration.

This fragmented approach creates significant complexity and makes managing the network increasingly difficult, especially with the growing data migration fuelled by the adoption of artificial intelligence (AI). Instead of creating a streamlined system, the aviation industry faces the continuous challenge of managing a patchwork network.

The core issue isn’t a lack of technology, but rather its fragmented implementation. The traditional method of piecing together on-premises and regional cloud environments creates a brittle, cumbersome patchwork that lacks visibility and is challenging to manage, making it nearly impossible to maintain consistent global connectivity, optimise infrastructure, or respond with the agility businesses need today.

The rigid, hardware-centric architecture of the past is also too slow. The time and resources required to deploy physical routers and firewalls at every new location significantly hold back innovation and increase costs, creating a bottleneck for business growth.

MODERN NETWORKING DEMANDS

As more business operations move to the cloud, the risk of cyberattacks also rises dramatically. A modern solution must confront this by offering precise, fine-grained control and adopting a “zero trust” approach. This will become more crucial as new technologies like AI, business intelligence, and private 5G networks are integrated. The solution must incorporate these innovations seamlessly without compromising security.

A fit-for-purpose solution must address these challenges by prioritising agility and simplicity to provide users with several key benefits:

• Simplified management and security – A modern, multi-cloud network gives users a single view of their entire system, providing actionable insights for security and operations across all cloud platforms, including airport and off-airport environments. This unified approach makes it easier to manage a complicated network and respond to issues quickly.

• Flexibility and innovation – By avoiding reliance on a single vendor, users gain the freedom to switch providers without major disruption. This allows them to integrate new cloud services into their existing ecosystem, accelerating innovation and the adoption of new technologies. They can also select the best-of-breed service providers for specific tasks, such as data analytics or machine learning, ensuring they always use the best tool for the job.

• Resilience and cost control – A modern solution also provides built-in system resilience to protect against outages from any single cloud provider, ensuring operations remain online. It also offers flexible subscription models, with options to scale network and service capacity elastically to help control costs more effectively.

Beyond infrastructure, a modern solution must provide segmentation and micro-segmentation capabilities, giving users precise control over network access to minimise attack surfaces. By dividing the network into isolated segments, potential threats can be contained, preventing them from spreading laterally and dramatically improving security.

Another key component is an extranet-as-a-service that simplifies and secures connections with external business partners. Instead of complex manual configurations for each partnership, a modern solution streamlines connectivity, allowing businesses (like airlines) to easily and securely collaborate with vendors and partners.

This modern model liberates the industry from the physical constraints of legacy systems. It enables rapid scaling and deployment, abstracting the complexity of the underlying network to provide a simple, flexible, and powerful service. A centralised portal can offer complete network visibility and control, simplifying the management of multiple cloud and on-premises systems.

Ultimately, this approach gives airports and airlines the flexibility and agility to navigate an evolving landscape, building a foundation that keeps their operations running 24/7 and propelling them into a new era of innovation and secure operational excellence.

SITA, sita.aero | Alkira, alkira.com

Premium solutions. Powerful partnerships.

As density and performance requirements escalate, your infrastructure must elevate.

Proven deployment, no compromise, every time.

White space redefined: Elevate fibre, intelligent racks, smart power, DCIM, containment and liquid cooling.

Explore the portfolio: elevate.excel-networking.com

iPDU | High Density Fibre Connectivity | Racks and Containment | Fibre Duct

COLOCATION, CONNECTIVITY, AND CONTINUITY: FUTURE-PROOFING NETWORK INFRASTRUCTURE

Warren Aw, Chief Commercial Officer at Epsilon Telecommunications, highlights why agile, high-capacity connectivity is the critical ingredient for resilience in an era of relentless digital demand.

In today’s digital landscape, business IT environments are becoming increasingly sophisticated and, with that, more complex. Whether it’s an enterprise working to stay ahead of increasingly digitally savvy consumers, or a service provider keeping those enterprise services and workloads up and running, network downtime is no longer an option.

Downtime is more than just an inconvenience; it’s a major threat to revenue and reputation. For 90% of mid-to-large-sized enterprises, just one hour offline can cost more than $300,000 (£221,000), according to ITIC. Despite this, many businesses are still relying on infrastructure that wasn’t built for the scale, speed, or strain of today’s digital demands.

Whether the services are mission-critical or not, a bad online experience can make or break customer relationships in an instant. Customers now expect always-on availability for a wide range of services, such as streaming video content, collaborating in the workplace, performing financial transactions, or accessing cloud services. Business continuity was once a contingency plan, but it has now become a competitive advantage.

That being said, ensuring continuity is also becoming more difficult due to growing data volumes, AI workloads, rising user expectations, and a more distributed business application ecosystem. This, coupled with real-world constraints like power limitations, infrastructure strain, and inconsistent SLAs, is making it more important than ever for businesses to re-evaluate their network and business continuity strategies to stay resilient, particularly if legacy infrastructure is still in play.

Colocation, when combined with agile, high-capacity connectivity, can provide a simpler, smarter way for businesses to keep service access and delivery both online and ahead in a competitive market. Colocation really is more than just racks and servers; it’s an opportunity to future-proof network infrastructure with adaptability, scalability, and reliability at the core.

LEGACY INFRASTRUCTURE LIMITATIONS

As businesses deploy more data-intensive applications, compact edge computing devices, and AI workloads, rising demands are putting increased strain on legacy infrastructure and on-premises environments. This includes:

• Power constraints – Modern applications require newer, high-density equipment, which significantly increases power requirements.

• Downtime risks – Legacy infrastructure and single points of failure raise the likelihood of outages, damaging SLAs, revenue, and brand reputation.

• Business continuity gaps – Without resilient infrastructure and built-in redundancy, organisations face growing challenges in maintaining always-on availability.

• Scalability challenges – On-premises infrastructure can be slow and expensive to scale in response to customer demands or new market opportunities.

• High costs – Cooling, power, staffing, and maintenance are stretching budgets and internal team resources.

• Inter-provider complexity – Managing connectivity across multiple clouds, partners, and carriers is complex, time-consuming, and prone to performance issues without the right interconnect fabric.

These limitations are pushing IT leaders to look for modern, flexible infrastructure strategies that can grow with their business.

COLOCATION FOR BUSINESS CONTINUITY

Colocation is more than just renting space in a data centre; it’s a strategic way to strengthen business continuity while simplifying IT infrastructure. Instead of maintaining costly on-premises facilities, organisations can host critical infrastructure in purpose-built, third-party data centres.

This shift not only reduces capital expenditure, but also enables teams to focus on innovation rather than infrastructure. Colocation provides robust power, security, and carrier-neutral connectivity to a global network ecosystem designed to prioritise uptime, resilience, and reach.

One of the key advantages of colocation is dual-site access, which allows businesses to distribute their infrastructure across two geographically separate, interconnected facilities. This setup is vital for disaster recovery and redundancy planning. If one site experiences a disruption – whether due to a power failure, natural disaster, or hardware issue – traffic and workloads can seamlessly fail over to the second site, minimising downtime and ensuring uninterrupted service delivery.
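
As a rough illustration of the dual-site principle (not Epsilon’s actual tooling), the sketch below shows how a simple health-check loop might shift traffic to a secondary site once the primary stops responding. The endpoints, timeout, and failure threshold are hypothetical assumptions.

```python
# Hypothetical sketch of dual-site failover logic: probe the primary site's
# health endpoint and fall back to the secondary after repeated failures.
# URLs and thresholds are illustrative, not a real deployment.
import urllib.request

PRIMARY = "https://primary.example.net/health"     # hypothetical endpoints
SECONDARY = "https://secondary.example.net/health"
FAILURE_THRESHOLD = 3  # consecutive failed probes before failing over

def is_healthy(url, timeout=2.0):
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def choose_active_site(consecutive_failures):
    """Route to the secondary only after the failure threshold is crossed."""
    return SECONDARY if consecutive_failures >= FAILURE_THRESHOLD else PRIMARY

failures = 0
for _ in range(5):                       # one probe per monitoring interval
    failures = 0 if is_healthy(PRIMARY) else failures + 1
    print("active site:", choose_active_site(failures))
```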

Colocation also supports business continuity by offering high-speed, low-latency connectivity to clouds, carriers, and partners. On top of this, it offers physical security and environmental controls that exceed most in-house capabilities, as well as power and cooling infrastructure designed for high-density, mission-critical workloads.

Beyond continuity, it brings cost-efficiency, operational simplicity, and access to a broader ecosystem of services. Colocation enables enterprises and service providers to focus on delivering value, rather than managing infrastructure.

MITIGATING RISK, MAXIMISING UPTIME

With increasingly complex IT environments and 24/7 availability becoming the new norm, having the right infrastructure in place is crucial. Colocation offers a practical, scalable way to support business continuity, reduce risk, and stay flexible in a changing landscape.

Epsilon offers colocation services across key hubs in London, Singapore, New York, and South Korea. Each facility provides 99.999% uptime and robust power backup, as well as direct access to our global network fabric of over 500 data centres, clouds, and internet exchanges via our NaaS platform, Infiny.
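
For context, ‘five nines’ is a tight budget: the quick calculation below shows how little downtime 99.999% availability actually permits over a year.

```python
# How much downtime does 99.999% availability allow per year?
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
allowed_fraction = 1 - 0.99999            # the 0.001% that may be down
downtime_minutes = minutes_per_year * allowed_fraction
print(f"Allowed downtime: {downtime_minutes:.2f} minutes per year")  # ~5.26
```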

By future-proofing network infrastructure, colocation can maximise uptime, improve customer experiences, and build new competitive advantages that can support long-term business goals.

Ultimately, colocation provides the stable foundation that organisations need to safeguard operations in an unpredictable world. Business continuity is no longer a backup plan; it’s a competitive differentiator.

DATA PROTECTION VS CYBER RESILIENCE: MASTERING BOTH IN A COMPLEX IT LANDSCAPE

From ransomware to regulatory risk, Sean Tilley, Senior Sales Director EMEA at 11:11 Systems, outlines why businesses must treat data protection and cyber resilience as inseparable priorities in the fight for operational continuity.

Today’s always-on, hyperconnected world requires CIOs (Chief Information Officers) to confront two equally important concepts: data protection and cyber resilience. As reliance on data to fuel analytics, engineering, marketing, and other key operations increases, the complexity surrounding IT infrastructure grows in tandem. Hybrid workforces, edge computing, cloud-native applications, and legacy systems add further complexity to the mix.

Meanwhile, the rise in sophisticated cyberattacks – compounded by escalating cyber insurance costs, pressure to drive down operational costs, and the need for 24/7 uptime – calls for stronger defences, as well as smarter, faster recovery strategies.

The question is no longer whether companies should prioritise data protection or cyber resilience, but rather how to integrate both effectively and sustainably.

MORE DATA, MORE POINTS OF FAILURE

For many organisations, IT systems span on-premises data centres, hyperscale cloud platforms, mobile endpoints, and edge devices. Each of these points presents its own set of risks and recovery complexities.

Add to this the proprietary nature of the data being handled and stored on these systems and the stakes grow higher. A single vulnerability can result in a major breach that jeopardises the organisation, impacts customer trust, and raises flags regarding regulatory compliance – resulting in hefty fines and other knock-on costs.

DISASTER RECOVERY IS NOT ENOUGH

Traditional disaster recovery (DR) approaches designed for catastrophic events and natural disasters are still necessary today, but companies must implement a more security-event-orientated approach on top of that.

Legacy approaches to disaster recovery are insufficient in an environment that is rife with cyberthreats; this is because they focus on infrastructure, neglecting application-level dependencies and validation processes. Furthermore, threat actors have moved beyond interrupting services and now target data to poison, encrypt, or exfiltrate it.

As such, cyber resilience needs more than a focus on recovery; it requires the ability to recover with data integrity intact and prevent the same vulnerabilities that caused the incident in the first place.

WHAT CYBER RESILIENCE LOOKS LIKE

Cyber resilience requires a proactive approach based on the assumption that breaches will occur. It also demands a shift in strategies, paying particular attention to the following:

EVENT-TRIGGERED RECOVERY

Recovery should not wait for human interventions or decision-making. Modern environments must integrate with intrusion detection systems (IDS), SIEM tools, and behavioural analytics to identify anomalies in data and initiate recovery processes automatically when they are detected. This necessitates a more stringent recovery process to ensure data cleanliness, which is especially important when customer or employee data is affected.
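
The sketch below illustrates the idea in simple Python: a SIEM-style alert that crosses a severity threshold automatically kicks off a recovery workflow rather than waiting for a human decision. The alert fields, threshold, and actions are hypothetical assumptions, not 11:11 Systems’ product.

```python
# Hypothetical sketch: event-triggered recovery. An anomaly alert from a
# SIEM/IDS feed triggers an automated recovery workflow once it crosses a
# severity threshold. Alert fields and actions are illustrative only.
SEVERITY_THRESHOLD = 8  # e.g. 0-10 scale from behavioural analytics

def initiate_recovery(alert):
    """Isolate the affected system and restore from the last clean snapshot."""
    print(f"Isolating {alert['asset']} and snapshotting evidence")
    print(f"Restoring {alert['asset']} from last validated clean backup")
    print("Running integrity checks before reconnecting to production")

def handle_alert(alert):
    """Decide automatically whether an alert warrants triggering recovery."""
    if alert["severity"] >= SEVERITY_THRESHOLD and alert["category"] == "data_tampering":
        initiate_recovery(alert)
    else:
        print(f"Logged for analyst review: {alert['rule']}")

handle_alert({"asset": "crm-db-01", "severity": 9,
              "category": "data_tampering", "rule": "mass-encryption-pattern"})
handle_alert({"asset": "hr-portal", "severity": 4,
              "category": "auth_anomaly", "rule": "unusual-login-hours"})
```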

RUNBOOKS OVER FAILOVER PLANS

Failover plans, which are common in disaster recovery, focus on restarting virtual machines (VMs) sequentially but lack comprehensive validation. Application-centric recovery runbooks, however, provide a step-by-step approach to help teams manage and operate technology infrastructure, applications, and services.

This is key to validating whether each service, dataset, and dependency works correctly in a staged and sequenced approach. It is also essential as businesses typically rely on numerous critical applications, requiring a more detailed and validated recovery process.
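
A minimal sketch of what an application-centric runbook might look like in code – ordered steps, each with its own validation before the next step runs – is shown below; the step names and checks are purely illustrative assumptions.

```python
# Hypothetical sketch of an application-centric recovery runbook: each step
# restores one dependency and is validated before the next step runs.
# Step names and checks are illustrative only.
RUNBOOK = [
    {"step": "restore database",        "validate": lambda: True},  # e.g. row counts match snapshot
    {"step": "start message broker",    "validate": lambda: True},  # e.g. queue depth readable
    {"step": "start application tier",  "validate": lambda: True},  # e.g. /health returns 200
    {"step": "smoke-test user journey", "validate": lambda: True},  # e.g. test order completes
]

def execute(runbook):
    """Run steps in order, stopping immediately if a validation fails."""
    for item in runbook:
        print("Executing:", item["step"])
        if not item["validate"]():
            print("Validation failed at:", item["step"], "- halting recovery")
            return False
    print("All steps validated - application recovered")
    return True

execute(RUNBOOK)
```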

ISOLATED CLEAN ROOMS FOR RECOVERY

Recovering in production environments can be risky. However, having isolated ‘clean room’ environments enables organisations to restore systems and validate their integrity without the threat of malware, compromised code, or other vulnerabilities. This process ensures that systems are secure before they are reintroduced into the on-premises environment or other appropriate locations.

RECOVERY PRIORITISATION BY BUSINESS IMPACT

Not all data and applications across an organisation are equal. Systems that are crucial for customer engagement or revenue generation – such as e-commerce platforms or engineering CAD systems, for example – may require near-instant failover capabilities to ensure operations are uninterrupted, even in the event of unexpected failures. Less critical workloads, however, may withstand several hours of downtime.

Thus, it is important to define recovery time objectives (RTOs) and recovery point objectives (RPOs) based on the specific needs of each system across the company.
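
The tiering logic can be captured very simply; the sketch below assigns hypothetical RTO/RPO targets by business impact, which is essentially what recovery orchestration needs in order to prioritise workloads. The tier values and workload names are illustrative, not prescriptive.

```python
# Hypothetical sketch: mapping workloads to recovery tiers by business impact.
# Tier values are illustrative, not prescriptive.
TIERS = {
    # tier: recovery time objective / recovery point objective
    "mission_critical":   {"rto_minutes": 5,    "rpo_minutes": 1},
    "business_important": {"rto_minutes": 240,  "rpo_minutes": 60},
    "low_impact":         {"rto_minutes": 1440, "rpo_minutes": 720},
}

WORKLOADS = {
    "e-commerce platform": "mission_critical",
    "engineering CAD system": "mission_critical",
    "internal wiki": "low_impact",
}

def recovery_order(workloads):
    """Sort workloads so the tightest RTOs are recovered first."""
    return sorted(workloads, key=lambda w: TIERS[workloads[w]]["rto_minutes"])

for name in recovery_order(WORKLOADS):
    tier = WORKLOADS[name]
    print(f"{name}: recover within {TIERS[tier]['rto_minutes']} min, "
          f"tolerate {TIERS[tier]['rpo_minutes']} min of data loss")
```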

THE MISSING LINK BETWEEN PLANNING AND EXECUTION

The aforementioned strategies are meaningless without regular testing, yet many organisations consider it a checkbox compliance exercise, overlooking the importance of this final step in the process. Regular testing provides the best defence against human error, assumptions, and silent system drift.

To maximise the benefit of a cyber resilience strategy, companies should conduct tests for frequently updated systems every month. Scenario-based tabletop exercises should take place quarterly, and full failovers in clean room environments should occur annually to assess real-world preparedness.

DON’T IGNORE THE FRONTLINES

The shift to hybrid work has extended the threat surface, as mobile devices, remote workstations, and IoT devices often hold sensitive or mission-critical data that is not monitored or secured. Their distributed, decentralised nature makes securing them challenging, particularly when they are located in remote areas.

Moreover, these devices may receive fewer software updates, leaving vulnerabilities open to exploitation. These factors make them an attractive target for threat actors.

Security teams cannot afford to overlook these points and must implement data security strategies that scale to the edge, tailoring recovery point objectives (RPOs) based on user roles and data sensitivity to ensure that critical data is prioritised for recovery – thereby minimising the impact on operations and maintaining cyber resiliency.

CYBER RESILIENCE: PREPARING FOR ‘WHEN’ NOT ‘IF’

Cyber resilience is now essential. With ransomware that can encrypt systems in minutes, the ability to recover quickly and effectively is a business imperative. Therefore, companies must develop an adaptive, layered strategy that evolves with emerging threats and aligns with their unique environment, infrastructure, and risk tolerance. To effectively prepare for the next threat, technology leaders must balance technical sophistication with operational discipline.

The best defence is not solely a hardened perimeter; it’s also having a recovery plan that works. Today, companies cannot afford to choose between data protection and cyber resilience; they must master both.

11:11 Systems, 1111systems.com

CRITICAL DATA NEEDS GOVERNANCE – ELSE IT CAN’T BE TRUSTED

Mary Hartwell, Global Practice Lead, Data Governance at Syniti, explains how governance transforms critical data from a liability into a trusted driver of business outcomes.

With data lakes, real-time pipelines, and AI models, it’s easy to get swept up in the latest data management tools and architectures. But no matter how modern your stack looks, the real question is: can you trust the data that’s driving your business decisions?

Let’s face it, not all data is created equal. Some is more important than others. That’s your critical data – the stuff that drives major business decisions, fuels outcomes, and keeps your organisation moving forward.

Think customer master records, financial statements, compliance reports, or supply chain details. When that data is wrong or inconsistent, it puts your entire business at risk.

That’s why data governance isn’t optional; it’s mandatory. Governance is what gives your critical data the structure needed to be accountable and trusted business wide.

Without it, your most important business data is just another liability instead of an asset.

CRITICAL DATA: THE FOUNDATION FOR BUSINESS OUTCOMES

Every strategic decision should be fuelled by quality data. Whether you’re expanding into a new market, adjusting pricing, or responding to regulators, it all depends on critical data and the quality of that data. It’s the heartbeat of your business: when it’s healthy, everyone moves to the same beat and everything flows smoothly.

But when it’s messy? Decisions stall. Teams argue over whose numbers are right. Customers get frustrated. Compliance audits turn into fire drills. Overall, time is wasted, money is wasted, and you’re not any closer to the solution.

Governance ends the arguments, making data the referee. It establishes standards for accuracy, consistency, and timeliness. That alignment is what turns critical data into a true driver of business outcomes and avoids confusion and frustration amongst your team.

CUTTING THROUGH CONFLICTS: ONE VERSION OF THE TRUTH

The truth is: without governance, critical data often exists in silos. Finance might define revenue one way, while Sales defines it another. Marketing may have one version of a customer record, while Operations has a slightly different one. That’s how you end up with bad or conflicting data, and conflicting data means conflicting decisions.

Governance doesn’t add rules; it removes debates. It delivers a single source of truth across the organisation. Everyone works off the same definitions, the same standards, the same trusted information. That alignment reduces risk and speeds up decisions business wide, so your team can spend time on business goals instead of wasting time reconciling differences and ironing out confusion.

COMPLIANCE AND RISK MANAGEMENT BUILT IN

Critical data is often tied directly to legal and regulatory requirements. Whether it’s financial reporting, patient records, or customer privacy, regulators expect data to be accurate, traceable, and secure.

Data governance provides exactly that:

• Audit trails that show where data came from and how it’s been used

• Accountability for who owns and manages critical data

• Access controls to ensure only the right people see sensitive information

That means you can show not only what decisions were made, but also what data they were based on and who had access at each step. The payoff is twofold: lower risk of compliance failures and stronger confidence that your data is defensible when challenged.
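
As a simple illustration of those three controls working together (a hypothetical record layout, not Syniti’s product), the sketch below shows an access check that is recorded in an audit trail alongside the dataset’s named owner.

```python
# Hypothetical sketch: access control plus audit trail for a critical dataset.
# Roles, owners and records are illustrative only.
from datetime import datetime, timezone

DATASETS = {
    "customer_master": {"owner": "data.steward@example.com",
                        "allowed_roles": {"finance", "data_steward"}},
}
AUDIT_LOG = []  # in practice this would be an append-only store

def access(dataset, user, role):
    """Grant or deny access and record the decision in the audit trail."""
    allowed = role in DATASETS[dataset]["allowed_roles"]
    AUDIT_LOG.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset,
        "owner": DATASETS[dataset]["owner"],
        "user": user,
        "role": role,
        "decision": "granted" if allowed else "denied",
    })
    return allowed

access("customer_master", "alice@example.com", "finance")
access("customer_master", "bob@example.com", "marketing")
for entry in AUDIT_LOG:
    print(entry)
```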

DATA QUALITY IS EVERYONE’S PROBLEM

Most traditional governance frameworks are built for IT, not for the people who actually use the data. They’re too rigid, too manual, and not aligned with business goals. As a result, businesses get lots of rules and lots of control, but not a lot of usability.

Business-aligned governance, including tools like Master Data Management (MDM) and automated matching, makes data governance meaningful for the business. By automatically reconciling key records like customers, products, and suppliers, MDM keeps your data lake harmonised and trusted with minimal effort.
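
To show what automated matching means in practice, here is a toy sketch using simple string similarity from Python’s standard library; real MDM tooling uses far richer matching rules and attributes, so treat this purely as an illustration with made-up supplier names and an assumed threshold.

```python
# Toy sketch of automated record matching: flag supplier records from two
# systems as probable duplicates when their names are sufficiently similar.
# Records and threshold are illustrative; real MDM uses far richer rules.
from difflib import SequenceMatcher

erp_suppliers = ["Acme Industrial Ltd", "Blue River Logistics"]
crm_suppliers = ["ACME Industrial Limited", "BlueRiver Logistics BV", "Nova Plastics"]

def similarity(a, b):
    """Crude similarity score between two normalised names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

MATCH_THRESHOLD = 0.75  # illustrative cut-off for a probable match

for erp_name in erp_suppliers:
    for crm_name in crm_suppliers:
        score = similarity(erp_name, crm_name)
        if score >= MATCH_THRESHOLD:
            print(f"Probable match ({score:.2f}): '{erp_name}' <-> '{crm_name}'")
```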

The results are faster insights, better decisions, and far less time spent cleaning up messy data. Businesses need to remember that data scientists didn’t sign up to be data janitors, so afford them the right tools and high-quality data, and let them build models and deliver real insights instead of wrangling spreadsheets.

GOVERNANCE AS A BUSINESS ENABLER

It’s easy to see governance as red tape, but the truth is the opposite. In fact, a shift in mindset around governance – by beginning to see it as a partner – means that policies and frameworks can actually empower the business to move faster and with confidence. With governance:

• Leaders get accurate, consistent insights to guide strategy.

• Teams are empowered to work towards shared goals.

• Compliance without complexity means risk is reduced, with audit trails and accountability built in.

• Business continuity is supported with quick access to trusted data.

• Sensitive information is safeguarded while still being accessible.

Critical data is the lifeblood of your business. It drives decisions, supports compliance, and keeps operations steady and focused on business objectives. However, if not managed properly, it can quickly become a nightmare for your business.

This isn’t about control; it’s about confidence. Governance is a guardrail, not a roadblock, providing the structure and alignment needed to keep critical data accurate, consistent, and secure. It doesn’t slow you down; it keeps you from running off course, transforming risk into opportunity. It transforms critical data from a vulnerability into a competitive advantage and provides a single source of truth your business can rely on.

So, if you want your critical data to be an asset and not a gamble, make governance the foundation of your strategy. Because when it comes to the information that drives your business forward, there’s no room for guesswork – only trust.

Syniti, syniti.com

SECURING AI INFRASTRUCTURE: WHY PHYSICAL RESILIENCE MATTERS

Michael Vallas, Global Technical Principal at Goldilock Secure, argues that data resilience cannot stop at software.

AI data centres are rapidly weaving themselves into the nervous system of the digital economy. They carry vast amounts of sensitive data and demand near-constant uptime, making them indispensable, but also increasingly exposed to risk.

By the end of 2025, 33% of global data centre capacity is expected to be dedicated specifically to AI workloads, highlighting how quickly systems are shifting to handle more powerful computing needs. As demand for AI grows, so too does the threat landscape surrounding both the physical and virtualised infrastructure that powers it. Cybercriminals know this, as do nation-state actors. In response, the UK government is tightening its regulatory stance on Critical National Infrastructure (CNI) and it’s now time for the data centre security conversation to evolve.

Current protection models are still heavily skewed towards software-first strategies, but relying solely on software to safeguard vital infrastructure is akin to trying to build a firewall in a burning forest. Almost every cybersecurity battle is still fought in code, with attackers exploiting bugs and misconfigurations, leaving defenders trapped in a cycle of software versus software.

It’s time to step back and look at the bigger picture: true resilience comes from extending protection to the physical layer. Data centres cannot be secured by fences or patches alone; control over the physical pathways that carry data in and out – and the networks that connect critical systems – is vital. By shifting the fight to this layer, we create a stronger, simpler line of defence that attackers cannot manipulate remotely.

THE CNI STATUS OF DATA CENTRES DEMANDS MORE

In September 2024, the UK officially designated data centres as part of its CNI, which means these environments must now meet a far higher bar for resilience at both the software and connectivity level.

The risks are not theoretical. A single breach in an AI-powered system – whether in food production, energy, transport, or revenue systems central to the economy – has the potential to damage national interests and disrupt everyday life. Attacks on training environments or inference pipelines could compromise intellectual property, corrupt critical outputs, or bring any number of essential services to a standstill.

In October 2024, McKinsey reported that global demand for data centre capacity is projected to grow by around 22% annually through 2030. As these facilities scale to meet AI demands, their interconnectedness becomes a rising cyber-liability, especially in colocation and hybrid environments where lateral movement can amplify a single point of failure. AI infrastructure needs to be trusted. That trust must extend beyond digital layers; it must be embedded in how these systems are physically structured, segmented, and defended.

THREATS GO DEEPER THAN SOFTWARE

Organisations face highly evasive ransomware, insider threats, and lateral movement within complex, multi-tenant or virtualised environments. The danger intensifies when software controls are misconfigured or compromised, or when detection lags behind. After all, spotting a breach often means searching for the faintest signals hidden within trillions of daily events – a task where delays can give attackers the upper hand.

Firewalls and endpoint security remain essential parts of a robust defence strategy, but like all software-based measures, they have inherent limits. In CNI-classified environments, even a brief compromise can disrupt national services or expose sensitive AI-driven data flows. That’s why physical cyber resilience has become a critical complement to existing tools.

THE CASE FOR HARDWARE-ENFORCED SEGMENTATION

Hardware-enforced segmentation offers a powerful, modern, and pragmatic solution for AI-centric and CNI-classified environments. When implemented correctly, physical isolation technologies can instantly and remotely disconnect individual systems or entire zones of compute, storage, or network at the physical layer – outside the reach of malware, misconfigurations, or insider tampering. If a threat can’t reach the system, it can’t compromise or leverage it.

This class of physical connection controller operates with no IP address, no hypervisor dependence, and no software footprint, making it invisible to threat actors operating within the network. Its out-of-band control channels ensure that even when the main data network is compromised, supervisors retain a separate and protected ability to isolate critical infrastructure entities and zones.

Importantly, this isolation doesn’t mean downtime. Critical systems can continue running safely in an offline state, maintaining core operations while remaining unreachable to attackers. With smart, on-demand disconnection, organisations can make deliberate, business-driven decisions about when to be connected and when to be offline.
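
Purely as a conceptual sketch – the real products are out-of-band hardware devices, not Python scripts – the logic of on-demand physical connection and isolation might be modelled as below. Every name and zone is a hypothetical assumption, not Goldilock’s product.

```python
# Conceptual sketch only: modelling hardware-enforced segmentation logic.
# Real controllers are physical, out-of-band devices; this just illustrates
# the decision flow of connecting and isolating zones on demand.
class PhysicalSegmentController:
    """Tracks which zones are physically connected to the network."""

    def __init__(self, zones):
        # Backup and DR zones start disconnected by default (offline state).
        self.connected = {zone: False for zone in zones}

    def connect(self, zone, reason):
        print(f"[OOB command] connect {zone}: {reason}")
        self.connected[zone] = True

    def isolate(self, zone, reason):
        # Isolation severs the data path; the zone keeps running offline.
        print(f"[OOB command] isolate {zone}: {reason}")
        self.connected[zone] = False

controller = PhysicalSegmentController(["tenant-A", "backup-vault", "dr-site"])
controller.connect("backup-vault", "scheduled backup window 02:00-03:00")
controller.isolate("backup-vault", "backup window closed")
controller.isolate("tenant-A", "suspected lateral movement detected")
print(controller.connected)
```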

HOW DATA CENTRES CAN DEPLOY PHYSICAL ISOLATION

Physical isolation proves especially effective in high-risk and highly regulated environments where traditional defences may fall short.

1. In colocation facilities, it helps prevent cross-tenant lateral movement by enabling automated segmentation during suspected breaches.

2. Within enterprise IT, critical administrative systems can be isolated during high-risk operations or when threats are detected, reducing the potential impact of compromise.

3. For cloud and backup environments, physical disconnection ensures that backups are only connected during controlled windows, significantly mitigating the risk of ransomware targeting these systems.

4. Disaster recovery (DR) sites benefit from remaining physically offline until the exact time windows when they are needed, ensuring a clean, uncompromised backup is always available to support service restoration after an attack.

5. In government, finance, and other critical sectors, hardware-based segmentation helps keep systems separate according to who needs access and how sensitive the data is. This strengthens compliance and makes the whole organisation more secure.

SECURING THE FUTURE OF AI INFRASTRUCTURE

As AI becomes more deeply embedded in how governments, businesses, and societies function, the reliability of the infrastructure behind it becomes mission critical. These data centres are powering autonomous processes, decision-making, national security analysis, healthcare diagnostics, and financial systems. In all of these scenarios, even short-lived breaches can have dramatically magnified consequences.

The NCSC is urging organisations to build in the capability to fully disconnect critical systems from networks – a move that not only strengthens defences today, but also signals that such measures could soon become a regulatory requirement. Incorporating this kind of physical safeguard adds a vital layer of defence, ensuring AI-driven services can operate with confidence in the face of evolving threats. Embedding such capabilities now will help organisations stay ahead of potential regulation and future-proof the systems that will shape our digital world.

Goldilock, goldilock.com

POST-QUANTUM READINESS: SECURING DATA, DOCUMENTS, AND CUSTOMER TRUST

Paul Holt, GVP of EMEA at DigiCert, explores why data centres must adopt post-quantum cryptography to safeguard trust, compliance, and resilience.

Data centres are critical infrastructure for the modern world. Like clean water systems, telecommunications, and power grids, they maintain, store, and protect the 21st century’s most valuable resource: data.

As such, they are a lynchpin on which countless businesses and other organisations rely. Failures here will be costly to both the data centre operators and those whose data they hold. While most data centres remain secure and compliant today, the emergence of quantum computing will test their resilience and readiness for the future.

PREPARING FOR THE QUANTUM ERA

Quantum computing is set to come in just a matter of years and, according to many, it will change the face of computing for both good and ill. Due to its use of qubits, quantum will be able to make complex calculations far faster than any previously existing form of computing. This heralds huge benefits for a range of fields. It also, however, poses one huge threat: quantum computers will be able to break most of the encryption on which the world and data centres rely to keep data safe, secret, and secure.

This will put data centres’ precious holdings immediately at risk, especially considering that threat actors which could hold quantum capabilities will likely pursue high-value targets like these. A quantum-compromised data centre, then, is a serious risk for everyone involved in it, and potentially the broader society around it.

The US-based National Institute of Standards and Technology (NIST) has already begun releasing post-quantum cryptographic (PQC) algorithms for the wider world to use, including CRYSTALS-Kyber for key exchange and CRYSTALS-Dilithium and Falcon for signatures.

The task ahead for many data centres is how to enhance cryptographic management and build in crypto-agility so that PQC algorithms can be swapped in and out as quantum threats evolve. However, that’s far easier said than done. For many organisations, it will require a serious amount of infrastructural change before they’re ready to do that.
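
One way to picture crypto-agility is an indirection layer that lets the signing or key-exchange algorithm be swapped by configuration rather than by rewriting applications. The sketch below is deliberately generic: the algorithm names reflect real NIST selections, but the provider functions are placeholders, not a real cryptographic library, and the policy mechanism is an assumption for illustration only.

```python
# Sketch of crypto-agility as an indirection layer: applications ask for
# "the current signature algorithm" and central policy decides which one.
# The provider functions below are placeholders, NOT real cryptography -
# a vetted PQC library would be used in practice.
SIGNATURE_PROVIDERS = {
    "rsa-3072":   lambda data: b"classical-signature-placeholder",
    "ml-dsa-65":  lambda data: b"pqc-dilithium-signature-placeholder",
    "falcon-512": lambda data: b"pqc-falcon-signature-placeholder",
}

POLICY = {"signature_algorithm": "rsa-3072"}  # central, auditable setting

def sign(data: bytes) -> tuple[str, bytes]:
    """Sign using whichever algorithm current policy names."""
    algorithm = POLICY["signature_algorithm"]
    return algorithm, SIGNATURE_PROVIDERS[algorithm](data)

print(sign(b"certificate-request"))
# Migrating to post-quantum signatures then becomes a policy change,
# not an application rewrite:
POLICY["signature_algorithm"] = "ml-dsa-65"
print(sign(b"certificate-request"))
```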

THE COMPLIANCE GAP

When quantum does arrive, data centres will likely be thrown out of compliance with the data protection regulations with which they must currently comply. Much data protection regulation treats data centres as processors, obliging them to put adequate cryptographic protections in place to defend the data they’re charged with.

Compliance frameworks often require cryptographic protections that underpin content trust – the assurance that data remains accurate, secure, and reliable – and extend to document trust, ensuring the authenticity and integrity of records, certificates, and other critical documents. Document trust is a significant business enabler and a key driver of digital transformation projects, with returns measured in time, cost, and efficiency.

In Europe, the eIDAS regulation – with eIDs and digital wallets giving citizens and businesses the ability to digitally sign documents with the right quality of digital identity – is fast becoming second nature. In addition, organisations are often expected by data protection and security regulations (such as GDPR) to implement safeguards such as:

• The pseudonymisation and encryption of personal data

• The ability to ensure the ongoing confidentiality, integrity, availability, and resilience of processing systems and services

Cryptography can play a central role in meeting these requirements, and similar expectations appear across national, international, and sector-specific frameworks. Failure to align with such standards can lead to audits, financial penalties, and reputational harm. Maintaining the integrity of content and the authenticity of documents is therefore vital for regulatory compliance and sustaining customer trust.

WHY DATA SOVEREIGNTY MATTERS

One other key consideration here is the sovereignty of data, which many regulations focus on. Data sovereignty demands that data, including private keys, certificate data, and personal information, be processed (and often stored) within the same jurisdiction. That data can transit through other jurisdictions, but it can only ever be decrypted in the regulated one. If a malicious party were to steal that data and decrypt it in a different jurisdiction, then that would likely pose a compliance threat in itself. Quantum will allow malicious parties to do just that. In fact, “harvest now, decrypt later” attacks are already happening: threat actors steal tranches of data they cannot currently decrypt, banking on being able to unlock it once they get their hands on quantum capabilities.

As quantum computing draws near, regulators will increasingly expect data centres and other processors to adopt post-quantum cryptography or risk being deemed negligent. Requirements will likely be updated to explicitly include quantum threats in risk assessments and resilience strategies. Data centres responsible for the data of multiple organisations will need to begin preparing now, ensuring compliance, safeguarding sensitive information, and maintaining content trust.

ACT NOW: FUTURE-PROOF YOUR DATA CENTRE

Post-quantum transformation is not just a defensive measure; it’s a strategic opportunity. Data centres that implement PQC will be able to offer customers stronger assurances of security, compliance, and trust in both data and documents.

Trust is the foundation of the data centre business; maintaining it in a rapidly evolving threat landscape requires proactive adaptation to the quantum era. By securing content and document trust, data centres can protect their customers, preserve regulatory compliance, and reinforce their reputation as reliable stewards of critical digital assets. Those who act now will be best positioned to thrive in a post-quantum world.

DigiCert, digicert.com

QUANTUM COMPUTING AND REAL ESTATE: A NEW BLUEPRINT FOR DATA INFRASTRUCTURE

Daniel Thorpe, EMEA Data Centre Research Lead at JLL, examines how quantum’s rise is creating a new class of real estate – from specialised cryogenic labs to hybrid facilities blending AI, classical, and quantum processing.

The technological landscape is in constant flux, with each new wave bringing significant changes across industries. Just as artificial intelligence (AI) has influenced the data centre sector, quantum computing is poised to trigger a notable development in real estate. This rapidly advancing field is not merely a distant scientific curiosity; it is a practical driver that will impact property demands, investment strategies, and the infrastructure of the digital world.

At its core, quantum computing leverages the principles of quantum mechanics – superposition and entanglement – to solve problems intractable for even the most powerful classical supercomputers. This potential, demonstrated by quantum machines completing calculations in minutes that would take traditional systems millions of years, has ignited a fervent race for commercial viability, with many experts now believing a breakthrough could occur as early as 2030. This accelerating progress is mirrored by a dramatic surge in investment, signalling a critical inflection point for the real estate sector.

EMERGING INVESTMENTS

Following a trajectory strikingly similar to AI’s, albeit lagging by about a decade, quantum investments are projected to reach an astounding $10 billion (£7.3 billion) annually by 2027 and $20 billion (£14.7 billion) by 2030.

A significant “quantum advantage breakthrough” – where a quantum computer demonstrably outperforms classical systems for a useful problem – could trigger an additional $50 billion (£36.9 billion) surge, akin to the ChatGPT effect on AI funding.

This acceleration is further fuelled by recent breakthroughs in quantum processing units (QPUs) from tech giants like Google, IBM, Microsoft, and Amazon, which have significantly advanced error correction and qubit counts. This unprecedented influx of capital isn’t just fuelling research; it’s driving the demand for specialised physical infrastructure, creating a fundamentally new asset class.

SPACE TO COMPUTE

Unlike the relatively standardised requirements of traditional data centres, quantum computing demands environments of extreme precision and control. Quantum bits, or qubits, are incredibly fragile, susceptible to even the slightest environmental interference, necessitating purpose-built research facilities. These facilities require sophisticated cryogenic cooling to operate at low temperatures, robust electromagnetic shielding to protect qubits from noise, and specialised foundations for vibration isolation. Such unique demands create a distinct real estate niche – far removed from the typical “white box” data centre – requiring specialised expertise in design and construction.

QUANTUM IN THE REAL WORLD

The initial phase of quantum real estate development is concentrating in specific, talent-rich “quantum hubs” across the globe. These ecosystems, often located near leading academic institutions and national research centres, are becoming magnets for investment and talent due to their strong research programmes, existing quantum facilities, robust government support, and a burgeoning private sector.

A prime example of such a hub, and indeed a blueprint for future developments, is the National Quantum Computing Centre (NQCC) at Harwell, near Oxford in the UK. This landmark facility, backed by £93 million in funding, perfectly embodies the principles of a thriving quantum ecosystem. It’s strategically located to leverage the academic prowess of Oxford and its mission is to accelerate the transfer of quantum technology from laboratory to industry.

This clustering phenomenon means that, for the foreseeable future, the majority of quantum investment will gravitate towards these established or emerging centres, making understanding their dynamics crucial for real estate professionals.

LOOKING TO THE FUTURE

While the immediate focus is on dedicated R&D facilities, the long-term trajectory of quantum computing points towards integration with existing digital infrastructure through “Quantum-as-a-Service” (QaaS).

As quantum technology matures and costs decline, QaaS will become a prevalent pathway to commercial adoption, allowing organisations to access quantum capabilities via the cloud. This will inevitably lead to the emergence of “hybrid data centres,” seamlessly combining classical computing, artificial intelligence, and quantum processing units under one roof, optimising data flow and computational efficiency.

This shift towards hybrid facilities – where quantum components operate within specialised, isolated environments alongside classical systems – presents a notable opportunity for the data centre market.

The largest cloud providers, who are already developing their own quantum chips, are strategically positioned to integrate these capabilities into their vast data centre networks, further accelerating this hybrid future. This evolution means that while initial quantum real estate is currently highly specialised, the eventual widespread adoption will influence the nature of data centre design and operation.

For real estate groups, this presents an opportunity to develop an early advantage by understanding the unique technical and environmental requirements, building relationships within emerging quantum ecosystems, and strategically investing in purpose-built facilities.

Ultimately, this isn’t just about providing space; it’s about supporting the next phase of technological development. Just as artificial intelligence has reshaped the digital landscape, quantum computing promises to offer solutions to complex problems across various sectors. For the real estate sector, understanding and engaging with the unique demands of quantum real estate today will be important for shaping and capitalising on the opportunities of tomorrow.

JLL, jll.com

WHY QUANTUM BELONGS IN THE DATA CENTRE CONVERSATION

Owen Thomas, Senior Partner at Red Oak Consulting, explains why quantum computing won’t replace classical systems, but will integrate with them instead.

Quantum computing is moving beyond the realm of academic research and entering real-world experimentation. While large-scale commercial advantage remains some distance away, early-stage integration of quantum capabilities into classical high-performance computing (HPC) environments is already taking shape.

For the data centre sector, this is a development that cannot be ignored. Quantum systems are not expected to replace traditional HPC architectures. Instead, they will complement them by enhancing specific subcomponents of complex workflows. The ability to integrate classical CPUs, modern GPUs, and quantum processors (QPUs) into one cohesive computing environment represents a significant shift. Data centres will be central to delivering and managing this hybrid model.

Rather than running entire applications on quantum processors, the current direction is to embed quantum into hybrid workflows. In these models, most of the computational load runs on classical infrastructure, while selected sub-tasks are offloaded to quantum systems. These typically include problems related to optimisation, quantum chemistry, or simulation of quantum systems.
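
As a rough illustration of this division of labour, the sketch below shows a classical pipeline that hands a single hard sub-task to a remotely hosted quantum backend. It is a minimal Python sketch under assumed interfaces: submit_to_qpu is a hypothetical stand-in for a cloud QaaS call, not any specific vendor API.

# Minimal sketch of a hybrid classical/quantum workflow (illustrative only).
# 'submit_to_qpu' stands in for a Quantum-as-a-Service call; it is a hypothetical
# placeholder, not a real vendor API.

def submit_to_qpu(problem):
    """Placeholder for a cloud QaaS call returning candidate solutions."""
    # In practice this would serialise the sub-problem, queue it on a remote QPU,
    # and poll for results. Here it simply returns a dummy answer.
    return {"candidate": [0, 1, 0, 1], "energy": -3.2}

def classical_preprocess(data):
    # Bulk of the workload: data preparation, constraint building, etc. (CPUs/GPUs)
    return {"variables": len(data), "constraints": data}

def hybrid_optimise(data):
    model = classical_preprocess(data)       # runs on classical infrastructure
    quantum_result = submit_to_qpu(model)    # small, hard sub-task offloaded to a QPU
    # Classical post-processing validates and refines the quantum candidate.
    return {"solution": quantum_result["candidate"], "score": quantum_result["energy"]}

print(hybrid_optimise([1, 2, 3, 4]))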

Cloud service providers (CSPs) have emerged as the primary enablers of this model. Through Quantum-as-a-Service (QaaS) offerings, users can access quantum systems via the cloud and integrate them with their existing HPC workloads. This delivery model removes the need for enterprises to invest in specialist, on-premises infrastructure, including cryogenics, electromagnetic shielding, and vibration control.

By spreading costs across tenants, CSPs make quantum experimentation accessible to a wider range of organisations. For most, this is the only viable approach given the significant investment quantum hardware still demands.

DATA CENTRES AS PLATFORMS FOR HYBRID INTEGRATION

Quantum computing introduces new demands for data centre design and operation. Although quantum systems will remain physically separated from most enterprise environments, the data centre is becoming the platform where classical and quantum computing converge.

Data centres that support hybrid workflows must be capable of managing diverse processor types, handling high-speed interconnects, and providing the scheduling intelligence required to allocate tasks efficiently. The goal is not to colocate quantum systems within standard data halls, but to ensure classical infrastructure can interact seamlessly with quantum systems hosted remotely.

A clear example of this approach can be seen in emerging software platforms, including those developed by vendors such as NVIDIA, which aim to unify classical and quantum development environments. These tools allow developers to manage CPUs, GPUs, and quantum processors within a single workflow without requiring the systems to be physically colocated. This move towards software-defined orchestration highlights the evolving role of the data centre as an intelligent hub for managing increasingly complex, hybrid workloads, rather than simply serving as a location for hardware.

THE IMPORTANCE OF MIDDLEWARE AND SCHEDULING

The successful integration of quantum into existing workflows depends on middleware. Intelligent schedulers, compilers, and orchestration tools are required to manage jobs across multiple processor types. These systems need to decide which parts of a workload should run on classical HPC, which benefit from GPU acceleration, and which are appropriate for quantum.

Middleware also manages performance optimisation, cost control, and resource availability. Without these layers, managing hybrid workloads at scale would not be practical.
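
To make the scheduling idea concrete, the following toy Python sketch shows the kind of placement decision such middleware might make. The rules and task fields are invented for illustration; real orchestration layers use far richer cost models, queue state, and hardware telemetry.

# Toy illustration of the placement decision a hybrid scheduler might make.
# The rules and task fields below are invented for illustration only.

def choose_backend(task):
    if task.get("type") in {"optimisation", "quantum_chemistry"} and task.get("qpu_budget", 0) > 0:
        return "QPU"   # small, hard sub-problems worth the higher cost of quantum
    if task.get("parallelism", 1) >= 1000:
        return "GPU"   # massively parallel kernels benefit from acceleration
    return "CPU"       # everything else stays on classical HPC

jobs = [
    {"name": "molecule_ground_state", "type": "quantum_chemistry", "qpu_budget": 100},
    {"name": "matrix_multiply", "type": "linear_algebra", "parallelism": 4096},
    {"name": "post_processing", "type": "io", "parallelism": 8},
]

for job in jobs:
    print(job["name"], "->", choose_backend(job))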

Currently, many organisations are still working with siloed systems. However, the move towards unified platforms is accelerating. Open-source quantum software development kits, CSP-native orchestration tools, and vendor-specific APIs are becoming more mature and interoperable. Data centres must ensure they are equipped to host and support these tools within their environments.

EARLY USE CASES AND CURRENT LIMITATIONS

Several industries are already piloting hybrid quantum workloads. These include life sciences, financial services, and logistics. Use cases range from molecular simulation and materials discovery to quantum-assisted optimisation of delivery routes and investment portfolios. However, quantum is still expensive to operate. Even within CSP environments, job costs remain higher than classical equivalents. As a result, most current applications are exploratory. Production-scale adoption will depend on improvements in quantum stability, error correction, and software reliability.

That said, for specific problems where quantum can provide a computational edge, the performance benefits may justify the higher cost. These are often tasks that are intractable or inefficient on classical systems alone.

FUTURE-PROOFING DATA CENTRES FOR HYBRID HPC

Quantum computing, combined with classical and GPU-based HPC, is reshaping how workloads are distributed and processed. For data centre operators, this presents both a technical and strategic challenge.

Facilities must evolve to support hybrid workflows. This includes designing for high-speed connectivity, optimising energy usage, and enabling software-defined orchestration. Physical infrastructure may not need to host quantum hardware directly, but it must support the logical integration of remote quantum systems within local workflows.

At Red Oak Consulting, we work with organisations across research, government, and enterprise to evaluate how quantum will influence their long-term compute strategies. Our focus is on making these transitions viable, both from a technology standpoint and within existing operational frameworks. That includes assessing whether current data centre environments are prepared for hybrid HPC integration and what steps may be needed to improve readiness.

POWERING THE SHIFT

Quantum computing is becoming part of a wider move towards heterogeneous computing, where different processor types work in concert to solve increasingly complex problems. In this new landscape, data centres are more than infrastructure; they are the coordination layer that makes hybrid computing possible.

Operators that recognise this shift and adapt their environments accordingly will be best placed to support the next generation of workloads. That means enabling flexible networking, adopting middleware that bridges classical and quantum systems, and rethinking workload orchestration from the ground up.

Quantum won’t succeed in isolation; its future depends on seamless collaboration between classical compute, emerging quantum capabilities, and the data centres that bring them together. Those building for that convergence today are laying the foundation for tomorrow’s breakthroughs.

Red Oak Consulting, redoakconsulting.co.uk

HIGH-QUALITY CONNECTIVITY FOR THE AGE OF DIGITAL AND AI

David De Craemer, CEO of Aginode, on how tailored copper and fibre solutions are powering data centres, smart industries, and global digital transformation.

As AI adoption accelerates, data volumes are growing exponentially, placing new demands on network infrastructure in complex environments such as data centres, cloud platforms, and smart factories. Real-time data processing and seamless cloud connectivity are key in our hyper-connected world. Connectivity has become more than cabling; it’s a strategic asset shaping overall system performance and operational resilience.

CUSTOMISED SOLUTIONS FOR INDUSTRY-SPECIFIC REQUIREMENTS

Aginode delivers high-performance connectivity solutions engineered for long-distance, high-speed data transmission, enabling resilience and efficiency. As different industries require tailored approaches to connectivity, Aginode rejects one-size-fits-all solutions, instead designing fully customised systems to meet sector-specific requirements.

In data centres, compact, high-speed connections between server racks are essential; in hospitals, robust, inter-floor, and inter-building connectivity is critical; and in airports, long-distance, flexible configurations are key. Aginode addresses these needs with ultra-reliable passive and active network architectures, developed in close collaboration with industry stakeholders.

GLOBAL BLUEPRINT: STRATEGIC EXPANSION IN KOREA

Driven by fast AI adoption, government-backed digital transformation, and smart infrastructure investment, Korea is a key hub for Aginode. A flagship example is Samsung Electronics, which relies on our Cat 6A cabling at its Pyeongtaek semiconductor facility and Giheung R&D centre.

At the Hwaseong HPC Centre, dedicated to AI and semiconductor computing, Aginode supplied high-density optical solutions optimised for performance, installation speed, and space efficiency. The partnership led to the co-development of three new products and four enhancements within the LANmark ENSPACE line. This collaboration exemplifies our global blueprint: targeting high-growth regions, supporting key verticals, and delivering customised, reliable connectivity.

Our portfolio increasingly emphasises connectorised, fibre-based, and customised solutions aligned with market trends and targeted verticals such as high-tech manufacturing, data centres, and the shift towards connected homes in the telecom space.

OUR REACH AND LEGACY

Headquartered in Paris, Aginode draws on over 100 years of network infrastructure expertise. With roots in Alcatel and Nexans – one of Europe’s top three cable manufacturers – Aginode became an independent company in July 2023, sharpening its focus on data connectivity solutions. Today, we support global digital transformation with copper and fibre-optic technologies for data centres, smart buildings, and telecom providers.

We combine sector expertise, innovation, technical precision, and a customer-first mindset guided by our values: ‘One Team, Drive, and Care’. Sustainability is central to our ‘Care’ value; rated by EcoVadis, we’re committed to lowering our CO2 footprint through regionalised manufacturing and responsible operations.

In Morocco, our plant has been expanded to offer tailored connectorisation close to customers. In December 2024, we inaugurated a state-of-the-art facility in Shanghai, which will serve as a global engineering hub and innovation lab for LAN and data centre solutions. Both plants focus on high-end copper LAN solutions, pre-term data centre fibre connectivity, and advanced solutions for other demanding verticals. With global expertise and strong local execution, Aginode is positioned to play a defining role in the world’s broadband ecosystem.

Aginode, aginode.net

Our core principles:

• Value creation for partners

• Excellence in our execution

• Unwavering adherence to our values

Strategic Growth Roadmap to 2030: Structured around five key streams

• Targeting high-growth, innovation-driven regions

• Pursuing a vertical market approach

• Building scalable, ultra-reliable connectivity solutions

• Deepening integration into the broadband ecosystem

• Embedding agility across all levels of the organisation

DTX LONDON 2025: 20 YEARS OF TRANSFORMATION

DTX London returns to ExCeL on 1–2 October 2025, marking its 20th anniversary with the theme, ‘Innovation with integrity; driving value, delivering purpose’.

DTX London has been reimagined to unite people, technologies, and strategies that drive meaningful, long-term business change. Every stage will tackle the real-world challenges facing organisations today, with an educational programme that highlights the critical role of people in transformation.

Headlining day one is Olympic Champion Mo Farah, who will discuss how business leaders can turn aspirations into reality. Other standout speakers include Jason Hardy, CTO for AI at Hitachi Vantara, who will explore why AI’s future depends on data integrity, and Alan Reed, Head of Platform Innovation at bet365, who will address why so many AI initiatives fail.

The Main Stage will feature leaders from the new DTX Advisory Board – representing Segro, Apollo, Santander, Vanquis, the NFL, and RSA – who will share first-hand insights on delivering projects designed for lasting success. Panels will also tackle themes from navigating geopolitical pressures to building the workforce of the future, with senior voices from Vanquis, Lloyds Banking Group, and the University of East London.

Cyber will be embedded across the agenda, reflecting its role as a business-wide priority.

The Holistic Cyber Strategies Stage will host experts from Deliveroo, Citi, and the Information Security Forum, discussing how to approach AI responsibly and how to prepare for quantum threats.

Attendees can also join a fireside chat with Bryan Glick, Editor of Computer Weekly, in conversation with Sharon Gunn, CEO of BCS, on lessons from the Post Office scandal and the importance of people in technology.

Colocated with UCX, DTX London 2025 is designed for anyone shaping technology strategy, offering the latest innovations, expert insights, and networking opportunities – all under one roof – and free passes are available via the event website.

DTX London, dtxevents.io


ENERGY LIVE 2025: THE FULL ENERGY ECOSYSTEM UNDER ONE ROOF

Reuters’ flagship event returns to Houston on 9–10 December, uniting more than 3,000 executives to tackle power demand, grid modernisation, and the future of energy.

Energy LIVE 2025 is the essential meeting point for leaders across the US energy landscape. Taking place in Houston over two packed days (9–10 December), the event convenes over 3,000 senior executives from oil & gas, power generation, utilities, renewables, nuclear, and digital infrastructure.

With senior speakers from Google, EDP, Commonwealth LNG, PJM, SPP, Southern Company, BP, Murphy Oil Corporation, NVIDIA, OXY, and more, the event delivers direct access to the people shaping the future of energy.

Key themes include:

• Surging power demand and data centre growth

• Grid modernisation, AI, and infrastructure

• Future of nuclear energy and SMRs

• Innovation, investment, and operational efficiency

• LNG markets and energy security

Built on Reuters’ deep industry research, Energy LIVE offers a strategic, curated experience, with three interconnected stages, immersive tech showcases, and structured networking formats. From executive roundtables to live innovation challenges, every element is designed to spark collaboration and unlock commercial opportunity.

As the energy sector faces complex disruption, no single company or sub-sector can navigate it alone. Energy LIVE is where the full value chain comes together to find clarity, forge connections, and lead with confidence.

To find out more, head to the event website and download the brochure.

Reuters Energy LIVE, reutersevents.com/energy-live


DATACENTRES IRELAND RETURNS TO DUBLIN

The team at DataCentres Ireland gives a preview of what’s on show this 19–20 November at the RDS in Dublin.

Now in its 15th year, DataCentres Ireland returns to the RDS on Wednesday 19 and Thursday 20 November, uniting the Irish data centre sector under one roof.

Whether you’re designing, building, operating, or investing in infrastructure, this is the must-attend event for anyone driving innovation and growth in the sector – and registrations are now open!

A CONFERENCE THAT DELIVERS REAL INSIGHT

The multi-streamed conference programme dives into the trends, challenges, and innovations shaping the future of data centres in Ireland and beyond:

• Strategic stream: Explore the big picture — from regulatory impacts to sustainability, AI, energy strategy, and Ireland’s role in the global digital landscape.

• Operational stream: Tackle the day-to-day — discovering practical solutions to optimise your infrastructure for efficiency, resilience, and security.

To register for your free ticket, follow this link: https://datacentres-ireland.registrationdesk.ie/

CALL FOR PAPERS: GET INVOLVED!

Do you have a compelling case study, breakthrough project, or unique perspective to share with industry leaders and decision-makers?

The event organisers are building their 2025 programme now, so submit your conference paper idea and be part of the conversation: datacentres-ireland.com/conference-papersubmission-form-4/

SHOWCASE YOUR BUSINESS TO A QUALITY AUDIENCE

DataCentres Ireland 2025 isn’t just about learning; it’s about connecting. With over 120 companies exhibiting last year and thousands of high-quality attendees, this is your chance to:

• Build your pipeline

• Meet key decision-makers

• Launch new products

• Strengthen relationships

• Grow your brand in Ireland’s data centre sector

To secure your space, contact the team on +44 (0)1892 570513 or visit datacentres-ireland.com.

Don’t miss Ireland’s biggest data centre event of the year.

DataCentres Ireland, datacentres-ireland.com

SAVE THE DATE

RDS, Dublin: 19-20 Nov 2025

Infrastructure • Services • Solutions

DataCentres Ireland combines a dedicated exhibition and multi-streamed conference to address every aspect of planning, designing, and operating your data centre, server/comms room, and digital storage solution – whether internally, outsourced, or in the cloud.

DataCentres Ireland is the largest and most complete event in the country. It is where you will meet the key decision-makers as well as those directly involved in day-to-day operations.

EVENT HIGHLIGHTS INCLUDE:

Multi Stream Conference

25 Hours of Conference Content

International & Local Experts

60+ Speakers & Panellists

100+ Exhibitors

Networking Reception

Entry to ALL aspects of DataCentres Ireland is FREE

• Market Overview

• Power Sessions

• Connectivity

• Regional Developments

• Heat Networks and the Data Centre

• Renewable Energy

• Standby Generation

• Updating Legacy Data Centres

COOLING AND CLIMATE MANAGEMENT FOR DATA CENTRES

Hans Obermillacher, Manager Business Development at Panduit EMEA, looks at how aisle containment and active cooling systems are evolving to keep pace with the rise of AI-driven rack densities.

As digital transformation accelerates, the demand for high-density computing environments is rising rapidly. AI workloads, HPC clusters, and GPU-rich server pods are pushing rack power densities from the traditional 2–5kW to 15–30kW – and, in some cases, beyond 50kW. A typical AI rack with four Nvidia DGX H100 servers, for example, draws around 40.8kW, generating heat that must be effectively extracted and removed. These thermal challenges have brought liquid cooling into sharp focus for specialised, ultra-dense deployments. In proportional terms, however, the data centre market continues to be dominated by aisle-containment cooling. While liquid cooling garners significant industry attention, the overwhelming majority of current deployments still rely on scalable, containment-based approaches for climate control. This underscores the fact that, although liquid cooling is expanding, it remains a niche complement to the far larger footprint of containment cooling.
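
To put those densities in context, a back-of-envelope sketch using the standard sensible-heat relation shows the airflow an air-cooled 40.8kW rack would demand. The figures are illustrative assumptions (typical air properties and a 12°C supply-to-return temperature rise), not a Panduit specification.

# Back-of-envelope airflow estimate for a 40.8kW rack.
# Assumptions (illustrative): air density ~1.2 kg/m3, specific heat ~1.005 kJ/(kg.K),
# 12 K temperature rise between supply and return air.

rack_load_kw = 40.8          # e.g. four ~10.2kW AI servers
delta_t_k = 12.0             # supply-to-return temperature rise
air_density = 1.2            # kg/m3
cp_air = 1.005               # kJ/(kg.K)

mass_flow = rack_load_kw / (cp_air * delta_t_k)    # kg/s of air required
volume_flow_m3h = mass_flow / air_density * 3600   # convert to m3/h

print(f"Required airflow: ~{volume_flow_m3h:,.0f} m3/h")   # roughly 10,000 m3/h

Roughly 10,000m³ of air per hour for a single rack helps explain why strict containment discipline, and ultimately liquid cooling, becomes hard to avoid at these densities.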

According to Barbour ABI research, European data centre power demand is projected to grow by 70% between 2024 and 2030, with the UK alone planning around 95 new projects in the next 12 months. Climate management must balance innovation with scale. Aisle containment remains the backbone of data centre compute thermal strategies today, while liquid cooling is emerging as a targeted solution for the highest-density AI workloads.

For the UK data centre market, climate management must continue to address three core goals:

1. Thermal Efficiency – maximising cooling with minimal energy use

2. Reliability – eliminating hotspots that risk server performance

3. Scalability – adapting seamlessly to higher rack densities and future demand

Data centre infrastructure providers can address these three requirements through a comprehensive portfolio of physical infrastructure solutions specifically engineered for thermal optimisation. These solutions span the entire cooling ecosystem, including Passive Cooling Optimisation (Containment Systems), Active Cooling Systems (In-Row Cooling, Rear Door Heat Exchangers), Environmental Monitoring and Control, as well as Cable and Pathway Management for Improved Airflow. Each element contributes to a unified thermal strategy that maximises efficiency and uptime.

PASSIVE COOLING: INTELLIGENT AIRFLOW MANAGEMENT

A core approach is passive cooling – usually an airflow containment system designed to keep cool air and hot air in separate zones, eliminating hot air recirculation and cold air bypass – two major sources of cooling inefficiency in legacy and modern data centres alike.

A variety of manufacturers offer modular cold aisle containment (CAC) and hot aisle containment (HAC) systems, which are ideal for retrofits or new builds. These systems can reduce energy costs by as much as 30% and support rack densities of up to 30kW per cabinet; operators should ensure that supplied systems comply with ASHRAE TC9.9 guidelines. A Tier III data centre in Manchester using Panduit’s CAC solution increased cooling efficiency by 25%, while also freeing up floor space for additional server pods.

ACTIVE COOLING FOR HIGH-DENSITY APPLICATIONS

Passive methods are often insufficient for modern, compute-intensive deployments. For rack densities exceeding 30kW, or for tightly packed GPU-based superpods, active cooling becomes critical.

Rear door heat exchangers (RDHx), for example, are compatible and simple to deploy within a variety of cabinet frames, offering up to 75kW of heat removal per rack. The latest designs provide 100% heat removal at rack level and, depending on the scale of the deployment, can eliminate server room cooling infrastructure, which reduces capital expenditure and ongoing operational costs.

Operators should ensure the chosen system integrates with the building’s chilled water systems, which is essential to RDHx operation. A fault-tolerant design, typically an N+1-based redundant solution, can provide near-silent, fan-assisted, or passive operation depending on the operator’s requirements.

In addition, in-row cooling units reduce air travel distance, increasing cooling efficiency. They are modular, scalable, and can be used with variable-speed fans and intelligent controls. These solutions target cool air to individual server racks directly, rather than the entire room. This improves efficiency by cooling the source of the heat generation in the server and extracting the heat while preventing the mixing of hot and cold air.

The key benefits of in-row cooling infrastructure include a reduction of overall cooling energy consumption, improved server cooling (which prevents component overheating, maximising performance), and flexible deployment (which can adapt to various rack configurations).

EFFECTIVE MANAGEMENT IS KEY

Effective cooling relies on real-time environmental monitoring systems that allow operators to measure temperature and humidity at the top, middle, and bottom of each rack. They can also monitor differential air pressure, leak detection, and secure door access. These solutions allow data centre managers to automate cooling adjustments, reduce energy consumption, and improve SLA adherence.
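
A simplified sketch of the kind of automated adjustment such monitoring enables is shown below. The 18–27°C band follows the commonly cited ASHRAE recommended inlet range; the sensor readings and control actions are hypothetical placeholders rather than any particular DCIM product.

# Simplified illustration of automated cooling adjustment from rack inlet sensors.
# Thresholds follow the commonly cited ASHRAE recommended inlet range (~18-27 C);
# the sensor and control interfaces here are hypothetical.

INLET_MAX_C = 27.0
INLET_MIN_C = 18.0

def adjust_cooling(rack_id, readings):
    hottest = max(readings.values())   # top/middle/bottom inlet temperatures
    if hottest > INLET_MAX_C:
        return f"{rack_id}: increase fan speed / lower setpoint (hotspot {hottest:.1f} C)"
    if hottest < INLET_MIN_C:
        return f"{rack_id}: reduce cooling to save energy ({hottest:.1f} C)"
    return f"{rack_id}: within range ({hottest:.1f} C)"

print(adjust_cooling("R12", {"top": 28.4, "middle": 24.1, "bottom": 22.0}))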

Another critical but often overlooked factor in data centre operations is cabling and pathway design, which has a direct impact on airflow and cooling. Optimised cabinets direct airflow paths to maximise cooling, while purpose-designed overhead fibre and copper pathways remove cable clutter from underfloor air plenums, and high-capacity vertical cable managers reduce congestion and thermal blockages.

FUTURE DEVELOPMENTS

With net zero targets shaping UK infrastructure investments, data centre solutions must support sustainability by implementing modular systems that reduce waste during upgrades, lower energy consumption through efficient airflow and targeted cooling, and reuse components to enable greener refresh cycles.

In an era defined by data-intensive applications and ever-rising server rack power density, thermal management is no longer an operational afterthought; it is now central to business continuity, energy efficiency, and service scalability. Companies like Panduit offer end-to-end portfolios, combining containment, active cooling integration, environmental monitoring, and intelligent cable management.

Whether deploying compact edge clusters in Birmingham or GPU-based AI superpods in London Docklands, modern infrastructure provides the flexibility, reliability, and performance needed to meet today’s and tomorrow’s thermal demands.

Panduit, panduit.com

Next-generation cabinet with custom configurations to meet your network needs.

Scalability: Configuration platform to create a cabinet to meet your specification and requirements

Ease of Use: Easy-to-adjust E-rails, PDU installation, and accessories to reduce installation time

Maximum Cooling: 80% perforated doors for increasing power density in newer deployments

Flexibility: Universal design for server or network applications

Integrated Intelligence: Turnkey deployment with pre-configured PDU, access control, and environmental monitoring

Robust: Solid construction with best-in-class load rating for secure installation

Security: Choice of key lock, 3-digit combo, or HID electronic and/or keypad lock

Enhanced Cable Management: Choice of tool-less fingers, vertical cable manager, and front-to-back cable manager

MILLIONS OF VOLTS IN MILLISECONDS: THE OVERLOOKED RISK TO DATA CENTRES

Robin Earl, Market Development Manager at DEHN UK, explains why lightning and surges are on par with cyberattacks in terms of their risk for data centres.

Damaged servers, data loss, downtime. For data centre operators, it’s a clear-cut case: a cybersecurity attack. However, hackers are not always to blame for this kind of nightmare scenario. Natural forces are also capable of bringing critical infrastructure to its knees.

With an electrical discharge that releases millions of volts in just fractions of a second, lightning is both a physical phenomenon and an example of the destructive power of nature in highly developed infrastructures. Data centres are particularly at risk because of their sensitive electronics.

To minimise the risk of data loss, downtime, and costly repairs, targeted investment in a comprehensive lightning and surge protection concept is essential. In particular, the lightning protection zone concept is considered the benchmark for maximum safety and efficiency.

THE UNDERESTIMATED THREAT POSED BY LIGHTNING AND SURGES

The impact of lightning and surges represents a significant yet often underestimated risk factor. A direct lightning strike to a building can seriously damage its structural integrity and, under certain circumstances, even spark a fire. Indirect strikes often have an equally destructive effect. Although the energy is primarily dissipated via the earth, a strong electromagnetic field is created that couples into conductive structures. The closer an electrical system is to the point of impact, the stronger the interference. Even at greater distances, so-called coupling can occur and penetrate sensitive servers via metallic lines – such as power or data cables – causing sudden voltage spikes that significantly exceed the load limits of the electronic components.

A single surge impulse triggered by lightning is therefore enough to damage or completely disable critical components such as servers and network infrastructure in a data centre. In view of the high demands on availability and data security, it is therefore advisable to invest in a lightning protection zone concept at the planning stage.

COMPREHENSIVE PROTECTION BEGINS WITH PLANNING

To minimise risks from lightning and surges and to provide comprehensive protection for data centres, safety must be considered from the outset and on an ongoing basis. As some protective measures need to be considered and integrated right from the start of construction, it is crucial that planners and building owners address the issue at an early stage. Failure to implement suitable measures in good time can quickly lead to considerable costs. Although certain protective measures can be retrofitted, such interventions are usually technically challenging and involve a great deal of time and effort.

DIN/BS EN 50600 forms the basis for the planning, construction, and operation of data centres. As the first standard valid throughout Europe, it takes a holistic approach that also includes aspects of lightning protection. In this context, it refers to the complete BS EN IEC 62305 series of standards, which serves as the authoritative basis for the standard-compliant design of lightning and surge protection concepts and shielding measures.

The early involvement of lightning protection experts is essential for the standard-compliant implementation of BS EN IEC 62305. They provide professional and structured support throughout the entire process – from the initial analysis to the final implementation.

It is important to dispel a widespread misconception that lightning protection systems are just lightning conductors attached to the outside of a building. Quite the contrary: effective protection comprises a combination of external and internal measures that together ensure the safety of people and equipment.

FROM THE RISK ANALYSIS TO THE LIGHTNING PROTECTION ZONE CONCEPT

A project usually begins with a risk analysis in accordance with BS EN IEC 62305-2. Potential hazards such as lightning strikes or surges are assessed and a corresponding lightning protection class assigned. The aim of the risk analysis is to assess the probability of occurrence and the potential impact of such events. Based on this assessment, a decision is made as to which protective measures are necessary – if any – taking into account technical, legal, and economic aspects.
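
For illustration only, the sketch below shows the basic comparison the standard formalises: an estimated risk, built from the expected number of dangerous events, the probability of damage, and the relative loss, is weighed against a tolerable risk value. The full BS EN IEC 62305-2 assessment combines many such components with detailed parameters; all figures here are invented.

# Highly simplified sketch of the risk comparison formalised in BS EN IEC 62305-2.
# A single risk component R = N x P x L (dangerous events/year x probability of
# damage x relative loss) is compared against a tolerable risk value R_T.
# All numbers below are illustrative, not taken from the standard.

N = 0.05      # expected dangerous events per year (from strike density and collection area)
P = 0.2       # probability that an event causes damage
L = 0.5       # relative loss if damage occurs
R_T = 1e-3    # illustrative tolerable risk threshold

R = N * P * L
print(f"Calculated risk R = {R:.4f}, tolerable R_T = {R_T}")
print("Protection measures required" if R > R_T else "Risk acceptable")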

Part 3 of BS EN IEC 62305 focuses on physical damage to structures and life hazards. It defines the parameters and planning for external lightning protection to safely and efficiently intercept lightning strikes and conduct the lightning current to the earthing system via the down-conductor system.

The fourth and final part deals with the selection of individual measures to protect integrated electronic systems. The lightning protection zone concept – in accordance with BS EN IEC 62305-4, for example – is used here as a benchmark for maximum safety.

It divides a building into areas with different risks of lightning damage. From the outside in, protective measures such as surge arresters, equipotential bonding, and spatial shielding are used to gradually reduce the energy of a lightning strike and protect sensitive technology such as servers.

SAFETY FROM A SINGLE SOURCE

Lightning and surge protection is a highly specialised field and a complex process that requires close cooperation between experienced experts and customers. When selecting a partner, it is advisable to use a full-service provider offering specialised services such as risk analyses and the development of optimally coordinated system solutions consisting of earthing, lightning protection, and surge protection.

On the other hand, distributing the process amongst several service providers can mean losing sight of the overall planning and runs the risk of construction measures not being implemented in accordance with standards.

As the backbone of digitalisation, data centres are becoming increasingly important. That is why they need an effective and reliable lightning protection system that ensures long-term operational safety and prevents downtime. Given that the frequency and intensity of lightning strikes are set to increase as a result of climate change, it is now more important than ever to invest in a comprehensive protection concept at an early stage. After all, in the event of an incident, the potential costs far exceed the expenditure on preventive safety measures.

PEAK PERFORMANCE FOR OPTICAL FIBRE NETWORKS

AFL has launched DENALI, a modular optical fibre platform built for high-density GPU and AI-driven environments. Designed to reduce downtime and accelerate deployment, DENALI enables data centres to scale seamlessly as network demands grow.

The platform features advanced rack-mount hardware, cassettes, and pre-terminated, customisable assemblies, delivering up to 288 LC duplex ports (576 fibres) in just 4RU. Supporting speeds from 10G to 800G and beyond, it is optimised for hyperscale and AI workloads.

DENALI simplifies deployment by reducing the number of components needed, streamlining inventory management, and lowering failure points. It integrates easily with existing infrastructure – cutting disruption during expansion – and includes enhanced fibre management for improved network reliability.

“AI-driven densification is transforming […] fibre deployment,” says Marc Bolick, President of Product Solutions at AFL. “The DENALI platform was developed in response to this shift of handling faster scaling, reduced downtime, and solid reliability that AI workloads actually need.”

AFL, aflglobal.com

GF LAUNCHES ALL-POLYMER DLC QUICK CONNECT

GF has introduced the Quick Connect Valve 700, the first all-polymer quick connect valve for direct liquid cooling (DLC) in data centres. Weighing over 50% less than metal alternatives, it delivers up to 25% better flow, improved handling, and enhanced operator safety.

Based on GF’s Ball Valve 546 Pro platform, the PVDF dual ball valve features a patented dual-interlock lever, allowing disconnection only when both sides are closed. This design minimises fluid loss, reduces accidental disconnection risk, and ensures long-term reliability.

Ideal for high-density, high-performance computing, the valve is corrosion-free, UL 94 V-0 rated, and has an expected service life of at least 25 years. Its full-bore design reduces pressure drop and supports efficient coolant flow between cooling distribution units and server racks.

An Environmental Product Declaration, verified to ISO 14025 and EN 15804, provides transparency on the valve’s environmental impact.

GF, gfps.com

KIOXIA’S 245.76 TB NVME SSD BUILT FOR GENERATIVE AI

Memory manufacturer Kioxia has expanded its LC9 Series enterprise SSD line-up with the launch of the industry’s first 245.76TB NVMe SSD, available in 2.5-inch and EDSFF E3.L form factors. Designed to meet the demands of generative AI, the drive complements the previously announced 122.88TB model.

Featuring a 32-die stack of 2Tb BiCS FLASH QLC 3D flash memory with CMOS Bonded to Array (CBA) technology, the LC9 delivers the capacity, speed, and efficiency needed for large language model training and vector-based inference.

The LC9 Series enables dense storage in compact formats, significantly reducing power usage, cooling demands, and total cost of ownership by replacing multiple HDDs. The drives support PCIe 5.0, NVMe 2.0, and OCP Datacenter specifications, with added features such as Flexible Data Placement and multiple security options, including CNSA 2.0-ready encryption.

Sampling has begun with select customers.

Kioxia, kioxia.com

SCOLMORE LAUNCHES IEC LOCK C21 LOCKING CONNECTOR

Scolmore, a manufacturer of electrical wiring accessories, circuit protection products, and lighting equipment, has expanded its award-winning IEC Lock range with the addition of a new C21 locking connector, compatible with both C20 and C22 inlets.

Featuring an innovative side button release, the IEC Lock C21’s secure design offers extra protection against accidental disconnection, making it an ideal choice for applications where reliability is essential.

Designed to handle the heat, the C21 is a durable, lockable connector built to protect vibration-sensitive appliances against power loss. It is particularly suited as a versatile solution for data centres, servers, and other industrial equipment where maintaining the proper device temperature is critical to operational success.

More information can be found on the IEC Lock website (iec-lock.com), which showcases the entire range of locking connectors.

Scolmore, scolmore.com

HARTING’S HAN PROTECT BOOSTS UPTIME

HARTING has introduced the Han Protect connector, designed to increase system availability in data centres by simplifying fault detection and reducing downtime.

Built in the Han 3A format, the connector integrates an M12 A-coded five-pole interface with a 5x20mm miniature fuse. In the event of a short circuit, the fuse quickly interrupts supply to connected units, while a red LED clearly indicates the fault. This enables rapid, tool-free replacement without opening the control cabinet.

By externally mounting the housing, Han Protect saves up to 30% of internal cabinet space, eliminating the need for extensive fuse terminal blocks. It protects control units while allowing connected systems to restart quickly, cutting mean time to repair and improving maintenance efficiency.

Han Protect is ideal for building automation systems managing HVAC, power, IT, and security in large facilities.

HARTING, harting.com

DANFOSS COMPLETES UQDB CONNECTOR RANGE

Danfoss Power Solutions has launched its -08 size Hansen Universal Quick Disconnect Blind-Mate (UQDB) connector, completing the product line for data centre liquid cooling. The new half-inch model is Open Compute Project (OCP) compliant and delivers a 29% higher flow rate than OCP standards, boosting efficiency in high-density liquid-cooled racks.

Featuring a flat-face dry break design, the UQDB prevents spillage during connection or disconnection. Its push-to-connect, self-aligning mechanism offers radial compensation of ±1 mm, simplifying in-rack connections where access is limited.

Made of 303 stainless steel with EPDM seals, the couplings provide corrosion resistance, wide fluid compatibility, and secure ORB connections. Every unit undergoes helium-leak testing and includes QR codes for lifecycle tracking.

“The now-complete UQDB range expands our portfolio of thermal management products, enabling comprehensive, reliable cooling systems,” comments Chinmay Kulkarni, Data Center Product Manager at Danfoss.

All UQDB couplings are available globally.

Danfoss, danfoss.com

Is your Data centre ready for the growth of AI?

Partner with Schneider Electric for your AI-Ready Data Centres. Our solution covers grid to chip and chip to chiller infrastructure, monitoring and management software, and services for optimization.

Explore our end-to-end physical and digital AI-ready infrastructure scaled to your needs.

se.com/datacentres
