





Exceptional overload capacity: Up to 150% for 60 seconds for unmatched reliability.
Grid resilience: Absorbs voltage fluctuations with extended input range.
Scalable. Efficient. Reliable. Future-proof your AI infrastructure with StratusPower™.

AI
• Investing in more and more GPUs may not be the best solution for those looking to adopt an efficient AI solution, as Spirent’s Mark Bateman explains.
• How can you secure a hybrid cloud environment in this cloud-native era? Aqua Security’s Rani Osnat shares his top tips.
• Adrian Mountstephens from Equinix explores the critical steps financial services must take to comply with the EU’s Digital Operational Resilience Act.
Power
• Are immediate power solutions overcoming the limitations data centres have encountered with traditional energy storage? ZincFive’s Tod Higinbotham explores more.
• Data centres need to balance growth with sustainability, and to achieve that balance, Amit Sanyal from Juniper Networks has some top tips to share.
Welcome to the first of two issues of Data Centre Review in 2025, and hello to everyone who follows our work – whether it’s reading these magazines, engaging with our digital platforms, or attending our numerous events and webinars throughout the year.
You might notice something slightly different about this Editor’s Comment, though you may not be able to put your finger on it right away – so let me clear things up. Kayleigh Hutchins, who has been Editor of Data Centre Review for the past few years, has sadly departed to tackle a whole new and exciting challenge in the charity sector. While we wish her the best of luck with her new endeavour, she’s left some pretty big shoes to fill.
That’s where I come in. For those of you who read our sister publication, Electrical Review, I may already be familiar – after all, I’ve been Editor there since 2021. But now I’m also bringing Data Centre Review under my wing, as I seek to nurture the brand and focus it on serving you – our readers – during this period of explosive growth.
So, what’s set to change? While I can’t tell you everything we’re cooking up (we’ve got to keep some surprises), one thing that will be key going forward is you – the data centre industry.
Our plan is to put you front and centre, giving you a platform to share ideas, discuss the direction of the industry, and debate the topics that matter the most. That means you’ll be seeing a diverse range of voices from across the industry writing for Data Centre Review, sharing their opinions, and helping spark new conversations.
After all, this is a pivotal time in data centre history. AI is sparking ever-growing interest in the sector, while the amount of money being poured into developing new data centres is reaching astronomical sums that were hard to imagine even five years ago.
That means there’s plenty to talk about, and we want to be the platform that facilitates that industry discussion. So, if you ever want to share your thoughts, we’re all ears. Let’s get talking. You can reach me at jordano@sjpbusinessmedia.com.
For now though, enjoy this issue of Data Centre Review.
Jordan O’Brien, Managing Editor
MANAGING EDITOR
Jordan O’Brien
jordano@sjpbusinessmedia.com
DESIGN & DIGITAL PRODUCTION
Rob Castles
robc@sjpbusinessmedia.com
BUSINESS DEVELOPMENT MANAGER
Tom Bell
+44 (0)7741 911 317
tomp@sjpbusinessmedia.com
GROUP COMMERCIAL DIRECTOR
Fidi Neophytou
+44 (0)7741 911 302
fidin@sjpbusinessmedia.com
MARKETING MANAGER
Emily Szweda
emilys@sjpbusinessmedia.com
PUBLISHER
PRINTING BY Buxton
Subscription enquiries:
Data Centre Review is a controlled circulation magazine available free to selected personnel at the publisher’s discretion. If you wish to apply for regular free copies then please contact: subscriptions@datacentrereview.com
Data Centre Review is published by
The Galaxy VXL 3-phase UPS delivers maximum power density while supporting the scalability required for evolving AI workloads:
• Modular design for flexible scaling up to 1250 kW
• Live swap technology for zero-downtime module replacement
• Industry-leading power density with 70% space savings
Visit our stand at Data Centre World London to see our latest innovation – the Galaxy VXL, the most compact and powerful 3-phase modular UPS available.
Be an Impact Maker
Chris Cutler of Riello UPS explores how the next generation of ultra-high efficiency modular UPS will help data centres balance sustainability with reliability in the age of generative AI.
It’s been hard to keep data centres out of the mainstream news headlines in recent months.
Autumn saw the industry added to the list of UK Critical National Infrastructure alongside sectors such as hospitals, energy, and transport.
Meanwhile, 2025 started with Government Ministers adopting all 50 of venture capitalist Matt Clifford’s recommendations in the AI Opportunities Action Plan to boost digital infrastructure, including relaxing planning rules and creating AI Growth Zones to accelerate data centre building.
That’s before we even mention the £25 billion of private sector investment in new UK data centres announced since last summer.
But while this feel-good factor is undoubtedly fuelled by the rapid rise of generative AI, the phenomenon poses as many challenges to us in the industry as it offers opportunities.
Training generative AI models requires massive processing power compared to traditional cloud computing, which in turn puts increased pressure on data centre infrastructure and power grids. AI also changes a data centre’s load profile, with sudden spikes in power to deal with fluctuating user demand.
Then there’s the whole question of environmental sustainability, which has already long been used as a stick to beat the sector with.
International Energy Agency figures suggest that global data centre power consumption will more than double between 2022 and 2026 from 460 TWh to more than 1,000 TWh, which is equivalent to the power consumption of Japan.
As a UPS manufacturer, this poses the question of what we can do to help data centres become more sustainable without taking our eye off the primary function of providing reliable protection to their critical loads.
Exploring alternative technologies
As part of our ongoing product development process, we spoke to several data centre operators to understand their concerns. The high energy costs of running a data centre were a recurring theme – typically around 60% of a data centre’s total operational costs are linked to energy, with a further 30% attributed to operations and maintenance – so those were the key areas to prioritise.
Our extensive R&D also highlighted the potential of using silicon carbide (SiC) semiconductors rather than the IGBT components typically used in UPS manufacturing. Of course, SiC is nothing new – it’s widely used in the electric car industry. But for UPS manufacturers, it offers several advantages over IGBT.
SiC exhibits lower electrical resistance, which results in reduced energy losses and higher efficiency. It also delivers increased power density, can operate at higher temperatures, has faster switching capabilities, and is also more durable than IGBT, leading to extended component life cycles.
Finally, silicon carbide’s performance is consistent across the entire load, making it ideal for UPS applications. This is especially important with AI load profiles as it can maintain high levels of efficiency even as the demand rapidly changes.
It is important to acknowledge that silicon carbide components require four to six times the energy to manufacture, which also increases production costs and the amount of CO2 produced. Thankfully, though, these issues are more than offset by the overall savings across the UPS’s entire life cycle.
While SiC is by no means a silver bullet, trends are heading in the right direction and its viability will only improve as volumes increase, more component types are developed, and higher power ratings are brought to market.
Embracing the potential of silicon carbide semiconductors led us to Multi Power2, the evolution of our modular UPS. The series comprises the 500 kW MP2 and the scalable M2S, which comes in 1,000, 1,250 and 1,600 kW models. Up to four of these UPS can be paralleled together, meaning the M2S can protect data centres needing up to 6.4 MW of power in a single system.
Like all modular solutions, Multi Power2 offers risk-free ‘pay as you grow’ scalability by adding in extra power modules as and when required, reducing the threat of wasteful oversizing at initial installation.
Both models are based on 3U 67 kW high density power modules that enable the UPS to deliver market-leading ultra-high efficiency of up to 98.1% in online mode, ensuring maximum protection to the critical load whilst minimising operating costs and energy losses.
Thanks to the high-performing characteristics of SiC, Multi Power2 achieves high efficiency across all load levels. For example, at load levels between 23% and 47% the system operates above 98% efficiency. Widening the range to 15-80% of load, the UPS still achieves efficiency above 97.5%, and it maintains above 97% at load levels of 10-100%.
A quick point regarding this efficiency curve. Most UPS manufacturers use an ‘average efficiency’ figure, which is based on a mix of operating modes over a set period.
For example, it might cover 24 hours split between 12 hours in online mode followed by 12 hours in ‘ECO’ or economy mode, where an efficiency boost comes with a trade-off of reduced protection.
Most data centres run their UPS permanently in online mode to ensure maximum protection, so these ‘average efficiency’ figures are not a true representation of their real-world conditions. With Multi Power2, the 98.1% is in true online UPS mode so there’s no need to use average efficiency to artificially inflate the figure.
So that’s sustainability covered, but what about the other side of the coin – reliability? We’ve already touched on the inherent qualities of silicon carbide, such as its durability.
If we briefly turn to the manufacturing process, Multi Power2 is designed to avoid any single point of failure across the entire unit down to the internal communication structure, which is completely redesigned with two separate fully redundant high-speed buses.
The power modules are constructed with only a few connection cables to remove potential points of failure, while the overall component count is low, reducing the likelihood of breakdowns.
The modules are also mechanically and electrically segregated.
All modules and major components are hot-swappable; engineers can easily swap a module (or add another to boost capacity) in less than five minutes.
Multi Power2 delivers data centres cost, energy, and carbon savings. Deploy its 98.1% efficiency instead of a 96%-efficient UPS and the savings are unquestionable: for a 3.2 MW UPS, you’d be looking at an annual energy saving of more than 840,000 kWh, which equates to around £133,000. Over the 15-year lifespan, that’s potential savings of up to £2.6 million.
In terms of environmental savings, the higher efficiency UPS will save more than 196 tonnes of CO2 per year, around 2,400 tonnes over the course of the system’s lifespan.
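As a rough sanity check on those numbers, the saving comes from the difference in conversion losses at the two efficiencies. The sketch below (Python) computes it from first principles; the cooling `overhead` multiplier is our assumption, since the article does not state how the 840,000 kWh figure is derived:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_saving_kwh(load_kw, eff_low, eff_high, overhead=1.0):
    """Annual energy saved by running a UPS at eff_high instead of eff_low.

    Losses are input power minus output power at a constant load;
    `overhead` is an assumed multiplier for avoided cooling load.
    """
    loss_low = load_kw / eff_low - load_kw
    loss_high = load_kw / eff_high - load_kw
    return (loss_low - loss_high) * HOURS_PER_YEAR * overhead

# Direct electrical saving for a 3.2 MW load, 98.1% vs 96%: ~625,000 kWh/year.
direct = annual_saving_kwh(3200, 0.96, 0.981)
# With an assumed ~1.35x cooling overhead, the total approaches the
# article's quoted ~840,000 kWh/year.
total = annual_saving_kwh(3200, 0.96, 0.981, overhead=1.35)
```

At roughly 16p/kWh, 840,000 kWh corresponds to the quoted ~£133,000 per year.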
As well as these cost and carbon savings, the long-life components integral to Multi Power2 deliver other benefits too.
With most UPS you will need to replace the capacitors between years five and seven of service life, so during a typical 15-year lifespan you’d be looking at two or potentially three swap-outs. With Multi Power2, however, it is realistic to go through the entire lifespan without replacing capacitors at all – a huge saving in maintenance costs, plus fewer end-of-life materials that require recycling.
All in, a typical data centre could save £80,000-£120,000 in maintenance over the UPS’s lifespan, a figure that doesn’t even consider other environmental benefits, such as reducing the number of miles engineers will travel because of fewer required maintenance visits.
Riello UPS is at Data Centre World 2025 at ExCeL London on 12-13 March, with the team available on stand DC340 to showcase its range of data centre solutions. Register for a FREE visitor pass at https://www.datacentreworld.com/
Spirent’s Mark Bateman explores how to optimise AI’s efficiency while maintaining performance.
AI might just be the most exciting technology of the last few years. Generative AIs like ChatGPT are already transforming a variety of sectors, and many industry leaders promise more transformative possibilities to come as companies like OpenAI work towards more powerful applications of AI and machine learning.
The excitement around this technology has been heated. According to Employ America, a macroeconomic policy research firm, AI investment made up around 20% of GDP growth in the third quarter of 2024 and is expected to quickly outpace the Dotcom boom of the early 2000s.
Private and public sectors are scrambling to take advantage of this emerging technology. Microsoft plans to invest $80 billion in AI data centres in the coming fiscal year, while the UK government has pledged £14 billion to encourage the development of AI data centres across the country.
Much of that investment flows into data centres – which consume huge amounts of energy, largely due to design inefficiencies that waste power and resources.
According to the International Energy Agency (IEA), data centres consumed around 2% of global energy in 2022. The agency expects that number could double by 2026, largely driven by developments like AI.
For all of AI’s capabilities, carrying out simple tasks comes with a heavy energy bill. The IEA’s data draws a stark comparison between the energy consumption of a non-AI request and that of a ChatGPT request: while a single Google search consumes 0.3 Wh of electricity, a ChatGPT request takes 2.9 Wh – nearly 10 times as much. Another study, from Carnegie Mellon University, shows that generating an original image using AI can take as much energy as fully charging a mobile phone.
AI data centres’ energy demands are now so large that many places struggle to meet them, or buckle under their weight. In fact, a new report from the North American Electric Reliability Corporation (NERC) says that this surge in data centre use could start causing outages across the US and Canada in 2025, as data centres double their energy expenditure to accommodate the intensification of AI use.
The demands on the data centres that house these AIs are growing rapidly, promising greater energy inefficiency and higher OpEx even as they grow in computing power. AI models are growing a thousand times more complex every three years; cluster size is quadrupling every two years; and traffic is growing tenfold every two years. That inefficiency has to be tackled if AI data centres are to cut down on waste and expenditure and ultimately produce better AI services.
GPUs won’t solve the problem
AI services consume so much energy mostly due to their interactions with huge databases of information, which are required for training and inference. Most of the energy is spent moving those large amounts of data between compute and memory across countless compute resources and accelerators. Graphics processing units (GPUs) have become the indispensable resource that makes this possible. Still, the fast growth of traffic and processing requirements in these AI data centres is now pushing the limits of their networks.
Under that pressure, many will simply opt to throw more GPUs at the problem. While this allows greater capacity and computational power, it also increases the energy used, the cooling required, and the OpEx sunk into these projects.
Above all, it’s a highly inefficient solution and does not resolve the broader inefficiencies already present within those data centres. Network bottlenecks, for example, are common, and they result in GPU clusters sitting idle and underused 30-80% of the time. On top of that, packet loss caused by those bottlenecks has an outsized effect: just 1% packet loss can degrade GPU performance by 30%. The irony is that this loss in performance could drive AI data centres to purchase yet more GPUs to compensate for the bottlenecks. The inefficiencies within these data centres don’t just add up to wasteful use of power but wasteful use of existing resources, mounting OpEx, greater latency and, ultimately, diminished quality.
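Those two figures compound. A hypothetical illustration of how idle time and loss-induced degradation erode effective capacity (the function and example numbers are ours, beyond the percentages quoted above):

```python
def effective_gpu_capacity(n_gpus, utilisation, loss_degradation):
    """Effective compute capacity, in GPU-equivalents, after accounting
    for idle time and performance degradation from packet loss."""
    return n_gpus * utilisation * (1.0 - loss_degradation)

# 1,000 GPUs, idle half the time, with 30% degradation from 1% packet loss:
print(effective_gpu_capacity(1000, 0.5, 0.30))  # ~350 GPU-equivalents
```

In other words, a cluster can deliver barely a third of its nominal capacity – before anyone buys another GPU.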
It’s important to note that while the huge energy consumption of AI data centres is a problem, it hasn’t been enough of a problem to dissuade many from pursuing ever-more expensive and inefficient projects. This isn’t merely a question of lowering energy bills but producing a better, faster, more powerful AI.
AI hopefuls need to get ahead of the problem. They can do that by establishing robust testing capabilities in order to ensure they roll out the most efficient, powerful and competitive AI services possible.
That said, building a testing lab full of GPUs will be difficult, if not impossible, for many. Instead, a virtualised test lab can help them make the necessary efficiency improvements. AI workload emulation tools allow an operator to simulate the network conditions they are likely to encounter in the real world.
Critically, this will allow them to identify potential network bottlenecks and points of energy waste long before they ever suffer them in deployment. This will allow them to reconfigure data centre designs for maximum efficiency and performance. It will also allow them to accelerate time to market, which is becoming another key differentiator in this heated contest.
Simply put, AI data centres are far too inefficient. Many think of that as an acceptable cost they can merely throw more money or hardware at. That won’t save them. Even if they can absorb the OpEx costs and potential regulatory penalties that come with that inefficiency, it will ultimately mean that their AI projects suffer, as will their ability to offer a competition-beating AI service.
The rush to adopt often means that companies will try to develop a technology and throw caution and consideration to the wind. This might ultimately be a quicker approach in such a heated competition, but problems will invariably emerge later and the AI network will suffer as a result. GPUs offer a quick way to scale an AI project, but one that wastes more energy and will ultimately hamstring the final quality of that project. Those that want to develop their own AI projects need to think carefully about the way they marshal their limited resources or risk producing an inferior AI.
Ramzi Charif, VP Technical Operations, EMEA, VIRTUS Data Centres, discusses how the integration of AI is poised to optimise and elevate data centre operations for the digital age.
Data centres remain the backbone of today’s digital infrastructure, crucial for supporting the world’s growing appetite for cloud services and big data. Today, AI is firmly in the mix, and much has been discussed about how data centres are going to cope with the surge in demand for compute power thanks to the increase in AI-powered applications. But how will data centres themselves use AI to optimise and transform operations? This integration promises to redefine efficiency, enhance performance, and maintain competitiveness in a market where agility and operational excellence are paramount.
Historically, data centres have been slow to adopt new technologies internally, often due to concerns around reliability and the high stakes associated with maintaining uptime. While they have supported cutting-edge services for their clients, their internal processes have lagged behind. The integration of AI, however, offers data centres an opportunity to overcome this reluctance, as its benefits are becoming too significant to ignore.
AI’s role in modern data centres extends far beyond operational automation; it provides intelligent analytics and predictive insights that enable facilities to function at peak efficiency. The days of relying solely on manual processes and human monitoring are waning, as AI offers continuous oversight, efficient energy management, and the ability to foresee and mitigate issues before they escalate.
One of the most energy-intensive processes in any data centre is cooling – accounting for up to 40% of all energy costs. Traditional methods have relied on fixed temperature settings and systems that cannot easily adapt to fluctuating loads or environmental changes. AI-driven cooling solutions, however, are changing the landscape. By using machine learning algorithms to monitor and respond to temperature variations in real-time, AI systems optimise cooling dynamically, ensuring that resources are allocated where they are needed most.
For example, AI can adjust fan speeds, airflow direction, and coolant levels based on real-time data about server loads and external weather conditions. By integrating AI, data centre operators can not only reduce operational expenses but also enhance system reliability by maintaining consistent environmental conditions, thereby preventing equipment overheating and failure.
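As a simplified illustration of the dynamic-cooling idea, consider a proportional controller that sets fan speed from the gap between measured and target inlet temperature. This is a sketch under assumed parameters, not any vendor’s actual control loop:

```python
def fan_speed(temp_c, setpoint_c=24.0, gain=0.1, min_speed=0.2, max_speed=1.0):
    """Proportional fan control: speed rises with temperature above setpoint.

    Returns a fraction of full speed, clamped to [min_speed, max_speed].
    """
    error = temp_c - setpoint_c
    speed = min_speed + gain * max(0.0, error)
    return min(max_speed, speed)

print(fan_speed(24.0))  # 0.2 - at setpoint, fans idle at minimum speed
print(fan_speed(30.0))  # ~0.8 - 6 degC over setpoint drives fans up
```

A real AI-driven system would replace the fixed gain with a learned model of load, weather and airflow, but the structure – sense, compare, actuate – is the same.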
Continuous monitoring and predictive maintenance are two areas where AI is proving particularly effective. Data centres require 24/7 surveillance of key metrics, such as temperature, power consumption, and network performance. AI systems provide this oversight without the need for human intervention, identifying potential anomalies before they lead to critical issues.
AI’s ability to analyse vast amounts of data quickly means it can detect even minor deviations from normal performance. For instance, if cooling equipment starts to operate outside its optimal range, AI systems can alert operators and recommend preventative actions, such as scheduling maintenance during off-peak times. This proactive approach reduces the likelihood of downtime and extends the lifespan of critical infrastructure components.
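The ‘minor deviation’ detection described above is often implemented as a rolling statistical check. A minimal sketch with an assumed threshold – real systems use learned baselines rather than a fixed z-score:

```python
from statistics import mean, stdev

def is_anomaly(history, reading, z_threshold=3.0):
    """Flag a reading that deviates more than z_threshold standard
    deviations from the recent history of the same metric."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_threshold

temps = [22.1, 22.3, 22.0, 22.2, 22.1, 22.4, 22.2]
print(is_anomaly(temps, 22.3))  # False - within normal variation
print(is_anomaly(temps, 25.0))  # True  - flag for preventative action
```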
Workflow automation: efficiency and consistency
AI isn’t just about reducing risks; it’s also about streamlining daily operations. Routine tasks, such as performing equipment checks, monitoring systems, and reporting, can now be automated through AI. This automation ensures consistency and reduces the risk of human error. It also frees up skilled personnel to focus on strategic tasks that add value, such as improving client services or implementing new technological solutions.
By automating these routine functions, AI helps data centres operate more smoothly, improving overall uptime and efficiency. This shift towards AI-enhanced workflows enables data centres to maintain high service levels while managing growing demands without increasing staffing levels.
Despite AI’s clear advantages, integrating it into existing data centres is not without challenges. Many facilities operate using legacy systems that lack the capability to support advanced AI solutions. Upgrading infrastructure to incorporate AI capabilities can be costly, and there is often a learning curve as staff adapt to new technologies.
To overcome these barriers, data centres are increasingly adopting hybrid models that combine traditional systems with AI enhancements. For example, existing sensors and control systems can be supplemented with AI algorithms to provide advanced monitoring and predictive capabilities. This approach allows data centres to harness the benefits of AI without the immediate need for a full-scale infrastructure overhaul.
Starting with pilot projects is another effective strategy. By implementing AI solutions in non-critical areas, such as secondary cooling systems or backup power management, data centres can test and refine these technologies in a controlled environment. Once proven effective, these systems can be expanded to core operations, ensuring a smoother transition and minimising disruption.
One of the primary concerns surrounding AI implementation is the fear of job displacement. However, AI in data centres is designed to augment human capabilities rather than replace them. AI excels in managing repetitive tasks, maintaining constant vigilance, and processing large volumes of data quickly – tasks that complement the strategic skills of human operators.
For example, while AI systems can monitor power and cooling levels continuously, human expertise remains essential for decision-making and implementing new strategies. This symbiotic relationship enhances productivity, allowing staff to focus on high-value activities, such as developing customer solutions or managing complex crises.
As AI technology continues to mature, its role within data centres will likely expand. The next evolution could see AI not only managing operations but also integrating with external systems to provide real-time adjustments based on broader network and environmental data. This could include optimising power use across multiple facilities, integrating seamlessly with smart grid technologies, and even coordinating energy use with renewable sources based on availability and demand forecasts.
Looking ahead, the industry’s acceptance of AI is expected to grow, particularly as the technology demonstrates its value through pilot projects and incremental improvements. The trajectory mirrors the industry’s shift towards cloud adoption: cautious at first, but eventually widespread as the benefits become undeniable.
Ultimately, AI is set to become a cornerstone of modern data centre operations. From optimising energy use and enhancing security to automating workflows and providing predictive maintenance, AI offers a comprehensive toolkit for data centres seeking to improve efficiency and maintain competitiveness.
Louis McGarry, Sales & Marketing Director at Centiel UK, examines the abundance of renewable energy and the technological innovations required to effectively harness and store it.
Imagine a world with limitless renewable energy. One where there is enough solar, wind or wave energy to power everything we need, clean and unlimited. Well, we already have it.
According to NASA, “The Sun is the major source of energy for Earth’s oceans, atmosphere, land, and biosphere. Averaged over an entire year, approximately 342 watts of solar energy fall upon every square meter of Earth. This is a tremendous amount of energy – 44 quadrillion (4.4 × 10¹⁶) watts of power to be exact.”
Further, the cost of using renewable energy has decreased significantly in recent years. For example, in the report, A new perspective on decarbonising the global energy system, “the cost of solar PV has declined by three orders of magnitude” (more than 1000-fold decrease) as it has become more widely deployed over the last 50 years – declining so much that the International Energy Agency recently declared solar PV in certain regions “the cheapest source of electricity in history.”
Space-based solar power (SBSP) is also an exciting area of development. An independent study published in 2021 by the Department for Energy Security and Net Zero and the Department for Business, Energy & Industrial Strategy revealed that space-based solar could generate up to 10 GW of electricity by 2050 – equivalent to a quarter of the UK’s current electricity demand.
The sun, however, offers only part of the energy available. Consider wind and wave energy too. Gravity is a force rather than an energy source, so it cannot be harvested directly, but we can see its use in hydroelectricity. The problem, therefore, is not a lack of available energy; it’s the lack of technology to harness and store that energy efficiently.
So what does this mean for the data centre industry and uninterruptible power supplies (UPS) specifically?
In the future, we believe UPS will need to be far more than simple power protection; they will transition to becoming active energy management tools. Facilities managers will need to select the right product to store, use and harvest energy, which also depends on the type of battery and the application the UPS is used for. To choose the correct product, questions will need to be asked about what the facility wants to achieve from the system.
For example, currently we are increasingly being asked about different battery options to help manage and store energy better. Peak shaving is an application example. This is a way that facilities can actively use their own energy storage to save costs during peak times of demand on the national grid.
One distillery in Scotland plans to work completely off-grid. A CHP generator, running off gas, will provide power; in times of peak demand, the factory can peak-shave using the energy stored in its UPS batteries rather than purchasing electricity from the UK power network.
With energy demand continuing to rise, grid operators are keen on finding ways to achieve a ‘shaved curve’; and so, end users may be granted a rebate on their energy bills if they implement a peak shaving program.
For peak shaving to work successfully, the necessary technology must be included in the UPS, so product selection needs to be considered carefully from the outset. The type of UPS battery used is also critical.
Lithium iron phosphate (LiFePO4) batteries are oxygen-free and as safe as VRLA batteries, with the added advantage of cycling ability. They also tolerate higher ambient temperatures, reducing or removing the need for cooling, and their typical working life of 15 to 20 years means they only need to be replaced once over the lifetime of a UPS with a 30-year design life.
Using LiFePO4 batteries, it is possible to reduce costs by taking some energy from the batteries instead of the National Grid during peak times in the day, and recharging them at times of lower demand when electricity is cheaper, such as at night. It would not make sense to discharge the batteries completely or quickly, so to preserve battery life, small amounts of battery energy are drawn alongside grid supply to shave pence off the bill – which, over time, adds up to significant savings. Centiel’s Intelligent Flexible Battery Charging functionality, which is customer-enabled, works to site-specific parameters and operates automatically.
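The dispatch logic behind peak shaving can be sketched simply: when site demand exceeds a threshold, draw the excess (within limits) from the batteries rather than the grid. This is a hypothetical illustration – the parameter names and limits are our assumptions, not Centiel’s actual control scheme:

```python
def peak_shave_step(demand_kw, threshold_kw, soc_kwh,
                    min_soc_kwh, max_discharge_kw, dt_h=0.25):
    """One control interval of a peak-shaving dispatcher.

    Returns (grid_kw, battery_kw, new_soc_kwh). Discharge is capped by
    the inverter rating and by a reserve floor that protects the
    battery's primary role as backup power.
    """
    excess = max(0.0, demand_kw - threshold_kw)
    headroom_kw = max(0.0, (soc_kwh - min_soc_kwh) / dt_h)
    battery_kw = min(excess, max_discharge_kw, headroom_kw)
    return demand_kw - battery_kw, battery_kw, soc_kwh - battery_kw * dt_h

# A 500 kW demand spike against a 400 kW threshold, shaved by a
# battery limited to 50 kW of sustained discharge:
print(peak_shave_step(500, 400, soc_kwh=100, min_soc_kwh=20,
                      max_discharge_kw=50))  # (450, 50, 87.5)
```

Note how the reserve floor means the batteries only ever shave a slice off the peak, exactly as described above: small, repeated savings rather than deep discharges.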
Further useful developments in relation to Li-ion batteries will be driven by the automotive industry.
Nickel-zinc batteries may also offer an alternative standby option for data centres. They can comfortably operate at high temperatures, and zinc and nickel are globally abundant, unlike lead and lithium. Nickel-zinc batteries are also almost 100% recyclable, with less need for environmental and fire-suppression controls. It will be interesting to see how these technologies continue to develop, and how we can use them.
We are already working to help facilities harness and store energy better – but what if we could also take better advantage of renewable energy? At Centiel we have developed the technology of StratusPower, our multi-award-winning, latest true modular UPS, so it is future-ready to accept alternative energy sources. Configured correctly with LiFePO4 batteries, it has the potential to become a micro-grid or energy hub, storing and delivering energy into the facility when required. This matters in the face of unstable electrical networks: data centres are significant power users, and their potential to offer a solution that supports the grid will be welcomed by energy providers.
StratusPower is built with the future of renewables in mind, ensuring stability and reliability as we transition to cleaner energy sources
while representing a new income stream for users. Not only does our technology safeguard each critical element within data centres, but it also harmonises with renewable and emerging power sources.
A data centre will never outgrow a well-specified StratusPower UPS, and it can be constantly right-sized to ensure it always operates at the optimal point in its efficiency curve. Further, StratusPower offers 'nine nines' (99.9999999%) availability to effectively eliminate system downtime, plus class-leading 97.6% online efficiency to minimise running costs. It's also equipped with Maximum Efficiency Management (MEM), a user-enabled intelligent feature that saves power by matching the number of powered modules to load demand.
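The idea behind such module matching can be sketched in a few lines. The module rating, best-efficiency point and redundancy figures below are invented for illustration, not StratusPower's actual parameters:

```python
import math

# Hypothetical sketch of load-matched module activation: energise only
# enough UPS modules that each runs near its best-efficiency loading,
# leaving the rest in standby. All figures are illustrative assumptions.

MODULE_KW = 60          # assumed rating of one power module
OPTIMAL_LOADING = 0.75  # assumed best-efficiency point on the curve
MIN_REDUNDANCY = 1      # keep N+1 redundancy

def modules_to_energise(load_kw, total_modules):
    # Enough modules to sit near the optimal loading point...
    needed = math.ceil(load_kw / (MODULE_KW * OPTIMAL_LOADING))
    # ...but never fewer than the load plus redundancy requires
    floor = math.ceil(load_kw / MODULE_KW) + MIN_REDUNDANCY
    return min(max(needed, floor), total_modules)

print(modules_to_energise(load_kw=180, total_modules=10))  # 4
```

As the load grows or shrinks, the count is recomputed, so lightly loaded frames are not paying the fixed losses of every installed module.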
In the future, a DC bus can be supplied from mains power and/or renewable sources. There is little doubt that future grid instability and unreliability will need to be corrected through the use of renewables, and StratusPower is ready to meet this future. At Centiel, we are helping organisations take steps away from a 'throw-away' culture with a genuinely sustainable offering that reduces total cost of ownership at the same time.
The fact is that we don't have an energy crisis. However, facilities unable to adapt to changes in how we harness, store and manage the available alternative sources of power will become obsolete, finding themselves out of touch in the future.
In an industry which burns large amounts of energy, we now all need to look seriously at how to take advantage of the limitless renewable sources available.
For further information about Centiel’s flexible, future-ready UPS products, please visit: www.centiel.com
Rani Osnat, SVP of Strategy at Aqua Security, explores strategies for securing hybrid cloud environments in the cloud-native era.
The hybrid cloud model offers multiple benefits for organisations looking to create a flexible, scalable and cost-optimised IT infrastructure that addresses their unique business needs. By combining on-premises resources with private and public cloud services, they can rapidly provision compute resources, harness new advancements such as AI, accelerate digital transformation and more.
Organisations that adopt a hybrid cloud architecture can engage in smarter workload placement strategies that make it possible to optimise costs, performance, and compliance. It’s a best of both worlds approach that enables organisations to retain certain applications on private infrastructure due to regulatory, performance, or even cost considerations while taking full advantage of the scalability and resources of the public cloud.
That said, hybrid cloud presents some unique and complex security challenges. Most security solutions and tooling on the market were either created for on-premises infrastructure, or for public cloud – not for both. This means that organisations looking to safely harness its game changing capabilities will need to ensure they have the right hybrid cloud security practices and tools in place.
By definition, a hybrid cloud includes multiple infrastructure components and platforms. This diversity makes it difficult to
consistently enforce security best practices. It also makes it challenging to gain end-to-end visibility and control over the entire environment. But that’s not all.
Organisations that choose to maintain a mix of private and public infrastructure will often also want to use multiple cloud providers. However, while different cloud providers and private cloud platforms may offer similar capabilities, how these controls are implemented will differ.
Organisations should also be clear on how security responsibilities are divided within their own team, and between them and the cloud providers. In other words, which aspects of security are the remit of the cloud provider, and where, how, and by which internal team the organisation itself is responsible for implementing the additional measures that assure its 'security in the cloud'.
All this can create some complex real world security challenges. These include finding that the security tools offered by cloud providers often don’t deliver equal coverage across an organisation’s entire hybrid environment. For example, the security monitoring services offered by public cloud providers typically won’t support private or on-premises infrastructure and will only provide very limited support for other cloud provider environments. This makes it difficult for organisations to monitor security threats in a consistent way.
Similarly, managing user identities and access across multiple cloud environments can also prove a complex proposition. Organisations using a public cloud provider’s authentication service for users logging into public cloud infrastructure find they have to rely on a separate authentication provider to manage access to their private infrastructure.
Securing the future requires a multi-faceted approach
The most effective approach to solving this complex issue is to find the right common denominator that will effectively reduce risk across hybrid infrastructure, in a way that doesn’t disrupt innovation, digital transformation, and rapid delivery and scale of applications.
Zero-trust architecture is such an approach. This ensures the continuous verification of users and devices, segments networks, and enforces least privilege access and application behaviour. At the same time, it does not dictate how this is done, allowing different teams and tools to adapt their own policies and controls to these principles.
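As a toy illustration of the default-deny principle behind zero trust (the roles, resources, and posture checks below are invented for the example and are not any vendor's API):

```python
# Minimal sketch of a zero-trust authorisation check: every request is
# verified against identity, device posture, and an explicit allow-list,
# so nothing is trusted by default. Illustrative only.

ALLOWED = {
    # (role, resource) pairs granted explicitly; absence means deny
    ("developer", "staging-cluster"),
    ("sre", "prod-cluster"),
}

def authorise(user_role, device_compliant, mfa_passed, resource):
    if not (device_compliant and mfa_passed):   # continuous verification
        return False
    return (user_role, resource) in ALLOWED     # least privilege: default deny

print(authorise("developer", True, True, "prod-cluster"))  # False: not granted
print(authorise("sre", True, True, "prod-cluster"))        # True
```

The principle, not this particular check, is what zero trust mandates; each team can implement it with its own policy engine and controls.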
Standardising security policies across public and private cloud components to prevent configuration drift and ensure consistency is another must-have. Similarly, deploying unified tools to monitor activities across on-premises and cloud systems will be vital for rapid threat detection and response.
Finally, with Gartner predicting that 99% of cloud security failures in 2025 will occur as a result of customer (i.e., user organisation) failures, organisations must take ownership of all aspects of security with the exception of physical infrastructure, which the cloud provider takes care of in the case of public cloud deployments.
Perhaps the best way of establishing a hybrid cloud model is using the cloud-native stack, and in particular containers and Kubernetes. These technologies support agile, continuous development on the one hand, and scalable, automated, and resilient deployment on the other. Best of all, they are completely portable and can run on-premises and in public clouds equally well.
By adopting a cloud-native security approach, organisations can ensure that applications are continuously secure, starting from the early stages of development and extending throughout the lifecycle. And this, done right, would work across all hybrid environments.
Unfortunately, security tools not born in the cloud are ill-equipped to protect applications running in the cloud and were not designed to cope with the accelerated development cycles of cloud-native applications. The good news is that the technologies being used to run the new stack, such as containers and Kubernetes, now deliver better security together with more granular visibility and automation than ever before. They also make it easier to integrate and transfer security across private and public cloud environments, provided that their in-built security controls are applied correctly.
For enterprises operating in highly regulated environments such as finance, healthcare, and government, where public cloud-based Kubernetes infrastructure might not be the preferred approach, this makes it possible to protect containers running on on-premises mainframes, enabling these organisations to securely build, run, and manage high-performance container-based applications in their private clouds.
Safeguarding complex hybrid cloud environments requires a multi-faceted approach that includes consistent security policies and practices, end-to-end visibility and strong compliance and governance measures.
Given the inherent security challenges involved, it’s easy to understand why private clouds are generally considered to be easier to secure given the reduced complexity and greater control involved. However, with appropriate security controls such as robust IAM policies, encryption and continuous monitoring, hybrid clouds can achieve an equally high level of security while offering greater flexibility.
For organisations that adopt a cloud-native security mindset encompassing everything from development through to production, together with policy-driven automation, maintaining a strong and consistent security posture across different clouds becomes an achievable reality.
Adrian Mountstephens, Strategic Business Development for Banking at Equinix, explores the critical steps financial services must take to comply with the EU’s Digital Operational Resilience Act.
DORA aims to fortify the financial sector against the escalating risk and cost of cyberattacks and IT disruptions by imposing stricter standards on operational resilience.
The legislation was conceived in response to the financial sector’s growing reliance on digital technologies and the alarming rise in cyberattacks. For example, in 2023, cyberattacks on European financial services more than doubled, while the average cost of a data breach in the financial industry soared to €5.6 million, 28% higher than the global cross-industry average.
The scope of DORA’s mandate is broad, covering everything from regular IT risk assessments and incident reporting to rigorous testing of digital resilience. Crucially, the regulation also extends to third-party providers of information and communication technology, requiring financial institutions to exercise greater oversight and hold their ICT partners accountable for resilience. This approach aims to mitigate risks across the entire ecosystem, ensuring seamless operations despite potential disruptions.
This shift underscores the importance of robust partnerships between financial institutions and their technology providers.
Data centres are crucial in ensuring that secure, interconnected environments meet the highest standards for data protection and operational resilience. By ensuring their data processing needs are managed in state-of-the-art facilities, financial institutions can maintain uninterrupted service delivery, even during unforeseen disruptions.
Additionally, implementing two (or, as we are now beginning to see, three) mirror-image data centres facilitates rapid recovery and minimises downtime.
Only a third feel fully prepared for DORA
To meet DORA’s stringent requirements, financial institutions and their data centre providers must prioritise several key areas:
Third-party risk management
Managing third-party risks is a cornerstone of DORA compliance. Resilient infrastructure and colocation services help mitigate risks associated with external vendors, providing a stable and secure foundation for maintaining service continuity and adhering to supply chain oversight requirements.
Global interconnectivity
In today’s interconnected financial ecosystem, seamless ICT operations are non-negotiable. Secure, redundant, and multi-path interconnection across geographies ensures resilience and rapid recovery, enabling institutions to maintain critical operations regardless of location. This aligns with DORA’s emphasis on resilience and operational continuity.
Compliance-driven security frameworks
Aligning services with global security standards, such as ISO 27001, supports effective Information Security Management Systems. These frameworks not only bolster resilience but also provide a clear pathway to meeting regulatory requirements, offering institutions the tools they need to address compliance demands with confidence.
The financial services industry is no stranger to change, but the pace of technological advancement and regulatory demands is accelerating. For decades, data centre setups remained relatively static, with incremental improvements in technology. Today, we are witnessing a massive and rapid shift driven by increasing compute and power densities, alongside new regulatory pressures.
While the journey to DORA compliance is well underway for many institutions, the road ahead is not without its challenges. A recent McKinsey report revealed that only a third of financial institutions felt fully prepared to meet DORA's expectations by its January 17 deadline last month. However, the industry's commitment to operational resilience is clear, with many organisations dedicating significant resources to this critical endeavour.
Ashley
For businesses, there is significant potential to derive value from data. Data can enable personalised shopping experiences, improved stock management, bestseller identification, and a deeper understanding of consumer experience to drive a competitive edge. However, with businesses collecting more data than ever before, and a global regulatory landscape governing how data must be collected and protected, businesses are often challenged on how to manage, store, and protect it in a way that optimises for broader business benefit, while at the same time remaining compliant with global regulations and instilling trust in consumers.
The security and availability of this data are paramount, which is why complex regulations such as the General Data Protection Regulation (GDPR), the European Union's Network and Information Security Directive (NIS2), and many other European rules governing how data must be protected exist. Ensuring compliance with these regulations while meeting the fast-paced demands of digital commerce requires businesses to choose a solution that supports both operational needs and regulatory obligations. In this regard, a data centre in Europe is a strong asset.
In light of recent high profile cyber attacks, consumers are naturally concerned about how their data is managed. This comes as no surprise as fraudsters continue to target consumer data, including their loyalty points. Such attacks can at best negatively affect the brand’s reputation and consumer relationships, and at worst lead to serious repercussions for the consumer.
To mitigate this risk, GDPR places specific requirements on what
data can be collected, how it can be stored and processed, when it should be deleted, and requires specific risk assessments. This has encouraged businesses to adopt a stringent, robust framework that increases data protection for EU subjects. As we have seen with rising fines from noncompliance, this should be an absolute priority for businesses.
Here, a local EU data centre will facilitate easier compliance with local regulations, mitigating the risk of accidental breaches and lightening the workload for businesses. Keeping data in the EU through a regional data centre enables businesses to comply with GDPR guidelines, provides additional data security measures, tracks and maintains data logs, and simplifies the required data auditing processes.
Beyond regulatory compliance, global companies face operational risks as data crosses borders and can become more vulnerable to interception. As UK and EU regulatory bodies demand greater transparency and control from businesses, they can benefit from a more centralised and secure data management approach.
An EU data centre enables European companies to closely oversee data handling, storage, and security. This reduces the compliance risks associated with transferring data in and out of the EU. A well-secured data infrastructure not only protects sensitive information but also supports continuous business operations.
As businesses strive to provide the most convenient and seamless customer experiences, a slick checkout experience for consumers is a must. This is where minimised latency in authorising transactions plays an important role. Recent research from Forter found that 77% of UK respondents are likely to abandon their online shopping basket if the process is too difficult or time-consuming. Lagging checkout times may affect the consumer’s decision to return to the retailer, affecting brand reputation and revenue, whereas a smooth and quick checkout experience increases competitiveness.
For companies with UK and EU consumers, opting for a local data centre can positively affect the online shopping experience. When data is stored closer to the end-user, the processing time on each interaction and transaction can be reduced. This leads to quicker transaction decisions, enhancing shoppers’ experiences, whilst improving the overall speed of applications and services.
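To see why proximity matters, a back-of-the-envelope propagation estimate is enough; the distances below are illustrative, and real transactions add processing, routing and TLS overhead on top of this physical floor:

```python
# Rough lower bound on network round-trip time: light in optical fibre
# travels at roughly two-thirds the speed of light, i.e. about 200 km
# per millisecond one way. Figures are illustrative assumptions.

FIBRE_KM_PER_MS = 200

def min_round_trip_ms(distance_km):
    """Physical floor on round-trip time over fibre, ignoring processing."""
    return 2 * distance_km / FIBRE_KM_PER_MS

# e.g. a London shopper reaching a Frankfurt data centre (~640 km)
# versus one on the US east coast (~5,600 km)
print(f"EU:  {min_round_trip_ms(640):.1f} ms")   # 6.4 ms
print(f"US:  {min_round_trip_ms(5600):.1f} ms")  # 56.0 ms
```

Multiplied across the many round trips of a checkout flow, that order-of-magnitude gap is what shoppers feel as a slow or snappy experience.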
For businesses, this can translate into higher customer satisfaction and a competitive edge in the market. These benefits are particularly important during high-traffic events such as holidays, limited edition stock drops, and sales. A local data centre can better support and scale alongside businesses during these peak moments.
That said, storing data in one physical space is akin to placing all the golden goose eggs in one basket. Whilst having data stored locally can facilitate quicker decision times, companies run the risk of losing access to valuable data should the data centre be physically compromised.
Storing data in one physical space is akin to placing all the golden goose eggs in one basket
In this case, companies should consider whether to opt for a segregated or non-segregated local data centre. A segregated EU data centre keeps consumer data strictly within Europe, meaning there are no backups should an accident or malicious attack occur. However, a non-segregated EU data centre is often part of a network, which can send EU-compliant data to other countries as backups.
This not only provides operational resiliency in the face of unforeseen challenges, but ensures global companies have ease of access to data when needed.
As digital commerce grows more competitive and regulated, businesses need to assess their data infrastructure to ensure compliance, improve operational efficiency, and protect valuable assets. Local data centres offer a strategic advantage by providing robust infrastructure, regulatory adherence, and enhanced performance capabilities.
This enables businesses to drive growth and safeguard consumer data, building trust in today’s complex and competitive online landscape. Within retail, we have seen that UK consumers are willing to spend on average 48% more with a retailer they trust. Leveraging a local data centre should be a key consideration for businesses to do just this.
Tod Higinbotham, COO at ZincFive, examines how immediate power solutions (IPS) are overcoming the limitations of traditional energy storage to meet the evolving demands of data centres.
Traditional vs advanced UPS battery technologies
In the ever-evolving landscape of energy storage, the choice of battery chemistry is a pivotal factor that can determine the success of our applications. Different battery technologies offer distinct advantages and disadvantages.
For instance, some chemistries excel in delivering high power for brief intervals, while others provide sustained energy over longer durations at lower discharge rates. Additionally, these batteries vary in safety, reliability, and sustainability. As energy storage becomes increasingly integral to the 21st century economy, it is critical to select the most appropriate battery solution for each application, rather than relying on a one-size-fits-all mentality.
Traditionally, long-duration energy storage has been encapsulated within the framework of energy storage systems, commonly used for applications such as powering electric vehicles and consumer electronics. Conversely, short-duration applications, which prioritise immediate power output, fall under the category of IPS. IPS applications demand instantaneous, high-rate power for durations ranging from minutes to microseconds. These solutions are vital across various sectors, including industrial manufacturing, electric vehicle charging infrastructure, and support systems that help long-duration energy storage and generation products achieve peak power.
One of the most fitting applications for IPS is in the realm of uninterruptible power supply systems, where a battery backup temporarily supplies power for a system until a longer-term power source comes online. These short but pivotal moments can have significant financial and reputational implications, especially for data centres. With the exponential growth of consumer electronics, IoT, and AI, our reliance on digital infrastructure is at an all-time high, making data centre uptime absolutely critical. To meet these escalating demands, data centre operators must contend with workplace safety, rising real estate costs, and increased sustainability expectations from regulators, investors, and clients. These pressures are driving a transition toward backup power solutions that deliver greater reliability, space efficiency, and environmental responsibility.
The mounting demands on data centres have created an environment where reliance on traditional UPS systems, often powered by energy storage systems, results in suboptimal performance. While legacy IPS solutions have historically been used, they frequently compromise on benefits such as footprint and sustainability. Fortunately, innovative IPS battery solutions have emerged, designed specifically to provide immediate, high-rate power essential for managing the critical transition between an outage and backup generator activation – all while improving on the shortcomings of legacy IPS and current energy storage systems.
Lead-acid batteries, a long-standing IPS technology, are often seen as a familiar and reliable choice for UPS in data centres. However, they are increasingly recognised for their limitations in size, sustainability, and power output, making them less suitable in today’s context. Many data centres have relied on lead-acid due to its affordability and widespread availability in the past, but the good news is that enhanced alternatives are now on the market.
Lithium-ion batteries are another option frequently considered by data centre operators for UPS systems, and their popularity grew due to their favourable weight and size compared to lead-acid solutions. As real estate costs escalate, the physical footprint of UPS systems has become a critical concern for operators looking to optimise space for revenue-generating servers. However, lithium-ion batteries are still classified as energy storage systems because their limited discharge rates are designed to mitigate safety concerns, thus failing to fully meet the immediate power needs of UPS systems.
Could nickel-zinc prove to be the most compelling option?
In contrast, nickel-zinc batteries present a compelling immediate power solution, boasting significantly higher power density than both lead-acid and lithium-ion batteries. They can deliver immediate power to an entire data centre while occupying less than half the space of traditional lead-acid systems, thus allowing for additional servers and increased revenue potential. Furthermore, nickel-zinc batteries enhance reliability and sidestep the thermal runaway risks associated with lithium batteries. Their lifecycle emissions are also substantially lower than those of lithium and lead-acid alternatives, with reduced resource consumption and environmental impact.
Transitioning to advanced battery technologies previously posed challenges, including compatibility issues and high retrofitting costs, often stemming from the specialised safety equipment required for lithium-ion systems. However, recent developments in UPS cabinets designed for seamless integration into existing setups have changed the game. These innovations facilitate the straightforward replacement of energy storage systems with IPS, enabling data centres to enhance efficiency, safety, and sustainability without the need for extensive system overhauls.
As data centres continue to be crucial players in driving the global economy, reliable UPS systems are indispensable. The need to increase power density and efficiently eliminate outage risks has shown that we need to look beyond traditional energy storage systems. With the barriers to adopting advanced technology now addressed, data centres are well-positioned to maximise their reliability, safety, and efficiency by embracing immediate power solutions in their UPS systems.
Massimo Muzzi,
Head of Strategy, Business Development and Sustainability at ABB
Electrification, explains how data centres can tackle rising energy demands and regulatory pressures through efficient electrification, renewable integration, and advanced energy management, ensuring sustainability and reliability.
As our personal and professional lives become increasingly digitised, technologies such as AI are reshaping how we produce, manage and consume data. From personalised streaming services to predictive industrial maintenance, AI-driven innovations rely heavily on data centres that store, process, and analyse vast and rapidly growing volumes of information.
The rising tide of data demand has a direct impact on power consumption. In 2022, data centres accounted for about 2% of global electricity use, equivalent to approximately 460 TWh, and projections suggest that this figure could potentially double by 2026. AI workloads are a key driver, with AI-enabled server racks consuming around four times more energy than conventional servers. Meeting this growing demand for power and cooling places new pressures on data centre infrastructure, emphasising the need for reliable and resilient electrification strategies.
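The quoted figures can be sanity-checked with a few lines of arithmetic; note the doubling horizon is the article's projection, not a measured value:

```python
# Quick check of the growth figures quoted above: a 2022 baseline of
# ~460 TWh that "could potentially double" by 2026, and the compound
# annual growth rate that doubling implies.

base_twh, years = 460, 4           # 2022 -> 2026
projected_twh = base_twh * 2       # the doubling scenario
cagr = (projected_twh / base_twh) ** (1 / years) - 1

print(f"2026 projection: {projected_twh} TWh")
print(f"Implied annual growth: {cagr:.1%}")   # ~18.9% per year
```

Roughly 19% compound growth per year is what makes the efficiency and electrification measures discussed below urgent rather than optional.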
Addressing efficiency, sustainability, and regulation
As data centres expand in capacity and density, their operators face mounting scrutiny on energy use. Tighter government regulations and evolving industry standards, such as the European Union’s Climate Neutral Data Centre Pact, push for greater energy efficiency and a transition towards renewable power sources. With the EU aiming for 75% renewable energy use by 2025 and 100% by 2030, global data centre operators must consider integrating low-carbon electricity and improving energy management.
Beyond regulatory compliance, sustainability has become a competitive differentiator. Implementing energy-efficient power distribution, renewable integration, and advanced management software can reduce environmental impact and enhance scalability, uptime, and overall resilience. At a time when AI, machine learning, and 5G continue to boost data workloads, investing in solutions that minimise energy consumption and operational costs is both an environmental and a business imperative.
Sustainability has become a competitive differentiator
Optimising data centre energy management
Meeting escalating data demands requires an integrated approach to electrification. Combining next-generation switchgear, circuit breakers, uninterruptible power supplies, renewable integration, and battery energy storage systems can help data centres balance high performance with more efficient power usage. Selecting the right mix of technologies allows operators to adapt to new workloads, manage fluctuating power requirements, and safeguard reliability.
Critically, software solutions serve as the glue that binds these elements together. Intelligent energy management platforms offer real-time monitoring, optimisation of power flows, and predictive maintenance capabilities. By providing visibility into consumption and operational metrics, these tools can help operators reduce waste, rapidly detect potential issues, and streamline capacity planning.
Supporting scalability and uptime
Improving energy efficiency directly supports data centre scalability. Reducing power loss, improving equipment performance, and ensuring consistent cooling can free up capacity for more resource-intensive applications and higher data traffic volumes. In turn, this underpins the industry’s pursuit of the coveted 99.999% (five nines) target for service availability. With fewer power disruptions and smoother operations, data centres can comfortably handle growth while maintaining the service reliability end-users have come to expect.
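For context, availability percentages translate into allowed downtime as follows (simple arithmetic over a calendar year, ignoring planned maintenance windows):

```python
# What an availability figure means in allowable downtime per year:
# the arithmetic behind "five nines" and similar targets.

SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_per_year_s(availability):
    """Maximum unavailable seconds per year at a given availability."""
    return SECONDS_PER_YEAR * (1 - availability)

five_nines = downtime_per_year_s(0.99999)
print(f"99.999% availability -> {five_nines:.0f} s/year (~5.3 minutes)")
```

By the same arithmetic, each additional nine cuts the allowance tenfold, which is why availability targets are such a demanding way to specify reliability.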
Enhancing grid availability is also vital. A stable and consistent electricity supply reduces reliance on backup generators and cuts maintenance and fuel costs. For operators, better grid reliability means more predictable expansion strategies, as infrastructure investments can be planned confidently. The payoff is a more substantial business case for modernisation and the ability to deliver uninterrupted, high-quality services.
Addressing energy challenges in data centres is not a solo endeavour. Operators, technology providers, and power utilities must collaborate to integrate renewable energy sources, advanced power infrastructure, and robust monitoring systems seamlessly. Industry players that offer comprehensive solutions, from high-efficiency switchgear and transformers to UPS systems and microgrid integration, can help data centre operators navigate this complex landscape.
For example, solutions that combine renewable energy inputs with battery storage and intelligent control systems enable data centres to reduce their carbon footprint, maintain business continuity, and align with evolving regulations. By partnering with experienced suppliers and leveraging their expertise, operators can better balance the goals of efficiency, uptime, and sustainability.
As data volumes continue to climb, optimising energy usage and ensuring reliable electrification will remain central concerns. Data centre operators can create a more sustainable, flexible, and futureproof infrastructure with the right mix of efficient technologies, digital monitoring tools, and renewable energy integration. Embracing these approaches will meet the world’s growing hunger for data and ensure that tomorrow’s data centres are equipped to handle the challenges and opportunities of an increasingly data-driven and energy-hungry world.
Amit Sanyal, Senior Director of Data Centre Product Marketing at Juniper Networks, delves into the strategies and technologies driving greener and more efficient data centre operations.
As the demand for data centres skyrockets and new facilities emerge at a remarkable pace, the need for sustainable solutions has become a key industry priority.
With energy consumption and carbon emissions reaching critical levels, the call for innovation is more pressing than ever. Addressing these challenges demands a reimagining of infrastructure, energy sources and operational practices to pave the way for a greener future.
2025 is poised to bring significant advancements to data centres, driven by three key trends aiming to make them greener and more sustainable.
Smarter data centres are coming
2025 will see the rise of AI and automation transforming the way data centres operate, making them more energy-efficient and sustainable. By leveraging AI-driven algorithms, data centres can monitor energy usage in real-time and make dynamic adjustments to refine consumption. This ensures that power is allocated precisely where and when it is needed, minimising waste and therefore lowering operational costs.
Predictive maintenance is another critical area where AI is having a significant impact on sustainability. By analysing data from sensors and systems, AI can identify potential equipment failures before they happen. This proactive approach reduces downtime, extends the lifespan of critical hardware, and prevents energy inefficiencies associated with malfunctioning equipment.
AI-powered dynamic load balancing is also a critical trend in driving greener data centres in 2025. By intelligently distributing workloads across servers, this technology minimises energy waste by preventing overloading and underutilisation of resources. Optimising server usage reduces the overall energy demand, enabling data centres to operate more efficiently while maintaining peak performance. This approach directly contributes to lowering carbon footprints and aligns with the industry’s shift toward sustainable practices, making it an essential component in the pursuit of greener, more energy-efficient data centre operations.
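The balancing idea can be sketched as a greedy "least-loaded" assignment: place each workload, heaviest first, on whichever server currently carries the least load, which keeps utilisation even and avoids the overload/underutilisation split described above. This is a simplified illustration, not any vendor's scheduler; the job names and load units are invented.

```python
import heapq

def balance(workloads, servers):
    """Greedy least-loaded placement: assign each workload (heaviest
    first) to the server with the lowest current load, keeping
    utilisation even across the fleet."""
    heap = [(0.0, s) for s in servers]  # (current load, server)
    heapq.heapify(heap)
    placement = {}
    for job, load in sorted(workloads.items(), key=lambda kv: -kv[1]):
        current, server = heapq.heappop(heap)   # least-loaded server
        placement[job] = server
        heapq.heappush(heap, (current + load, server))
    return placement

jobs = {"train": 8.0, "infer": 3.0, "etl": 2.0}
print(balance(jobs, ["s1", "s2"]))
# {'train': 's1', 'infer': 's2', 'etl': 's2'} -> loads of 8.0 vs 5.0
```

Real schedulers add constraints (affinity, memory, thermal headroom) and use ML-driven load forecasts, but the energy argument is the same: evenly loaded servers waste less power than a mix of saturated and idle ones.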
Optical connectivity consumes a significant portion of network power for data centres. In 2025, we will see data centres increasingly turning to energy-efficient optical technologies such as Co-Packaged Optics (CPO), Linear Pluggable Optics (LPO), and Linear Receive Optics (LRO), which are revolutionising high-speed data transmission. These optical modules consume less power while optimising bandwidth, enabling faster data transfers with lower latency and a reduced environmental impact. By adopting these innovations, data centres can better handle the ever-increasing volume of data without a proportional increase in energy use.
As AI workloads grow, the cooling demands of high-performance processors are surpassing the capabilities of conventional methods. To address cooling challenges, some companies are building facilities in colder climates, leveraging naturally lower temperatures to reduce energy-intensive cooling requirements. The adoption of Direct Liquid Cooling (DLC) is also gaining traction, where coolant is circulated through pipes directly to the hottest components, such as GPUs and CPUs, transferring heat more efficiently than traditional air cooling.
Additionally, innovative cooling strategies like liquid immersion cooling are gaining attention as transformative solutions. In liquid immersion cooling, servers are submerged in a specialised dielectric fluid, which offers superior heat dissipation compared to conventional air cooling. The fluid absorbs heat directly from components and circulates it away, significantly reducing reliance on energy-intensive air conditioning systems. This approach delivers superior efficiency, contributes to substantial power savings and reduces noise.
AI-driven cooling systems are also emerging as the energy demands of AI applications continue to rise. These systems use sensors and machine learning algorithms to optimise cooling dynamically, making real-time adjustments to prevent overcooling and reduce energy consumption. Predictive maintenance and dynamic load balancing further maximise resource utilisation, minimise energy waste and lower the carbon footprint of data centres, enabling measurable reductions in environmental impact.
In 2025, by adopting energy-efficient optics, liquid cooling and AI-driven systems, data centres will not only achieve sustainability targets but also set new benchmarks for sustainable operations amidst growing digital demands.
In 2025, data centres are set to increasingly adopt nuclear power to meet rising compute demands while cutting carbon emissions. As a cornerstone of sustainable energy, nuclear power is gaining prominence for its reliability and low environmental impact.
Small modular reactors (SMRs) and other scalable nuclear solutions provide a reliable, low-carbon energy source, well-suited to the energy-intensive requirements of AI and other advanced technologies. With high energy density and near-zero carbon emissions, nuclear power stands out as a viable alternative to traditional fossil fuels, enabling data centres to achieve enhanced sustainability and energy security while supporting the global transition toward a greener future.
By combining these advancements with initiatives such as investments in renewable energy and optimised cooling systems, data centres are poised to achieve net zero in the near future. These efforts lower operational costs and contribute to broader environmental objectives, setting new industry benchmarks for energy efficiency and sustainability. The integration of energy-efficient optics and nuclear power further accelerates the transition to greener, more sustainable data centres, making this vision increasingly achievable in 2025 and beyond.
By embracing cutting-edge technologies, committing to tangible net-zero goals and prioritising environmentally conscious strategies, data centres are positioned to play a key role in shaping a sustainable digital future, all while driving innovation and maintaining global connectivity.
Chris Payne, Co-founder & Director of PureTec Separations Ltd, examines the role of cutting-edge water treatment technologies in ensuring that data centres can grow without compromising their sustainability commitments.
The changing landscape of computing power, largely fuelled by advancements in AI, is placing new demands on data centres worldwide. Industry forecasts suggest computing power will increase significantly over the next five years, with AI alone anticipated to require 6.6 billion litres of water annually by 2027.
As critical enablers of the digital economy, data centres must navigate this rise in demand while balancing environmental compliance and public accountability. With sustainability now a global priority, efficient water treatment emerges as a vital component in achieving scalable and eco-friendly growth.
Water plays an indispensable role in data centre operations, with treatment required across three primary areas: pretreatment of incoming water, processing water used in cooling systems, and managing wastewater. Optimising these processes can help data centres achieve their dual objectives of enhanced performance and sustainability.
The quality of water entering a data centre directly influences the efficiency and longevity of its cooling systems. Pretreatment removes impurities, minimises scaling, and reduces corrosion risks, ensuring operational reliability. Cooling systems are among the largest consumers of water in data centres. Effective water treatment processes ensure these systems operate at peak efficiency while minimising environmental impact. Innovations in this area can significantly reduce water and energy consumption, creating a more sustainable operational framework.
As data centres expand, wastewater management becomes increasingly critical. Traditional disposal methods are expensive and environmentally unsustainable. Zero liquid discharge (ZLD) systems offer a transformative alternative by recovering water for reuse within the facility. Additionally, incorporating treated effluent from external sources, such as nearby industrial sites, into the cooling process can reduce reliance on freshwater resources and regional supply – a particularly pressing issue highlighted by the recent concerns raised around AI growth zones, like the one planned in Oxfordshire.
With AI adoption driving exponential growth in computing power, scalable and sustainable water treatment systems are essential. Advanced technology, tailored system design, and strategic resource management are all key to meeting the challenges ahead.
Each data centre has specific requirements based on factors such as local water quality, facility size, and cooling system design. Bespoke water treatment systems offer the flexibility to meet current demands and adapt to future growth.
Data-driven decision-making is essential for efficient scaling. Predictive performance analytics, supported by trending tools and machine learning algorithms, enable real-time monitoring of system efficiency. These insights help identify inefficiencies, predict maintenance needs, and optimise water use, ensuring reliable performance even during periods of rapid growth.
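One of the simplest forms of such trending is fitting a slope to successive efficiency readings. The sketch below computes a least-squares slope over equally spaced membrane flux readings; a sustained negative slope suggests fouling, so maintenance can be scheduled before efficiency collapses. It is purely illustrative, with invented readings; real plants use normalised flux and far richer models.

```python
def flux_trend(readings):
    """Least-squares slope of permeate flux over equally spaced
    readings. A sustained negative slope suggests membrane fouling.
    Illustrative only; not a production analytics tool."""
    n = len(readings)
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(readings))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical daily flux readings (L/m²·h) drifting downwards
slope = flux_trend([20.0, 19.6, 19.3, 18.8, 18.5])
print(round(slope, 3))  # -0.38 -> a clear fouling trend
```

A monitoring system would compare this slope against a threshold and raise a maintenance ticket once the decline rate, rather than the absolute flux, crosses it.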
High-efficiency water treatment technologies are also central to reducing both water and energy consumption. For example, variable-power reverse osmosis (RO) systems adjust energy usage based on real-time demand, while electrodeionisation (EDI) technologies eliminate the need for hazardous chemicals, simplifying operations and lowering environmental impact.
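To make the "variable power" point concrete: an RO feed pump's hydraulic power scales with flow and pressure (P = Q·Δp/η), so a drive that turns flow down with demand cuts power roughly in proportion instead of running flat out. The function below is a back-of-envelope sketch; the 75% efficiency figure and the flow and pressure values are assumptions, not vendor data.

```python
def pump_power_kw(flow_m3h, pressure_bar, efficiency=0.75):
    """Hydraulic power for an RO feed pump, P = Q * dp / eta, in kW.
    Assumed 75% efficiency; illustrative figures, not vendor data."""
    flow_m3s = flow_m3h / 3600          # m³/h -> m³/s
    pressure_pa = pressure_bar * 1e5    # bar -> Pa
    return flow_m3s * pressure_pa / efficiency / 1000

full = pump_power_kw(50, 15)       # full production: ~27.8 kW
turndown = pump_power_kw(30, 15)   # demand-led turndown: ~16.7 kW
print(f"{full:.1f} kW vs {turndown:.1f} kW")
```

The gap between the two figures is the saving a variable-frequency drive can capture whenever cooling demand, and hence make-up water demand, drops.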
Sustainability in data centre operations increasingly depends on reducing freshwater dependency through reuse and recycling systems. Treated wastewater can be repurposed for cooling or cleaning, creating a closed-loop system that minimises waste. ZLD systems further enhance efficiency by recovering all usable water, leaving no liquid waste for disposal.
Utilising non-freshwater sources, such as seawater or brackish water, is another effective strategy. Advanced desalination technologies make these sources viable, while treated effluent from industrial or municipal processes can be integrated into operations, reducing strain on local water supplies.
The relentless pace of technological advancement presents a dual challenge of scaling up operations while maintaining sustainability. By embracing innovative technologies and best practices, data centres can turn water treatment into a strategic advantage — similar to sectors like food manufacturing and power generation, where optimised water use drives performance.
Efficient water treatment is not only vital for meeting immediate operational needs but also for aligning with broader sustainability goals. As the industry enters an era of unprecedented growth, scalable and sustainable water management solutions will be key to ensuring success.
In an industry this fast-paced and volatile, you need to be as agile as your customers – and we know what data centre and mission-critical facility operators demand. Centum™ Force Series, our game-changing range of containerised solutions, has you covered with Cummins' renowned robust power in the most flexible package possible.
• Stackable design – maximise space and increase power
• Accelerated startup – faster commissioning and cost savings
• Integrated systems – for faster repositioning or redeployment
• Global support – wherever you need power, we’ve got your back