Data Centre Review Spring 2024


This issue

Features

8 • Sustainability

Scott Tease at Lenovo Infrastructure Solutions Group outlines how to build the sustainable data centre of the future.

10 • Water Usage

How can we prioritise sustainable water use in data centres? VIRTUS’ David Watkins shares his thoughts.

15 • Cloud Computing

Azul’s Simon Ritter takes a look at how optimising Java-based infrastructure can save on cloud overspend and improve efficiency.

20 • Regulations

Jon Brooks of QiO Technologies explores whether the sector is prepared for increased sustainability scrutiny.

22 • Cooling

Red Helix’s Liam Jackson explains how adopting liquid cooling solutions can drive both cost and energy efficiencies.

26 • Energy Management

Louis McGarry at Centiel UK dives into how data centres can better monitor and manage their energy consumption.

30 • Networking

Paul Gampe of Console Connect explores the complex relationship between generative AI and network connectivity.

32 • Battery Storage

ESS Inc’s Alan Greenshields explains how long-duration energy storage (LDES) could be key in enabling the AI revolution.

Regulars

37 • Final Say

John Booth, Chair of the DCA – Energy Efficiency & Standards Steering Committee, discusses the discrepancies in data centre energy consumption reporting.


Editor’s Comment

90 seconds to midnight

“A moment of historic danger: it is still 90 seconds to midnight,” proclaims the Bulletin of the Atomic Scientists’ Doomsday Clock website.

I don’t know about you, but the constant warnings about how close we are to collective destruction over the last five or so years has left me, for lack of a better word, a tad desensitised. It feels like against the crushing tidal wave of climate disaster, my tiny boat (on which I’m frantically recycling cereal boxes and drinking out of paper straws) is going to end up upside down at the bottom of the bay. It’s easier – and better for the good old mental health – to just switch off and focus on trivial, daily challenges that fill up my time.

And that’s exactly the problem – complacency, or nihilism, or whatever you want to call it, means that our steamroll towards an uninhabitable planet charges on. It’s something the sector has struggled with; we’re constantly talking about how this powerhungry industry needs to make some real, practical changes – but the implementation of that kind of change is more difficult, and costly, in practice.

However, things might be about to change now that a host of sustainability regulations are on the horizon. With the likes of the EU’s Corporate Sustainability Reporting Directive (CSRD) and Energy Efficiency Directive (EED) (to name but a few) coming into force, will this be the time when digital infrastructure finally makes good on its promises? And more importantly, is the sector in a position to do it quickly enough?

With AI, HPC and heightened demand only adding more complexity to the mix, the industry is facing a turning point – how well we will meet this challenge remains to be seen.

Turning to more domestic matters, we have quite a few exciting new projects taking place here at DCR.

April will see our brand-new, must-attend event that will explore the challenges facing data centre cooling – details on the agenda and how to register can be found on page 39.

The ER & DCR Excellence Awards will also be taking place in May, showcasing the best and the brightest people, projects and products in the sector – so make sure you book your ticket for this exciting night, which will be taking place at Christ Church in Spitalfields.

If you’d like to get in touch to share any news, views or reviews, as always, please don’t hesitate to drop me an email at kayleigh@ datacentrereview.com, and find us on X (@dcrmagazine) and on LinkedIn (Data Centre Review).

CONTRIBUTING EDITOR

Jordan O’Brien

jordano@sjpbusinessmedia.com

DESIGN & DIGITAL PRODUCTION

Rob Castles

robc@sjpbusinessmedia.com

GROUP ACCOUNT DIRECTOR

Sunny Nehru

+44 (0) 207 062 2539

sunnyn@sjpbusinessmedia.com

BUSINESS DEVELOPMENT MANAGER

Tom Packham

+44 (0)7741 911 317

tomp@sjpbusinessmedia.com

GROUP COMMERCIAL DIRECTOR

Fidi Neophytou

+44 (0) 7741 911302 fidin@sjpbusinessmedia.com

PUBLISHER

Wayne Darroch

PRINTING BY Buxton

Paid subscription enquiries: subscriptions@datacentrereview.co.uk

SJP Business Media

Room 4.13, 27 Clements Lane London, EC4N 7AE

Subscription rates:

UK £221 per year, Overseas £262

Data Centre Review is a controlled circulation monthly magazine available free to selected personnel at the publisher’s discretion. If you wish to apply for regular free copies then please visit: www.datacentrereview.co.uk/register

Data Centre Review is published by

Industry-leading b2b technology brands

Room 4.13, 27 Clements Lane London, EC4N 7AE 0207 062 2526

Any article in this journal represents the opinions of the author.

This does not necessarily reflect the views of Data Centre Review or its publisher – SJP Business Media

ISSN 0013-4384 – All editorial contents © SJP Business Media

Follow us on X @DCRmagazine

Join us on LinkedIn

EDITOR

Kayleigh Hutchins

kayleigh@datacentrereview.com

The next generation of modular UPS

Riello UPS Business Development Manager, Chris Cutler, explores the next generation of ultra-high efficiency modular UPS systems and how they will help data centre operators reduce their total cost of ownership.

While every UPS manufacturer strives to achieve 100% efficiency, current technology – not to mention the laws of physics – makes that impossible. However, ongoing developments mean that we can now get pretty close.

The latest generation of modular uninterruptible power supplies is capable of exceptionally high efficiency while still delivering the robust and reliable protection that data centres rely on.

A brief history lesson

In years past, most UPS systems were designed and manufactured using a two-level inverter architecture and could typically deliver efficiency of around 96%.

Designs evolved from two- to three-level architectures, which needed more components and processing power to control, but helped boost efficiency up to 96.5%. Adapting the materials used in filter design enabled UPS manufacturers to push efficiency levels upwards of 97%.

So what’s the next big step? How can we boost efficiency even further?

The biggest change is likely to come in the shift from traditional silicon-based IGBT components to silicon carbide (SiC) semiconductors. Of course, SiC is nothing new; it’s widely used in the electric car industry, which has helped make the technology more readily available.

In UPS manufacturing, silicon carbide components offer several inherent advantages over standard IGBTs: they are more efficient as they exhibit lower electrical resistance, which reduces energy losses; deliver increased power density; and can operate at higher temperatures.

SiC also has faster switching capabilities, resulting in a more responsive UPS, and it is more durable than IGBT too, leading to extended component lifecycles and reduced maintenance needs.

Smart, scalable, sustainable power

Multi Power2 is the evolution of our modular UPS range. Building on nearly a decade of success of the original Multi Power series, Multi Power2 comprises the 500 kW MP2 and the scalable M2S, which is available in 1,000, 1,250 and 1,600 kW versions. Up to four UPS can be installed in parallel, enabling the M2S to protect data centres requiring up to 6,400 kW of power in a single system.

The modular nature of the solution offers risk-free ‘pay as you grow’ scalability, which reduces the likelihood of wasteful oversizing and helps optimise the initial investment.

Embracing state-of-the-art silicon carbide components, the new range features high density 67 kW power modules that enable the UPS to achieve best-in-class ultra-high efficiency of up to 98.1% in online mode, ensuring maximum protection to the critical load, whilst minimising a data centre’s operating costs and energy losses.

Before we move on, just a quick point on UPS efficiency. When stating their efficiency ratings, many UPS manufacturers use an ‘average efficiency’ figure, which effectively is a combination of the UPS operating modes over a period of time.

For example, the period might cover 24 hours split between 12 hours running in online mode followed by 12 hours in ‘ECO’ or economy mode, where a boost in efficiency comes alongside a trade-off of reduced protection to the critical load.

For some applications this is perfectly acceptable, but not for a data centre. At some point, you’ll be exposing your infrastructure to higher levels of risk from a power disruption.

These average efficiency ratings aren’t a true reflection of the real-life operating conditions and will result in artificially inflated efficiency values. With Multi Power2, you can be assured that the 98.1% efficiency is in true online UPS mode – there’s no reliance on the ‘smoke and mirrors’ of average efficiency to inflate the actual figure.
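
To see why a blended figure flatters the numbers, the short calculation below compares a quoted ‘average’ with what a load running in online mode all day actually loses. The 12-hour online / 12-hour ECO split and the efficiency values are illustrative assumptions, not Riello’s published data.

```python
# Illustrative only: compares a 'blended' average-efficiency figure with true
# online-mode operation. The 12h/12h split and the efficiency values below are
# assumptions for the example, not vendor specifications.
load_kw = 1000            # assumed constant critical load
online_eff = 0.971        # assumed online (double-conversion) efficiency
eco_eff = 0.99            # assumed ECO/economy-mode efficiency

# 'Average efficiency' as often quoted: 12 hours online + 12 hours ECO per day
blended_eff = 0.5 * online_eff + 0.5 * eco_eff

# Daily losses: what the blended figure implies vs running online all day
losses_implied_kwh = load_kw * (1 / blended_eff - 1) * 24
losses_online_kwh = load_kw * (1 / online_eff - 1) * 24

print(f"Blended figure: {blended_eff:.2%}")
print(f"Implied losses: {losses_implied_kwh:.0f} kWh/day, "
      f"actual online losses: {losses_online_kwh:.0f} kWh/day")
```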

Efficiency at all load levels

Thanks to its Smart Modular Architecture and intuitive Efficiency Control Mode, Multi Power2 delivers ultra-high efficiency even when your data centre is running at lower loads.

Efficiency Control Mode is a sophisticated feature where the UPS’ microprocessors and control software only activate the required number of power modules according to the real-time load and required redundancy, putting any surplus modules into standby.

Doing so ensures the UPS consistently operates at optimum efficiency. Standby power modules can stay in this energy-saving state for several hours, before swapping with one of the active modules to ensure components age at a similar rate.

If there’s any disruption to the mains supply, all the inactive modules immediately restart to provide maximum protection. The same happens if there’s a fault with any of the modules or a sudden increase in the data centre’s load.

Practical proof of savings

As you’d expect, ultra-high efficiency modular UPS such as Multi Power2 deliver cost, energy, and carbon emissions savings when replacing old legacy monolithic UPS systems. But they also significantly outperform standard modular solutions too.

What follows are two actual TCO calculations provided to data centres in the process of upgrading their UPS. Comparison 1 outlines the savings as a result of replacing a monolithic 1 MW N+1 UPS system (made up of 3 x 12-pulse 600 kVA 0.9pf UPS) with a 1,250 kW Multi Power2.

Total annual energy cost saving: £95,759.65
Overall total annual cost saving: £117,544.13
Total annual CO2 saving: 148.7 tonnes
Total 15-year cost saving: £2,353,655.21
Total 15-year CO2 saving: 1,702.8 tonnes

Comparison 2 showcases the savings delivered by a 1,250 kW Multi Power2 at 1 MW load versus another manufacturer’s latest modular offering, with their system comprising 3 x 400 kVA modular units.

Total annual energy cost saving: £51,099.24
Overall total annual cost saving: £53,839.91
Total annual CO2 saving: 79.4 tonnes
Total 15-year cost saving: £1,078,068.22
Total 15-year CO2 saving: 908.7 tonnes
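
For readers who want to see how an efficiency delta turns into annual figures of this order, the illustrative calculation below uses assumed inputs (a 93% legacy efficiency, £0.25/kWh and a 0.21 kg CO2e/kWh grid factor); these are not the actual inputs behind Comparisons 1 and 2.

```python
# Illustrative only: how a UPS efficiency gain translates into annual energy,
# cost and CO2 savings. All inputs below are assumptions for the example.
load_kw = 1000
hours_per_year = 8760
old_eff, new_eff = 0.93, 0.981        # assumed legacy UPS vs new modular UPS
price_per_kwh = 0.25                  # assumed GBP/kWh
co2_per_kwh = 0.21                    # assumed kg CO2e per kWh of grid power

old_losses_kwh = load_kw * (1 / old_eff - 1) * hours_per_year
new_losses_kwh = load_kw * (1 / new_eff - 1) * hours_per_year
saved_kwh = old_losses_kwh - new_losses_kwh

print(f"Energy saved: {saved_kwh:,.0f} kWh/year")
print(f"Cost saved:   GBP {saved_kwh * price_per_kwh:,.0f}/year")
print(f"CO2 saved:    {saved_kwh * co2_per_kwh / 1000:,.1f} tonnes/year")
```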

As well as these substantial cost and carbon savings, the innovative design behind Multi Power2 delivers other significant benefits.

For example, capacitors usually need replacing between years five and seven of a UPS’ service life. Over the course of a typical 15-year lifespan, they would require swapping out two or potentially three times, which all adds up.

However, the long-life components used in Multi Power2 mean it is realistic to go through the entire 15-year lifespan of the product without having to replace the capacitors at all. This results in fewer end-of-life components and materials that require recycling and disposal too.

Because of its robust design that avoids any single point of failure and the hot-swappable nature of all modules and major components, Multi Power2 combines a high MTBF (Mean Time Between Failures) with a low Mean Time To Repair (MTTR) of less than 30 minutes. Sophisticated humidity and temperature sensors incorporated into every power module help to build up a full thermal map and assist with monitoring and predictive maintenance.

Taking all these factors into account, a typical site could save £80,000 to £120,000 or more in maintenance costs over the 15-year lifespan of the UPS. This figure doesn’t even consider some of the additional knock-on environmental benefits, such as reducing the number of miles service engineers travel as a result of fewer maintenance visits.

Riello UPS will be exhibiting at Data Centre World 2024 at ExCeL London on 6-7 March. The team will be available at stand D430 to showcase its range of data centre solutions such as Multi Power2, industry-leading UPS maintenance plans, and a five-year extended warranty on all UPS systems up to and including 3 kVA. Register for a FREE visitor pass at www.datacentreworld.com/RielloUPS


GREEN MACHINE BUILDING THE FUTURE

Scott Tease at Lenovo Infrastructure Solutions Group explores the steps needed to build the sustainable data centre of the future.

Contrary to what some people may believe, very little in business happens by luck. Success in anything results from planning, revising those plans based on new conditions, and adjusting resources to accommodate changes. Success in the next era of AI computing will be no different. The adjustment that everybody is making is around the concepts of sustainability and the environmental impact of our IT choices.

Why the focus on data centre sustainability now? Simply put, data has become ubiquitous in our lives. Since 2010, the number of people using the internet has more than doubled, according to the World Bank, with internet traffic multiplying more than 20-fold. Add to that billions of connected devices and sensors coming online every day across the world.

All that data requires data centres to process our orders, diagnose our cars and ourselves, store the movies we stream, and get us a ride from the airport. All those data centres require electricity to operate. In fact, it is estimated that today, data centres consume 3% of the electrical power generated. With the introduction of AI applications like ChatGPT or Bard, this consumption may double again in far less than a decade.

Organisations need to establish a sustainability strategy that is clearly guided by policies and practices that consider every stage of the IT lifecycle. They must take sustainability into account across the whole of their value chain, stretching from logistics to services.

Collaboration and considering vendor practices

Most IT customers view an asset as ‘theirs’ upon delivery. When building a sustainable IT practice, consideration must be given to the value chain before the truck arrives at the loading dock. This changes the way you interact with your supply chain, making it much more collaborative, and it can require difficult conversations with suppliers and partners.


From product concept through build, vendors should be evaluated on their commitment to reducing the environmental impact of manufacturing the IT gear you buy. Do they design for sustainability with recyclable materials? Does their factory utilise renewable power? Can they test on-site without shipping to another facility? Do they look for ways to reduce power in the build process?

Once the product is completed, how is it packaged? Does the vendor have a commitment to reducing ocean-bound plastics (OBPs) and other environmentally harmful materials? Is there an option to ‘bulk’ ship to save on both packaging and fuel?

Changing the biggest impact on IT carbon footprint

While designing, building and shipping a product has an impact on its carbon footprint, the largest contributor is the actual operation of the system itself. Most enterprise IT systems are in 24/7 operation for four to seven years or more. Reducing the power an IT asset consumes will have the largest impact on sustainability. Sustainability in the data centre requires planning at every stage of the process, from how the mix of energy is used to how servers are cooled.

Most power companies use multiple methods of generating electricity. Though burning fossil fuels is still the dominant means, renewables like solar, wind, and hydro are growing. Customers should see if they can specify renewable sources from their power provider.

Once powered on, 30-40% of the electricity consumed by the data centre is used for one purpose: cooling. In a traditional air-cooled data centre, cold air is produced from the HVAC system, pushed into the ‘cold aisle’ at the front of a rack, and then the system fans pull the cold air over the system components to remove the heat. The now warm air is expelled to the ‘hot aisle’ and rises back into the air handling units, up to the HVAC system to begin the cycle again. That 30-40% does not process an order or hail an Uber. It just moves cold air from the front of the rack to the rear and turns it into hot air. Clearly, rethinking how cooling is done in the data centre and inside the servers is one of the first places we should look.
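
To put that 30-40% in context, here is a minimal, illustrative calculation of how cooling overhead feeds into a facility’s Power Usage Effectiveness (PUE). The 1,000 kW facility, 35% cooling share and 10% of other overheads are assumptions for the example, not figures from the article.

```python
# Illustrative only: rough split of facility power when ~35% goes to cooling.
total_facility_kw = 1000
cooling_kw = total_facility_kw * 0.35         # mid-point of the 30-40% range above
other_overhead_kw = total_facility_kw * 0.10  # assumed UPS losses, lighting, etc.
it_kw = total_facility_kw - cooling_kw - other_overhead_kw

pue = total_facility_kw / it_kw               # Power Usage Effectiveness
print(f"IT load: {it_kw:.0f} kW, cooling: {cooling_kw:.0f} kW, PUE = {pue:.2f}")
```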

The demand for data-intensive, power-hungry applications isn’t going to abate in the coming years, with CPUs and GPUs continuing to draw more power. So how do we cool the components of a system without fans and traditional air conditioning? The answer is liquid.

Liquid cooling is a technique that dates to mainframes starting in the 1960s. Today, liquid cooling technologies are a key factor in sustainable computing and will also enable the next generation of supercomputers built to tackle large challenges such as climate change.

Even traditional air-cooled systems can become more efficient by implementing energy control software that balances performance and energy usage, and can slow the fans down depending on the workload being run.
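
As a rough illustration of that idea, the sketch below maps inlet temperature and CPU utilisation to a fan speed. The set-points, limits and linear ramp are purely assumptions for illustration; no vendor’s actual fan-control algorithm is implied.

```python
def fan_speed_pct(inlet_temp_c: float, cpu_util: float) -> float:
    """Return a fan speed (20-100%) from inlet temperature and CPU utilisation (0-1)."""
    temp_term = (inlet_temp_c - 22.0) / (35.0 - 22.0)  # 22 C -> 0.0, 35 C -> 1.0
    demand = max(temp_term, cpu_util)                  # respond to whichever is worse
    return min(100.0, max(20.0, 20.0 + 80.0 * demand))

print(fan_speed_pct(24.0, 0.30))  # light workload, cool inlet -> fans slow down
print(fan_speed_pct(32.0, 0.90))  # heavy workload or warm inlet -> fans speed up
```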

By reducing the power consumed during the life of the asset, we can have the greatest impact on its carbon footprint. So, how can we negate all that power consumed? Certainly, the industry has responded with offerings like carbon offset credits that promise to equalise a system’s total carbon footprint. But wouldn’t it be better to minimise the footprint in the first place?

End of life

When an IT asset has reached the end of its useful life, how it is disposed can have a tremendous impact on the carbon footprint. Often the disposal decision is more financial than technical: the equipment has reached the end of its lease, and newer systems will cost less on a monthly basis. Much of it ends up shipped overseas to be scrapped, with very negative environmental consequences in those localities.

Offerings like Lenovo’s asset recovery services (ARS) ensure that an end-of-life system is reused as much as possible, with as small a portion of the system as possible being scrapped. Users receive a welcome cash injection, and Lenovo can use the core materials, which have lower environmental impact than virgin materials, to recycle into the next generation of products.

Data centres and the future

The demand for computing power to drive business intelligence and scientific breakthroughs is only going to intensify. In the past, IT folks were given one command: “Get it (whatever ‘it’ was) running as soon as possible!” Today, they are being asked to balance that drive for IT results with helping achieve the enterprise’s ESG goals.

To genuinely have a longer-lasting, meaningful impact, IT leaders must take a holistic view of the product from development to end-of-life, using asset recovery services to find the most sustainable way for products to be reused and recycled. For business leaders, choosing the environmentally friendly option to refurbish or recycle is also a great way to drive a better bottom line, saving money on IT while also helping to save the planet.


Solving the data centre water dilemma

How can we prioritise sustainable water use in data centres? VIRTUS’ David Watkins shares his thoughts.

At the heart of our digital existence, data centres quietly hold the key to all that we do. From streaming our favourite shows to managing our finances and accessing a world of information, data centres are the silent architects of convenience and connectivity.

However, their usage and growth need to be managed both economically and sustainably. The rapid proliferation of data centres has understandably led to critical questions being asked about their environmental impact, particularly concerning water usage in cooling technologies.

The big water issue

As data centres expand to meet an ever-increasing demand which is being fuelled even further by AI, some facilities require significant amounts of water for cooling and other operational processes in order to keep systems running. In some cases, this reliance on water resources can place stress on local water supplies, causing concerns about water scarcity in regions already grappling with this challenge. It’s been the subject of some headlines and understandably people within and outside the industry are concerned.

According to research carried out by Savills, it is thought that a data centre may use up to 26 million litres of water each year, on average, per megawatt of data centre power. Although this appears to be a worrying statistic, it should be acknowledged that unnecessary water leakage caused by water companies is also a major reason for concern. According to OFWAT, in 2020-21, England and Wales leaked 51 litres of water per person per day, and in Scotland and Northern Ireland this figure was above 80 litres of water.
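
For context, that Savills figure can be converted into a per-kilowatt-hour rate with some simple arithmetic, assuming the megawatt of IT load runs continuously for the full 8,760 hours of the year.

```python
# Illustrative only: ~26 million litres per MW per year expressed per kWh of IT energy.
litres_per_mw_year = 26_000_000
kwh_per_mw_year = 1000 * 8760            # 1 MW running flat out all year

litres_per_kwh = litres_per_mw_year / kwh_per_mw_year
print(f"~{litres_per_kwh:.1f} litres of water per kWh")   # roughly 3 L/kWh
```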

What should also be understood is that many large data centres use ‘closed loop’ chilled water systems – meaning that water is charged into the system during construction and then continually circulated within a facility, rather than needing new water consistently pumped into the building. A large-scale data centre will be filled with around 360,000 litres of water initially, or the equivalent of a 25 metre local swimming pool. This water will remain in the system for the lifespan of the data centre, typically a minimum of 15 years.

Committed to innovation

It should also be recognised that the data centre industry has long been committed to ensuring sustainability and efficiency, with providers working hard to use resources including power and water responsibly. In response to these challenges, data centre operators are embracing innovation as a cornerstone of their sustainability efforts. Indeed, companies in the sector are continually looking to innovative sustainability strategies that include ‘green’ renewable sources of power, rainwater harvesting, zero water cooling systems, recycling, waste management and much more.

A good example of this in practice is the strategic re-evaluation of cooling equipment. By altering the point in the cooling cycle at which water is introduced, operators can make substantial reductions in water consumption. Implementing this approach, along with other efficiency initiatives can save up to 55% of water consumption, and reduce the use of associated consumables such as water filters and associated maintenance. These kinds of innovative approaches help to ensure that water usage is minimised precisely when and where it matters most – during periods of the highest outside temperatures and in areas subject to water shortage.

In a significant shift towards sustainability, an increasing number of data centre operators are opting to only use renewable energy sources to power their facilities. This transition not only serves to substantially reduce their carbon footprint, mitigating the environmental impact of their operations, but it also reaffirms their commitment to sustainable practices.

Leading by example

By managing the point in the cooling cycle at which water is introduced, VIRTUS has achieved significant reductions in water usage; for example, at our LONDON2 data centre in Hayes, London.

In the UK, where ambient conditions provide the ideal backdrop, VIRTUS harnessed adiabatic cooling technology to cool the data halls efficiently. Leveraging the day/night cycle, free cooling was implemented to maintain the desired temperature within the facility. LONDON2 is also located above a natural aquifer, enabling the use of water that is not drawn from the public supply.

VIRTUS also powers facilities with 100% renewable energy sources combined with Power Purchase Agreements (PPAs). These agreements allow operators to directly purchase energy from a renewable energy provider over a set period of time. Companies that do this help the renewable energy industry to finance new renewable projects, increasing the availability of renewable energy on the market and supporting the growth of sustainable energy generation.

Collaborative responsibility

Like so many major challenges, it’s important to remember that achieving sustainable data centres is a goal that cannot be reached in isolation. It necessitates industry-wide collaboration and knowledge-sharing to make real change possible. Data centre operators are already coming together to share best practices, techniques, and insights, with a particular focus on water-saving strategies. This collaborative approach augments the impact of sustainability initiatives and accelerates progress toward shared environmental objectives.

The collective responsibility of the data centre industry to reduce its environmental impact is an inspiring model that other sectors should learn from. As data centre operators unite in their commitment to sustainability, they set a powerful precedent for industries worldwide. They demonstrate that sustainability is not just a buzzword but a tangible goal that can be achieved through creativity, innovation, and concerted effort.


AI, ESG and early talent

Schneider Electric shapes the sustainable data centres of the future at Data Centre World London, 2024.

Today the demand for sustainability and the subsequent impact of AI are driving a significant step change across the data centre industry – causing businesses around the world to rethink their requirements for resilience and energy efficiency, and placing Environmental, Social, and Governance (ESG) at the heart of their decision-making. The energy crisis, for example, has brought about higher operating costs and supply uncertainty, and many companies have begun to rethink their deployment and energy strategies to ensure resilience of their mission-critical systems.

Data Centre World London is the industry’s largest gathering of professionals and end-users, and during its 2024 event, Schneider Electric will delve into the changing landscape of data centres – bringing together some of the sector’s most prominent leaders to discuss how we can transform the industry’s perception from ‘toxic dumps’ to ‘essential players in the fight against climate change’, and positioning data centres as the career path of choice for the next generation of talent.

Chaired by Kelly Becker, President, Schneider Electric UK and Ireland, the panel – which takes place on 6 March 2024 in the Design & Build and Physical Security Theatre – will feature industry figureheads including Leading Edge CXO, Lauren Ryder, Middlesex University student Madison Clements, World Wide Technology’s Chief Technology Adviser, David Locke, Pure Data Centres Group CEO, Dame Dawn Childs, and Schneider Electric’s Vice President for UK and Ireland’s Secure Power Division, Mark Yeeles, as they explore the industry’s ESG commitments, and how collectively, the industry can work together to spearhead action that enables sustainable change.

The impact of AI on data centre design

In just a short space of time, AI has had a profound effect on businesses and enterprises globally and remains one of the most prominent trends driving technological change in data centres. A recent report, for example, found that AI could consume as much electricity as the entire country of Ireland, while new research from JLL projects that hyperscale data centres will increase their rack density at a compound annual growth rate (CAGR) of 7.8%.

On 6 March at 11am, Schneider Electric’s Vice President for Data Centre and Innovation, Steven Carlini, will deliver a keynote speech exploring the challenges of deploying edge AI inference servers at-scale, and sharing key strategies to help solve issues surrounding grid availability, while enabling more sustainable, efficient operations. The session, which takes place in the Data Centre World Global Strategies: People, Environment & Innovation Theatre, will also share best practices to help businesses harness the potential of AI within data centres and discuss how operators can unlock new capabilities within new builds and legacy sites.

Matthew Baynes, Vice President of Design and Construction Partners at Schneider Electric Europe, will also join a panel alongside the Open Compute Project, exploring how technologies such as machine learning, liquid cooling and digital twins are enabling businesses to optimise their data centres ready to host AI and HPC clusters. This session takes place on 7 March from 11.55am to 12.30pm in the Energy Efficiency, Cost Management & DCIM Theatre.

The resilient data centres of the future

At Stand D730, Schneider Electric will bring together experts from its Secure Power, Digital Energy and Power Products, Services and Sustainability businesses to showcase its award-winning EcoStruxure for Data Centres portfolio – providing guided tours and virtual product demonstrations to illustrate how end-users and operators can make their infrastructure systems more sustainable, resilient and energy efficient.

Its dedicated critical power zone, for example, will feature its Galaxy VS range of uninterruptible power supplies (UPS) with Lithium-ion batteries, including its patented eConversion technology, which offers up to 99% efficiency without compromising availability. Also available are its Easy Modular UPS solutions, the company’s latest medium UPS offer, which provide an easy-to-use, easy-to-service and easy-to-maintain critical power protection solution with optimum reliability.

Additionally, the zone will showcase Schneider Electric’s APC NetShelter integrated rack systems and Smart-UPS Ultra – the industry’s first 3 kW and 5 kW 1U single-phase UPS, which are designed to deliver more power, flexibility, and intelligent monitoring in the smallest footprint. Smart-UPS Ultra enables IT and data centre professionals to address many of the challenges with deploying infrastructure in distributed IT and edge computing environments, enabling efficiency and industry-leading uptime for critical applications everywhere.

The Critical Power zone will also host the breadth of Schneider Electric’s complete Power Systems offerings, including its medium voltage (MV) SM AirSeT, an SF6-free modular MV switchboard – a green, digital, air-insulated switchgear solution which combines pure air and vacuum technology to replace greenhouse gases (GHGs) and alternative gases.

Further, at 11:30am, Ionut Farcas, Senior Vice President, Power Products, will provide visitors with an exclusive first look as Schneider Electric unveils a ground-breaking new product innovation, combining scalability, durability and connectivity to enhance the future of data centre performance, uptime and safety.

Software and digital services

2024 will mark the biggest step-change that European data centre owners and operators have seen in years, becoming legitimately accountable for their energy consumption via the Energy Efficiency Directive (EED). While the impact on UK businesses is still unknown, in the EU, operators with a power consumption of 500 kW or more will be required to report their energy performance data.

To achieve this, software and advanced data analytics are vital, and at Data Centre World London, Schneider Electric will demonstrate its award-winning data centre infrastructure management (DCIM) software capabilities, including its open and vendor-agnostic EcoStruxure IT platform. EcoStruxure IT Expert goes beyond incident prevention, providing advanced remote monitoring, wherever-you-go visibility, predictive maintenance, and data-driven recommendations to mitigate security and failure risks.

The software pods at Stand D730 will showcase how Schneider Electric’s EcoStruxure Data Centre Expert, on-premises DCIM software, and its Aveva and ETAP Digital Twins platforms, are helping to shape the data centres of the future – providing a comprehensive suite of solutions to help fast-track the simulation, design, monitoring, control, optimisation, and automation of power and cooling systems, while enabling real-time visibility from anywhere.

The buildings of the future

Schneider Electric’s next-generation building management software – EcoStruxure Building Operation – is the edge-control heart of the EcoStruxure Building system, and at Data Centre World, the company will showcase how it enables enterprises to seamlessly facilitate the secure exchange of data and analysis from third-party energy, lighting, HVAC, fire safety, security, and workplace management systems while leveraging digitisation and big data.

Finally, the stand will feature Schneider Electric’s Power Monitoring Expert (PME) software, which provides insight into electrical systems health and allows users to make informed decisions that improve performance and sustainability. With its open, scalable architecture, PME connects to smart devices across businesses’ electrical systems, power and energy meters, protective relays, and circuit breakers, and integrates with process control systems to convert data into action – delivering real-time power and equipment monitoring for greater efficiency and sustainability.

Join us at Stand D730 during Data Centre World London to learn how we can shape the sustainable, energy efficient, and resilient data centres of the future together.

Schneider Electric’s Vice President for UK and Ireland’s Secure Power Division Mark Yeeles will take part in a panel exploring the industry’s ESG commitments at Data Centre World on 6 March

Stopping the cloud from becoming a MONEY PIT

Simon Ritter, Deputy CTO at Azul, explains how optimising Java-based infrastructure can save on cloud overspend and improve efficiency.

With the cloud being the infrastructure of choice for business applications, a trend which is only anticipated to grow in this next year, CIOs are not just thinking about how they migrate to the cloud, but also how best to balance their application performance and user experience with spiralling costs.

According to our recent 2023 State of Java Survey and Report, released in October, 98% of global respondents say they use Java in software or infrastructure, and 83% state that 40% or more of their applications are Java-based. It is also clear from our report that Java plays a critical role in cloud cost optimisation – 90% of survey respondents are using Java in a cloud environment, whether it is public (48%), private (47%), or hybrid (40%).

However, businesses are not always making the best use of their cloud infrastructures: 41% of global respondents report using less than 60% of the public cloud capacity they pay for, and 69% are paying for unused cloud capacity. They are taking steps to lower their cloud costs, as the continued market volatility means that CIOs must be judicious in how they allocate precious resources.

So, the question becomes: how do organisations prevent the cloud from becoming a money pit? This is where the concept of FinOps is becoming increasingly relevant and sophisticated as IT teams seek to monitor and adjust their cloud usage to minimise overspend.

One area where we believe this approach can be applied is in examining the performance of Java-based applications in the cloud. Why? Well, Java is everywhere in the enterprise.

The power of faster Java

I recently spoke with William Fellows, Research Director at S&P Global Market Intelligence, about the topic of cloud optimisation. Fellows shared insights from S&P’s research, which showed that managing unanticipated demand and overprovisioning more resources than are required are the two biggest reasons for cloud overspend. By his estimates, Fellows suggests that spending on the public cloud was 56% over what was necessary in 2022. Fellows and I concluded that there are ways to reduce cloud waste for Java workloads.


How to stop costs spiralling out of control

The dilemma for CIOs is that overprovisioning is something of an insurance policy for ‘just in case’ demand peaks, and the infrastructure needs the bandwidth to cope. What if more efficient optimisation of Java-based applications could have a positive impact on the need for additional resources?

Outlined below are some recommendations to help organisations improve cloud usage by optimising their Java-based infrastructure:

Faster Java

According to a May 2023 IDC report, effective management and optimisation of both costs and performance of Java applications are critical to the success of digital businesses to retain customers and grow revenues. IDC estimates that 750 million new applications will be created by 2025, many of those in Java. Enterprises need an easy way to optimise the performance and licensing of their new and existing Java applications without the burden of rewriting or recompiling them (which can take years to do).

To address the challenges of controlling costs and providing better performing, more resilient applications, enterprises should look to a Java optimising platform which can speed up the start-up of Java applications, improve scaling and runtime performance, and reduce the number of cloud resources needed for the environment. This enables the most demanding Java enterprise applications to be optimised for performance and compute consumption at the runtime level.

Addressing Java warm-up

Because of the way Java Virtual Machine (JVM) based applications are designed, they rely on just-in-time (JIT) compilation. The time to compile all the frequently used sections of code is called warm-up time. Historically, applications have run for days, even weeks or months, so this warm-up time has been negligible compared to the time the application runs. However, in the era of microservices-based applications, which offer the ability to spin up new service instances when there is a sudden increase in workload, the time to compile can become a performance issue.

With JVM-based applications, when new service instances start, the warm-up time delays the instance’s ability to handle requests.

This leads DevOps teams to overprovision cloud systems to maintain real-time performance. They keep a pool of services running in reserve, fully warmed up and ready to use as demand changes. However, as mentioned earlier this can lead to major overspend in IT budget.

This issue with warm-up times can be addressed, alongside latency associated with garbage collection and throughput of transactions, to improve performance. For example, if you can achieve better transaction throughput you can reduce the size of the nodes in your cluster, the number of nodes, or both while still meeting Service Level Agreements (SLAs).
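
As a rough illustration of that trade-off, the sketch below shows how a throughput gain translates into fewer nodes while keeping the same headroom. The peak request rate, per-node capacity, utilisation target and 20% uplift are all assumed figures, not measurements from the report.

```python
import math

peak_requests_per_s = 50_000
per_node_capacity = 1_800        # requests/s a warmed-up instance sustains (assumed)
target_utilisation = 0.70        # headroom kept for spikes and redundancy (assumed)

def nodes_needed(per_node_rps: float) -> int:
    return math.ceil(peak_requests_per_s / (per_node_rps * target_utilisation))

baseline = nodes_needed(per_node_capacity)
improved = nodes_needed(per_node_capacity * 1.20)   # e.g. a 20% throughput gain
print(f"{baseline} nodes -> {improved} nodes ({baseline - improved} fewer)")
```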

Compilation-as-a-Service

It is also possible to create profiles of all frequently used code once it has been compiled and optimised. There are tools that record a profile including all the information about a performed compilation, so that when the application or service starts again, the JVM uses the profile to ensure all code is compiled and ready to go before the application code starts to execute.

Taking advantage of the cloud’s distributed model, it is possible to take the compilation part of the JVM and make it into a centralised service. This offloads the work from the individual microservices, reducing the size of the instances that may need to be created for individual services as they do not have to do their own compilation. It is also possible to have a memory so that when code starts up it is not necessary to compile the code each time. Instead, it can be cached to make it much more efficient.
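
At a conceptual level (this is only an illustrative sketch, not a description of Azul’s or any vendor’s implementation), such a centralised service behaves like a shared cache of optimised code keyed by the code and its recorded profile, so only the first requester pays the compilation cost.

```python
import hashlib

class CompilationService:
    """Toy model of a centralised compile cache shared by many service instances."""

    def __init__(self):
        self._cache: dict[str, bytes] = {}

    def _key(self, bytecode: bytes, profile: bytes) -> str:
        return hashlib.sha256(bytecode + b"|" + profile).hexdigest()

    def compile(self, bytecode: bytes, profile: bytes) -> bytes:
        key = self._key(bytecode, profile)
        if key not in self._cache:              # only the first requester compiles
            self._cache[key] = self._optimise(bytecode, profile)
        return self._cache[key]                 # later instances get cached code

    def _optimise(self, bytecode: bytes, profile: bytes) -> bytes:
        return b"optimised:" + bytecode         # stand-in for a real optimising compiler

service = CompilationService()
ready_code = service.compile(b"orderService.submit", b"hot-path-profile-v1")
```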

Squeezing more efficiencies out of Java

Another aspect of optimising JVM-based applications is understanding that Java is hardware and software agnostic. That means customers are not constrained to using one type of processor. For example, Azul’s technology works with Graviton processors, which offer a 40% price performance improvement over fifth generation Intel chips and deliver a 72% drop in power consumption.

We are also seeing an increase in more specialised cloud computing instances for compute-intensive tasks in fields like AI and machine learning. It is possible to optimise the JVM for these instances by using the JIT Compiler to make use of the micro-architecture instructions that are available. For example, vector processing is used to deliver higher performance for numerical-intensive applications.

Optimising your Java-based Infrastructure

Cloud adoption is moving into a new phase. Its accessibility encourages greater use, and the latest generation of tools and innovations means its potential to help organisations become more agile and responsive to customers is more attractive.

CIOs, though, know they do not have unfettered licence to adopt cloud-based applications. They must also continue to demonstrate prudence against the backdrop of market volatility.

Java is tuneable – and should be used to build systems with revenue and profit in mind. Cost optimisation must be measurable and tied to business impact. Programming languages like Java provide profiling tools to analyse code performance, and while these require setup and expertise, they enable granular analyses that can lead to changes that shave milliseconds. What may seem like small optimisations accumulate into large savings at scale.

With Java so widespread in most enterprise IT infrastructures, it presents a significant opportunity to demonstrate how IT can both drive large-scale efficiencies and help organisations to maximise the value of their cloud investments.


The latent power of air

FläktGroup reflects on the important role air still plays in the evolving landscape of data centre cooling.

What do the ubiquity of virtual reality (VR) in everyday life, blockchain beyond cryptocurrency, augmented reality (AR) in retail, 3D printing in homes, and edge computing all have in common? The answer: our experience of these technologies has not matched the surrounding hype.

That is not to say these technologies and their uses are not still evolving. Of course they are, and we would expect to see their impact still unfolding in significant ways in the future. At this moment in time, we should also recognise there is a gap between the proposed technology and need in the real world. The same can be said about the proposed non-air related cooling systems in data centres.

The versatility of air cooling

We know that data centre loads are changing significantly. Two of the key drivers of this change are artificial intelligence (AI) and machine learning (ML). There is a widespread belief that the demanding processing power of these applications unequivocally necessitates specialised and unconventional cooling solutions, such as direct-to-chip and liquid immersion. However, this assumption, while relevant for very specific GPU-based high-performance computing tasks, does not universally apply. In reality, the vast majority of current AI/ML applications do not require such extreme measures.

Air, or more accurately the movement of air, is so versatile that, if (a small word that embodies big engineering) managed properly, it is more than capable of taking on the evolving cooling needs of modern data centres today and in the future. It is true that these data centres now have, and will continue to have, the diverse cooling demands of AI and ML applications. However, these tasks can easily be handled with advanced air cooling systems and methodologies. The notion that superior performance is exclusive to liquid cooling solutions in data centres is incorrect.

The overarching strategy in data centre management has always been to distribute workloads to alleviate hot spots and ensure uniform cooling at the desired set-point. This strategy has always been supported by air-based systems. Distributed workloads have become, and will continue to become, more complex with AI/ML, but by thoughtful design of the data centre halls relative to anticipated workloads, facilities can maintain optimal performance without resorting to specialised liquid cooling systems.

Understanding the challenges

Success in this area relies on a deep understanding of the relationship between demand, heat and air movement. Strong global cooling solution providers, such as FläktGroup, take a holistic approach to the effective management of this three-way dynamic. By working closely with data providers/owners and data centre designers, it becomes clear that taking advantage of the versatility of air through more innovative engineering will continue to meet the challenges of load demand complexities in data centres. These systems, marked by intelligent design and exceptional control mechanisms, will continue to handle substantial heat loads by optimising airflow and improving heat transfer efficiency.

Leonardo da Vinci once remarked “simplicity is the ultimate sophistication”. We believe it takes straightforward solutions to address complex problems, be they mechanical, financial, environmental, maintenance, or sustainability. It’s the power of simplicity that prevails.

That being said, no one knows what data centres will look like in 2040. Societal and business needs are evolving constantly and this will naturally be mirrored within the data centre environment. Therefore solutions and engineering will also evolve, so a diverse toolset will always be required to tackle the future. This article does not necessarily favour air cooling, nor does it fully commit to liquid cooling; instead, it advocates for the right tool for the right application. The question is: what is the pragmatic tool to use today that will continue to satisfy the needs of tomorrow?

Meeting the demands of the future

As a provider of cooling solutions in data centres, FläktGroup will continue to work with the industry and develop the right solutions – which includes non-air based technologies – at the right time for the right purpose. We know that rack densities are consistently increasing. However, it is clear the increase is gradual rather than in large steps, with the vast majority of racks within data centres still operating below 30 kW, which can be comfortably managed by air movement.
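
As a rough sense-check of that claim, the back-of-envelope calculation below estimates the airflow needed to remove 30 kW with air alone, using Q = P / (rho x cp x deltaT). The 12°C temperature rise across the rack is an assumed figure, not one from the article.

```python
# Illustrative only: airflow required to carry away 30 kW of heat.
power_w = 30_000
rho = 1.2         # kg/m^3, approximate density of air
cp = 1005         # J/(kg*K), specific heat capacity of air
delta_t = 12      # K, assumed front-to-back temperature rise

airflow_m3_s = power_w / (rho * cp * delta_t)
print(f"~{airflow_m3_s:.1f} m^3/s (~{airflow_m3_s * 3600:.0f} m^3/h) of air")
```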

As we move to a more mixed environment within the data centre white space, hybrid solutions will become more prevalent and liquid will begin to support air based systems, typically for over 30 kW rack densities. This is already happening with coolant distribution units (CDU) supporting passive/active rear door cooling, as well as row-based cooling within the aisle containment. These systems form an important part of FläktGroup’s cooling solutions portfolio.

FläktGroup’s cooling technologies serve as a robust ally in the era of AI and ML. The latent power of air remains, for now at least, a stalwart guardian in the complex and evolving landscape of data centre cooling. Whether managing standard server applications or the dynamic requirements of AI-driven processes, FläktGroup is dedicated to innovation, ensuring that cooling solutions, whatever the methodology, remain efficient, effective, and environmentally responsible as the digital world, underpinned by AI and ML, transforms our physical world.


ARE YOU READY for new regulations?

Jon Brooks at QiO Technologies explores whether the sector is prepared for increased sustainability scrutiny, and outlines some key steps for regulation compliance.

This is the year of global sustainability scrutiny for data centre owners and operators. The repercussions of not complying with new environmental standards will be significant, bringing the risk of heavy fines and the broader risk of not being investible. Businesses that plan ahead have a better chance of making the right choices to capture the data required to meet new requirements and report.

The reality is, however, that many organisations are not sufficiently prepared. Research has shown that only half of senior data centre professionals say they have suitable ways of measuring key sustainability metrics.

The impact of new regulations in Europe

Europe is leading the way with new regulations that impose new demands. The EU has already implemented comprehensive sustainability reporting with the Corporate Sustainability Reporting Directive (CSRD). Effective since January 2024, the CSRD mandates nuanced measurements covering nine areas of resource use across the whole IT stack. Previous measurements using Power Usage Effectiveness (PUE) as a proxy for data centre efficiency will no longer be enough. Beware the cost of failing to recognise this. The EU has emphasised the need for sanctions to be ‘effective, proportionate, and dissuasive’.

The EU’s Energy Efficiency Directive (EED) is also coming into force this year, with the first reporting deadline in May 2024. The EED introduces new reporting requirements for data centres above 500 kW, including environmental performance, energy consumption, power utilisation, temperature, cooling efficiency ratios, water usage, and the use of renewable energy. Data centres with energy input exceeding 1 MW must also recover their waste heat – or at least prove they can’t. This regulatory shift is expected to impact around 55,000 businesses in Europe and in supply chains beyond the continent.

Evolving regulations in the UK and US

The UK is not exempt from European sustainability directives. Businesses with securities listed on an EU-regulated market and those with a net turnover of over €150 million in the EU must comply with the CSRD. So must UK organisations, and others globally, that sit within the supply chains of EU businesses impacted by CSRD under Scope 3 emissions rules.

Additionally, the UK government intends to create UK Sustainability Disclosure Standards (UK SDS). This will cover corporate disclosures on the sustainability-related risks and opportunities that companies face. The UK government aims to create the SDS standards by July 2024.

Data centres in the UK must also consider the ongoing impact of the Task Force on Climate-Related Financial Disclosures (TCFD); the body created by the International Financial Stability Board to develop consistent climate-related financial risk disclosures for use by companies, banks and investors. Many jurisdictions already follow the TCFD recommendations, including Canada, the EU, Japan, New Zealand, Singapore and South Africa. The UK will mandate climate-risk disclosure in line with the TCFD starting in 2025.

In the US, federal regulation is being driven by the Securities and Exchange Commission (SEC). The SEC’s proposed rule will standardise the way organisations make climate-related disclosures and is expected to take effect in 2024. It will likely include disclosures on climate governance, climate-related impacts and Scope 1, 2 and 3 emissions.

Simultaneously, individual US states are creating their own rules. In 2023, California signed into law the Climate Corporate Data Accountability Act (SB 253) and the Greenhouse Gases: Climate-Related Financial Risk Act (SB 261). SB 253 requires large businesses to account for the carbon footprint of all of their servers, storage, networking, UPS, HVAC, and other equipment in the data centre. As in Europe, this is a step change for most owners and operators.

There is a patchwork of regulations in other territories around the world at different stages of development, notably in Singapore and Australia. These schemes are still voluntary, and do not adhere to the stricter requirements laid out in the EU. However, as history has taught us, it’s likely these and other countries around the world will eventually follow the European lead.

How to reorganise and prepare for new regulation

Complying with new requirements is not simple. Data centres are a system of systems. Each layer can have different control systems. Plus, different transport protocols are tied to environmental systems such as power and cooling, outside temperature, water usage and waste heat reuse.

Systems thinking is needed to bring people, processes, and technology together to improve sustainability performance and deliver on new reporting demands – particularly in relation to the IT stack. Many organisations are not set up either structurally or technically to collect the data that will enable them to meet new reporting requirements.

Four steps to help you achieve regulation compliance

1. Work out what applies to your organisation

Organisations must act swiftly to build a sustainability compliance roadmap that establishes which standards they need to report to. This will mean assigning someone specifically to the task – ideally a Head of Sustainability or other business lead. Such people must have the licence and support to put together and implement a plan of action to meet obligations.

2. Think about data collection for reporting

Accurate and relevant data will be critical for new sustainability reporting requirements. Organisations urgently need a data collection strategy. Also, devise a way of ensuring that the data is accurate and correct. Businesses cannot assume the data will be available to support reporting. This will be a significant job in its own right. The sooner organisations get to grips with the task the better.
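
In practice, a data collection strategy usually starts with agreeing a common record for each facility and reporting period. The sketch below shows one way to structure the kinds of metrics the EED reporting described earlier calls for; the field names, units and example values are illustrative assumptions, not a prescribed reporting schema.

```python
from dataclasses import dataclass

@dataclass
class FacilitySustainabilityRecord:
    site_id: str
    period: str                   # e.g. "2024-Q1"
    it_energy_kwh: float          # energy delivered to IT equipment
    total_energy_kwh: float       # whole-facility energy input
    water_m3: float               # water consumed in the period
    renewable_share: float        # 0.0-1.0 share of energy from renewables
    waste_heat_reused_kwh: float  # heat recovered and reused

    @property
    def pue(self) -> float:
        return self.total_energy_kwh / self.it_energy_kwh

record = FacilitySustainabilityRecord(
    site_id="DC-01", period="2024-Q1",
    it_energy_kwh=2_100_000, total_energy_kwh=2_800_000,
    water_m3=4_500, renewable_share=0.62, waste_heat_reused_kwh=150_000,
)
print(f"{record.site_id} PUE: {record.pue:.2f}")
```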

3. Define the resources and tools needed to fulfil the task

Organisations will need to assess whether they are set up technically to measure the right things within their IT infrastructure, and then identify, test and iterate the improvements needed. This task requires interrogating large amounts of data, modelling use cases, and then seeing how they perform in real life.

Even if organisations could hire and deploy big teams of data scientists, the scale of the task ahead of them is simply too big. Across estates of hundreds or many thousands of servers, interrogating vast amounts of data to predict, test and measure performance in different use case scenarios (and manually evaluating each server type and build) is a mammoth undertaking – certainly one beyond the current capability and time resources of internal teams.

This means data centres need to evaluate new technical products that will help them achieve their new goals. In particular, they need to investigate AI products designed specifically for data centres. These can produce real-time data on resource use (energy and water consumption) and all the other relevant factors required by the new standards, at multiple levels – with a particular focus on the IT stack.

It’s time to get to grips with the emerging ecosystem of sustainability and climate tech suppliers that play in this space, how they fit together, and how they will support compliance targets.

4. Restructure to drive accountability and purpose

Ultimately, uncovering the right information and delivering the efficiency improvements required to meet the regulatory challenges fall under the responsibility of four key roles: facilities, finance, sustainability, and IT. Most organisations will need to change the way they are structured to enable these functions to work together effectively with a clear mandate and targets.

For everyone, the mantra needs to be that sustainability is not part of the business plan: it is the business plan.


Cooling the compute-intensive future

Liam Jackson of Red Helix explains how adopting liquid cooling solutions can drive both cost and energy efficiencies.

In recent years, energy efficiency has emerged as a pivotal concern for businesses, driven not only by the need to reduce operating costs, but also by the imperative to meet tightening sustainability goals and uphold corporate reputations. This is particularly true for data centres, which currently consume 2.5% of the UK’s total electricity. As data usage continues to rise, so will the energy used by data centres, with estimates suggesting they will consume just under 6% of the UK’s electricity by 2030.

Interestingly, only 10% of this energy consumption is used for heavy computational work. The remainder powers the facility’s continuous operation, ensuring they are running 24/7 to satisfy the demand for instant access to information. A considerable portion of this, roughly 40% of the total energy usage, is spent on cooling equipment to avoid any detrimental overheating. Consequently, enhancing efficiency in this area can yield substantial benefits for data centre operators.

It is also worth noting that the EU Code of Conduct on Data Centre Energy Efficiency has been established in an effort to reduce the environmental, economic, and supply security impacts of data centre energy consumption. While this is a voluntary initiative, from 15 March 2024, any data centre with an IT power demand of 500 kW or above will be required to report on their energy usage. There are also several countries that have mandated their own targets based on this, as well as additional benefits through the EU Taxonomy for those who can demonstrate environmentally sustainable activities.

Though this is not currently something that the UK is part of, following the trend of other EU legislative actions, it is reasonable to expect a similar initiative to be produced in the near future.

This is where liquid cooling comes into the equation, serving as an energy-efficient, cost-saving alternative to traditional cooling methods, providing data centre operators – and any other company using an IT/ server room – with an effective way to reduce power consumption.

The problem with conventional air cooling

Typically, data centres have relied on HVAC systems to cool their equipment, using a cycle of refrigerant compression and decompression to create a thermal exchange mechanism. This cooling is a critical process, as the immense heat generated by servers and computer systems can quickly lead to hardware malfunctions if not adequately managed – as evidenced by the Google server outages during the 2022 heatwave.

The issue is that these HVAC systems aren’t very efficient. They depend on electric pumps to facilitate the compression process, which are constantly running, translating into substantial energy consumption.

Air cooling is also not particularly well targeted, as it is difficult to distribute cold air evenly across the data centre. Many data centre operators have addressed this through containment strategies, but reducing the amount of space that requires cooling by isolating different areas of the room does not totally fix the problem: energy usage remains high, smaller spaces mean less computing capacity, and the system remains highly susceptible to environmental factors.

Owing to these challenges, and the consistent rise in data consumption, industry-wide Power Usage Effectiveness (PUE) improvements have halted. The Uptime Institute’s most recent Annual Global Data Center survey reports an average PUE of 1.58, which is the same as it was in 2018.
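For reference, PUE is the ratio of total facility energy to the energy delivered to the IT equipment, so an average of 1.58 means that for every kilowatt-hour reaching the servers, roughly 0.58 kWh is spent on cooling, power distribution losses and other overhead. A quick back-of-the-envelope sketch (the 10 MW IT load is an assumed example, not a figure from this article):

```python
def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT facility overhead (cooling, UPS losses, etc.) implied by a given PUE."""
    return it_load_kw * (pue - 1.0)

it_load_kw = 10_000  # assumed 10 MW IT load, for illustration only
for pue in (1.58, 1.3, 1.1):
    print(f"PUE {pue}: {overhead_kw(it_load_kw, pue):,.0f} kW of non-IT overhead")
# PUE 1.58 -> 5,800 kW; PUE 1.3 -> 3,000 kW; PUE 1.1 -> 1,000 kW
```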

As we pivot towards an era dominated by compute-intensive technologies, such as AI, the demand on data centres will only intensify. They will need more processing power, leading to increased heat generation and, as a result, a requirement for more efficient and effective cooling solutions that can reduce operational costs and help to mitigate the environmental concerns surrounding high energy usage.

How liquid cooling can help

Liquid cooling leverages the higher thermal transfer properties of water to cool equipment. Water has a higher heat capacity than air, meaning it can absorb more heat before its own temperature increases, making it a more effective medium for the job.
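To put rough numbers on that claim, using standard textbook values rather than figures from this article: water’s specific heat capacity is about 4,180 J/kg·K versus roughly 1,005 J/kg·K for air, and water is around 800 times denser, so a given volume of water absorbs on the order of a few thousand times more heat for the same temperature rise.

```python
# Specific heat capacity (J per kg per K) and density (kg per m^3),
# standard textbook values at roughly room temperature.
C_WATER, RHO_WATER = 4180.0, 1000.0
C_AIR,   RHO_AIR   = 1005.0, 1.2

def heat_absorbed_per_m3(c: float, rho: float, delta_t: float) -> float:
    """Heat (in joules) absorbed by one cubic metre of fluid for a delta_t rise."""
    return c * rho * delta_t

delta_t = 10.0  # example 10 degC rise across the heat exchanger
water_j = heat_absorbed_per_m3(C_WATER, RHO_WATER, delta_t)
air_j = heat_absorbed_per_m3(C_AIR, RHO_AIR, delta_t)
print(f"Water: {water_j/1e6:.1f} MJ/m3, air: {air_j/1e3:.1f} kJ/m3, ratio ~{water_j/air_j:,.0f}x")
# Water: 41.8 MJ/m3, air: 12.1 kJ/m3, ratio ~3,466x
```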

This process involves cool water being piped in directly through a heat exchanger behind or adjacent to the IT equipment to draw away the heat. Then, once warmed, the liquid is circulated out to a cooling device. This doesn’t necessarily need to be a device that draws large amounts of power, but instead can be a multitude of small heat exchangers, such as cooling towers or fans, or can even be achieved by running the pipes through a nearby body of water. After it has been cooled, the water is once again piped through, and the cooled air from the heat exchanger can be returned to the data centre to help maintain an ambient air temperature.

While fans can be used to help cool the liquid, in many cases this isn’t necessary – particularly in the Northern Hemisphere. The ideal temperature for server rooms is between 18 and 27°C, which means that, for a significant part of the year, the outside air is at a sufficient temperature. In the UK, for example, average temperatures over the past 20 years haven’t risen above the 27°C limit – removing the need to cool the liquid almost entirely.

Reaping the benefits

There are many reasons for making the switch to liquid cooling. Firstly, as the process is more efficient, it drastically reduces the energy used to cool equipment. This results in significant cost savings and a reduced environmental impact, helping data centres and other IT-intensive businesses align with eco-friendly practices and manage operational expenditure.

Liquid cooling systems are also far more compact than their HVAC counterparts, freeing up valuable space that can be repurposed for additional servers or storage. This provides both a logistical and a strategic benefit, enhancing the overall capacity and scalability of data centre operations.

Precision in temperature control is another area of improvement. By directly targeting specific areas, liquid cooling ensures that each component operates within its ideal temperature range – extending the service life and improving the performance of equipment.

There is also the potential to integrate the heat exchange with nearby facilities. The warm water from the cooling system can be channelled to heat adjacent buildings, or even used to heat swimming pools – addressing the challenge of heat disposal while contributing positively to the surrounding community.

A cooling solution for the compute-intensive future

The demand for energy-efficient solutions in data centres is rising, and liquid cooling marks a significant leap towards addressing the dual challenge of escalating data demands and tighter ESG commitments.

Not only does liquid cooling facilitate the operation of more equipment, as well as leaving valuable floor space for additional racks, but it can also greatly reduce energy usage and costs. In doing so, it provides data centre/colocation operators, and any other companies that use air conditioning for server cabinet cooling, with an effective solution to keeping up with the burgeoning demands of compute-intensive technologies, while significantly reducing their carbon footprint.

Embracing liquid cooling represents more than just a practical choice for data centres; it is a vital step towards a more energy-conscious and sustainable future.


Powering your needs with Leoch Battery

Leoch Battery explains why its sealed lead acid batteries can provide the reliable back-up power necessary for the high-rate power requirements of the most demanding UPS applications.

There are very few modern businesses that can function without electricity. A power outage can rock even the most successful of firms, halting production in factories and pausing service in call centres.

It is vital to have a power supply that is reliable and can respond quickly to keep business ticking over.

Common reasons for power loss include natural phenomena such as storms, lightning strikes, tornadoes, flooding, ice or snow. Car accidents, utility problems, and fallen trees can also create unnecessary problems for businesses.

Businesses need a guarantee that the power supply won’t be compromised at any time, which means relying on a UPS (uninterruptible power supply) to provide a continuous supply of power to the server – whatever is going on elsewhere.

In fact, any company with a server would benefit from having a UPS system as a back-up plan, should something happen that is outside of their control.

They are particularly useful in medical and military settings where an interruption to the power supply could have catastrophic consequences.

Leoch Battery UK offers a wide range of sealed lead acid batteries specifically designed for reliable back-up power and ideally suited to satisfy the high-rate power requirements of the most demanding UPS applications. Batteries that can operate dependably and consistently to power your needs, whatever they might be.

Leoch Pure Lead Xtreme Rate batteries, the renowned PLX series, are AGM Valve-Regulated Lead-Acid front terminal batteries that have been optimised for extreme high-rate applications. Engineered using Pure Lead, PLX batteries offer a very long-life design under extreme high-rate demands and fast recharge capabilities.

The PLX series has an impressively wide operating temperature range of -40°C to +65°C and, despite this, still has a long shelf life of up to 24 months. It also has a 15-year design life.


An ABS UL94-HB case (UL94-V0 optional) is used for increased container strength, and the product has a unique anti-explosion one-way vent valve design to minimise water loss and increase safety.

The series also offers a variety of Front Terminal and Top Terminal products, which ensure improved compatibility for replacement projects and have a very long design life, classified according to Eurobat.

Togus Tsang, Network Power Sales Manager for Leoch Battery UK Ltd, says, “The AGM battery continues to be hugely relevant in the modern world due to its experience and reliability. It can be depended upon to provide a continuous power supply in a range of temperatures and has an incomparable strength and safety.

“Providing peace of mind for business-critical systems and healthcare applications when continued power is a necessity, the Leoch UPS ranges are the second line of defence when primary power source failure occurs.

“Leoch Battery UK also offers an online Battery Calculator to find the right battery for your system. The right UPS system can be a gamechanger for your business, protecting your infrastructure and data.

“Although there is little you can do in the form of prevention when it comes to power, with a UPS system, you’re putting a measure in place to stop the situation from having a hugely detrimental effect on your business.”

For more information about the Leoch Battery UK UPS products, visit www.leochbattery.co.uk/markets/network-power/ups/


Finding a new way to better energy usage

Louis McGarry, Sales & Marketing Director at Centiel UK, explores why data centres need to better monitor and manage their energy consumption, and embrace renewables, to meet the power needs of the future.

According to the National Grid, the combined consumption of energy from the UK and US is set to increase by 50% by 2036 and to double by 2050.

Data centres significantly contribute to society’s overall power consumption. The largest hyperscale data centres in the world can currently draw more than 100 MW of power – roughly what is needed to supply a US city of 160,000 people. The fact is that we need to be smarter in the way we manage and use our finite resources and look at how to harness renewable energy better.

Energy costs are likely to remain high, but this is probably one of the best things to happen to drive the sustainability agenda forward if we have any chance of even getting close to the United Nations’ Sustainable Development Goals (SDGs) by 2030, a date which is now frighteningly close.

In short, data centres need to consider efficient running strategies like never before. Plugging into smart grids can offer one answer.

What is a smart grid?

A smart grid is infrastructure that provides two-way communication between the power provider and the data centre. Intelligent communication can improve the delivery and distribution of power to optimise energy efficiency.

For the data centre, this infrastructure includes all the equipment deployed, which needs to be managed and monitored effectively. This approach also offers the key to taking advantage of alternative and renewable energy sources through the smarter use of equipment and better control of the amount of energy burned.

Adapting existing infrastructure can often be more challenging than designing and building from scratch, as all equipment will need to be assessed, monitored and improved from the ground up. I’m not just talking about the UPS solution. Everything from the cable into the building to the load itself will all need to be considered, managed and monitored.

However, imagine the future: a data centre that discharges its energy storage banks every day and recharges with free renewable energy. Not only do the batteries support the load, but stored energy could be used to offset and run the whole data centre, ultimately achieving net zero.

This reality may not be so far away. We are currently working on a data centre project, which could be completed as early as the end of 2024, that will take 50% of its power from the grid and 50% from batteries, with the ultimate aim of recharging through renewable sources such as wind and solar. Peak shaving will ensure less energy is taken from the grid, while batteries simultaneously discharge during periods of high demand.

The data centre will be able to take advantage of tax credits for this approach and, over time, the cost savings on mains electricity will be immense. Although this project is being driven by cost, it is also green – and if rolled out across the entire facility, it will contribute significantly to the data centre’s net zero path.

The key point is that the UPS must be ready to accept alternative sources of energy. It is not a case of buying any box, then plug and play. Integration with systems in the data centre is necessary, but it is all possible.

In the decades to come, it may not just be wind and solar used to power data centres. While hydrogen fuel cells are not yet tried and tested in our industry, they may offer potential; we have already worked on at least one hydrogen-fuel-cell-powered UPS project. However, it was not without its challenges, and a better current alternative for many facilities is Lithium Iron Phosphate (LiFePO4) batteries.

Li-ion batteries, unlike traditional VRLA batteries, are capable of thousands of cycles, but they carry legitimate safety concerns. In thermal runaway they can release oxygen, which feeds combustion, and if they do catch fire, the fire is not easily put out. However, Lithium Iron Phosphate (LiFePO4) batteries do not release oxygen in this way and are considered as safe as VRLA batteries. They share the advantages of cycling but with significantly less risk. They also tolerate higher ambient temperatures, reducing or removing the need for cooling, and their typical useful working life of 15-20 years means they only need to be replaced once in a data centre’s 30-year design life.

Total cost of ownership

Despite rising electricity costs, one of the barriers to creating the alternative, green data centre is the upfront cost in new technology investment.

Here, total cost of ownership (TCO) needs to come into play, and savings must be calculated over the entire life of the facility to enable informed decisions to be made. We have been pleased to see sustainability consultants regularly coming onto project teams recently to help data centres assess these costs, as there are so many factors involved and the calculations can be complex. However, over the long term, savings on running costs normally outweigh the investment in capital expenditure within a few years. Momentum is starting to build in the right direction for a greener future.

Where business owners don’t have the luxury of building new from scratch, there are always areas for improvement within existing facilities. Here, monitoring is the most effective way to assess and reduce power consumption. Low-cost monitoring enables the load to be balanced through rightsizing and can assess the quality of a system’s performance to optimise efficiency, while preventative maintenance ensures equipment that is out of tolerance – and therefore less efficient – is repaired or replaced at the optimal time. Monitoring enables a report and plan to be created to identify where efficiency can be improved and action taken.

Data centres also need to take advantage of energy management modes within their UPS solutions. Education and training of onsite staff can help to ensure the system is optimised at all times. In a situation where the load can vary, UPS modules can be put into a ‘sleep mode’. While not switching power, their monitoring circuitry is fully operational, so they are instantaneously ready to switch power if needed. Because it is the switching of power that causes the greatest energy losses, system efficiency is significantly increased.

The alternative

Energy demand is increasing and this trend will continue with society’s growing reliance on technology, particularly with the evolution of artificial intelligence or machine learning. Yet, our finite resources are depleting, pushing energy costs ever higher and so we must look at the alternatives. As an industry which uses significant amounts of power, it is in the interests of data centres to become greener from both a sustainable and financial point of view. We need to monitor and manage the energy we have better and take action to harvest renewables.

These days, there are options and the most forward-thinking facilities are now looking at harnessing renewable energy to support the load and even power the data centre. However, creating a greener data centre requires careful research, planning and investment in the future.


AFL at DCW:

Join us to discuss your hyperscale data center plans for 2024 and beyond. Learn more about our innovative fiber connectivity solutions and explore how we can help you adapt and scale to meet the evolving needs of your growing network.

> Gain valuable insights.

> Unlock next-level Modular Connectivity.

> Overcome hyperscale data center challenges.

Visit us at stand D825
Scan to get your tickets now

AFL at DCW: Innovative fibre solutions

AFL will be exhibiting at Data Centre World, March 6-7, ExCel London.

With around 15,000 industry specialists and end-users gathering to network, share success stories, and witness over 100 hours of live talks (including a guest appearance from celebrity headliner Professor Brian Cox), DCW promises stand-out insights you won’t want to miss. With so many stands to visit, where should you begin?

We highly recommend making the time to visit Stand D825, where the knowledgeable AFL team is on hand to resolve queries on all things hyperscale. Our friendly data centre experts can help you uncover innovative fibre solutions to support your next project. The high-performance solutions we will cover include:

• AI networks

With decades of expertise in high-performance networking, we thoroughly understand the critical role networks play in enabling AI at scale. Our purpose-built, end-to-end solutions help resolve even the toughest challenges that may arise as your AI/ML workloads evolve in complexity. Whether you require high-throughput transport of petabytes across continents or real-time edge inference across millions of devices, our high-density cabling solutions and advanced cable routing platforms can help enable optimised network solutions at scale.

• White space

We provide tailored white space solutions, ensuring your network remains manageable, flexible, and scalable. From upgrading your existing data centre infrastructure to architecting and implementing new deployments, our white space solutions can help future-proof your project. Choose between fibre assemblies and modular housings to suit your individual network needs, and speak to us about network planning, installation, and testing in any size data centre. Discover how we can help you build efficiency into your data centre white space architecture.

• Data centre interconnect (DCI)

Our cutting-edge, scalable DCI solutions link multiple data centres to help you build integrated, shared-resource networks. With small-diameter, ultra-high fibre count cables, we fit more fibres into your existing duct space, preventing costly civil works. Choose between our connectorised or spliced solutions (each crafted using state-of-the-art technology, including our innovative Wrapping Tube Cable with SpiderWeb Ribbon) to ensure optimised network connectivity designed to help balance workloads and boost productivity.

Unlock next-level modular connectivity

AFL offers a range of tailored modular connectivity solutions designed to quickly launch new deployments or seamlessly enhance the performance of your existing fibre network. Whether you require high-density, tray-based, shallow-depth, or legacy application solutions, we have the answers you require to not only provide optimal performance but support the evolving needs of your data centre fibre network.

• U-series: Meet bandwidth requirements now and in the future with the fast, flexible U-Series fibre connectivity platform. The U-Series enables optimal network performance with high-capacity housings, easy-to-install cassettes, and high-performance assemblies.

• H-series: Designed with optimal colocation data centre performance in mind. Future-proof your colocation fibre network, speed up deployments, lower installation costs, and reduce network complexity, all while increasing revenue by leaving more space for customers.

• ASCEND: Introducing ASCEND, the high-speed, high-density, high-performance fibre connectivity platform for data centres. ASCEND housings support both incremental growth and full-scale deployments in an easy-to-use, scalable platform.

Understand and overcome hyperscale challenges

AFL recognises and deeply understands the complex and varied challenges facing modern hyperscalers. Where a one-minute network outage means over $100,000 in lost revenue (and damaged professional integrity), continuity and security become significant concerns. Likewise, access to advances in technology and dependable supply chains can impact your global and local availability. Whatever your next step, AFL is here to help you adapt and scale to the demands of your workloads. Visit Stand D825 to explore how we can assist you.


NETWORK CONNECTIVITY in the age of AI

Paul Gampe, Chief Technology Officer at Console Connect, explores the complex relationship between generative AI and network connectivity.

Artificial intelligence has become a game-changing technology, widely regarded as the most significant breakthrough since the advent of the internet. In recent months, a summit on AI safety attended by the British Prime Minister, Rishi Sunak, and Elon Musk discussed how this technology will transform our workplaces.

A joint investment of over £25 billion was announced by tech giants Google, Amazon and Microsoft to expand their cloud capacity for generative AI – spending that is expected to accelerate in 2024. In 2023, Collins Dictionary chose AI as its word of the year. As businesses confront the enormous potential of deploying generative AI tools across their operations, it is becoming all too apparent that many companies have been forced to rethink their network management systems.

The reason for this is that generative AI requires vast amounts of computing power and data-crunching to perform effectively, and it demands fast and flexible network solutions that can keep up with the rapid advancements of its technological capabilities.

Harnessing the capabilities of generative AI

Numerous industries have already capitalised greatly by adopting generative AI technology within their network management systems. And the results have been remarkable.

For instance, the banking sector has successfully implemented generative AI to create not just responsive chatbots but also intelligent virtual assistants. These have streamlined customer-facing operations by providing seamless interactions with personalised and conversational responses.

Additionally, these virtual assistants pore over masses of banking data on an hourly basis and then execute automated tasks such as fund transfers, monthly payments, financial history tracking, and even new account onboarding – all without human assistance.

Similarly, in the e-commerce sector, large online retail firms have experienced a radical shift in the management of their product description generation, whereby the exhaustive process of manual inputting has been replaced by dynamic automation. Their generative AI tools can efficiently handle vast amounts of customer data in real-time. This not only facilitates the generation of informative content but also enables personalised recommendations and predictive analysis of future buying habits, leveraging user preferences and search behaviour patterns.

Generative AI tools have also been used as predictive trading algorithms in the global stock market. These algorithms can process massive amounts of data to forecast investment decisions with great precision. This allows portfolio managers to predict market shifts before they occur and make informed investment decisions accordingly.


Analysing your network infrastructure

Generative AI offers incredible benefits to businesses, benefits which will remain beyond reach if your business network infrastructure is unable to store, process, and retrieve the massive data sets which generative AI feeds on.

So, before you consider which generative AI tool or platform to use, you should first conduct a thorough analysis of your current network ecosystem and determine whether it has the capabilities to ensure seamless and augmented workflows. For instance, does your network have the edge computing capabilities to process IoT data and deliver real-time, quality insights? Can you watch and audit how your generative AI tools are interacting with your network?

Another essential factor is making sure your network is secure. Most forward-thinking businesses are now operating in a multi-cloud environment, where they are pulling in data from a variety of public and private clouds. This certainly improves efficiency, but it can also lead to disaster if the underlying network is insecure. Your data integrity could easily be compromised without a private, safe, and dedicated connection between your various data pools and the generative AI models processing that data.

A third factor is the rapid progression of generative AI technology. Your network needs to be quick and agile to support future AI advancements, which means having the flexibility to scale or upgrade on demand – not only to support your business growth, but also to withstand the onslaught of cyberattacks that will inevitably follow as generative AI evolves.

Let’s not forget the human factor. Incorporating generative AI successfully involves training your IT teams to utilise its full potential and deploy it efficiently throughout your network. It is essential to provide your team with regular upskilling to help them keep up with the dynamic nature of generative AI. This will ensure that your network capabilities remain equally adaptable and dynamic over time.

A simple solution

It’s not uncommon for business leaders to seek a straightforward solution that simplifies their understanding of the intricate and ever-changing nature of generative AI and its impact on their network systems. Luckily, such a solution already exists – and it’s called Network as a Service (NaaS).

By outsourcing the operating, maintenance, and upgrading of an entire network to a trusted service provider, businesses can tailor a network infrastructure specific to their connectivity needs and pay for it with a subscription-based or flexible consumption model.

For example, an online global retailer will likely have different network connectivity requirements than a manufacturer that needs to connect its facilities to its headquarters. While these requirements differ from business to business, it’s important to choose a NaaS provider that has the power and agility to adjust to generative AI workloads.

Firstly, if you are going to be accessing different data sources, then your NaaS platform must offer fast and efficient private network connectivity on a global scale. You should also look for one that is interconnected with all the major hyperscale cloud providers and can extend your reach to hundreds of cloud on-ramps worldwide. Better still, a platform that owns the underlying network infrastructure can deliver an assured quality of service.

Furthermore, your NaaS provider must meet the security and data compliance regulations that were implemented across geographies in 2023 – examples include the EU’s AI Act, the UAE’s Data Protection Law, and India’s Digital Personal Data Protection Act – and similar policies and further regulations will surely follow in 2024.

In addition, your NaaS provider must have the flexibility and agility to deliver fully automated switching and routing on demand. This will allow you to access unlimited data pools and easily integrate them into your generative AI models for high-performance data processing. This flexibility ensures that you only pay for what you use, reducing unnecessary costs.

One of the major benefits of NaaS is the ability to provide easy network management and maintenance. By choosing a NaaS provider that offers a comprehensive portal for all your needs, which is managed 24/7 by a team of skilled engineers, you can be assured that network issues will be resolved quickly and effectively. Issues such as transit delay, packet loss, jitter and lagging will be easily resolved, allowing business owners to focus on deploying generative AI models safely across their infrastructure and grow their business in a hassle-free manner.


Powering a low-carbon AI revolution

Alan Greenshields of ESS Inc explains how long-duration energy storage (LDES) could be key in enabling the AI revolution.

Last year was a monumental year for AI. However, less remarked upon is the significant energy that will be required to enable this new technology. Deloitte recently found that over a quarter of UK adults had already used generative AI technologies.

AI technologies are demonstrating astonishing performance, but with astonishing energy needs which are often overlooked. While the human brain is incredibly energy efficient with a consumption of 20 W that is akin to a light bulb, AI chips are extremely power hungry in comparison.

Gartner estimates that over 80% of enterprises will be using AI consistently by 2026, and AI’s demand for energy could lead to a world where data processing consumes over 20% of global energy supply. For context, the most popular AI chip currently, Nvidia’s A100, requires around 400 W of power per instance, and as the technology advances, so do the power requirements. The top-performing Nvidia H100 AI chip consumes 700 W of power – equivalent to a standard microwave oven – and tens of thousands of these will be in use in the UK in the next few years.
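To illustrate the scale those per-chip figures imply – the fleet size and PUE below are assumptions chosen purely for the arithmetic, not numbers from this article:

```python
chip_power_w = 700    # Nvidia H100 board power, as cited above
fleet_size = 50_000   # assumed number of accelerators, for illustration
pue = 1.5             # assumed facility overhead factor

it_load_mw = chip_power_w * fleet_size / 1e6
facility_mw = it_load_mw * pue
annual_gwh = facility_mw * 8760 / 1000  # 8,760 hours in a year
print(f"{fleet_size:,} chips -> {it_load_mw:.0f} MW of IT load, "
      f"~{facility_mw:.0f} MW at the meter, ~{annual_gwh:.0f} GWh per year")
# 50,000 chips -> 35 MW of IT load, ~53 MW at the meter, ~460 GWh per year
```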

Amidst the climate crisis, the crucial debate is how to sustainably meet the energy needs of rapidly growing AI data centres with carbonfree power.

Addressing energy needs

AI’s forward march is unlikely to slow. To give more context on coming demand, according to the International Energy Agency (IEA), today, data centres alone account for over 1% of global electricity usage, with an additional 1.14% used in data transmission.

The data centre industry is expected to grow at a compound annual growth rate of 4.95%, with some estimates suggesting that annual electricity demand for information and communication technology could reach as much as 8,000 TWh by 2030, equating to 20.9% of projected global electricity demand.

In addition to direct energy consumption, the cooling systems required to enable server operations are responsible for approximately 35% of a data centre’s carbon emissions. The power used to support these cooling systems, alongside the day-to-day running of data centres, has meant they now contribute more to global carbon dioxide levels than the aviation industry.

Effects of growing data centre deployment

The increased deployment of data centres to meet AI demand has major potential implications for the climate.

Fortunately, major data centre providers are already committing to carbon neutrality and clean energy. For example, Google Cloud has committed to carbon-free operations by 2030, while Digital Realty has a global renewable coverage of 62% across its data centres.

Others will need to follow suit if an AI-driven climate crisis is to be avoided. New clean energy technologies are now available that allow AI data centres to be powered by clean wind and solar energy 24/7, eliminating the potential carbon impacts of this sector while providing resilient, reliable power.

Energy storage as the stabiliser

Solar and wind are not only the cleanest, but are now the cheapest forms of new generation capacity. However, their inherent intermittency poses challenges for facilities such as data centres that require 24/7 operation.

New long-duration energy storage (LDES) technologies have the capacity to store up to 12 hours of electricity to be dispatched when needed to provide consistent, reliable power to data centre applications even when the sun isn’t shining or wind isn’t blowing.

Mass deployment of LDES could ensure consistent supply of green energy to power data centres and reduce the impact the increase of electricity consumption will have on the grid.
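For a rough sense of scale – the 100 MW facility and round-trip efficiency below are assumed examples, and real sizing would depend on load profiles and the specific storage technology – bridging 12 hours of demand implies storage in the gigawatt-hour range:

```python
facility_load_mw = 100       # assumed constant load for a large hyperscale site
storage_hours = 12           # duration cited for LDES above
round_trip_efficiency = 0.8  # assumed, varies by storage technology

energy_delivered_mwh = facility_load_mw * storage_hours
nameplate_mwh = energy_delivered_mwh / round_trip_efficiency
print(f"{energy_delivered_mwh:,.0f} MWh delivered -> ~{nameplate_mwh:,.0f} MWh "
      f"of storage at {round_trip_efficiency:.0%} round-trip efficiency")
# 1,200 MWh delivered -> ~1,500 MWh of storage at 80% round-trip efficiency
```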

That new technologies exist to solve the problem is the good news. However, existing regulations and bureaucracy are impeding the rapid deployment of the clean energy technologies needed to ensure that clean energy will power the AI demands of tomorrow. For example, the UK grid is facing delays of up to a 15-year wait for grid connectivity, presenting significant headwinds to new projects and technologies.


Need for new grid architectures

While we wait for new generation to come online, data centre energy demand is already having an impact on legacy energy systems, with global implications. Last year, the boss of a Norwegian arms company blamed the storage of cat videos for having an adverse effect on his organisation’s production of munitions for Ukraine: the TikTok data centre located near the munitions factory was drawing vast quantities of electricity from the grid, directly impacting the factory’s production.

While grid-scale projects may take years to come online, another alternative is taking shape. The electricity grid, which historically has relied upon large central generation sources and the constant balancing of supply and demand, has changed little since its inauguration over 100 years ago. With new energy storage technologies come options to design new architectures – such as microgrids – that can balance the unpredictability of renewables with the stability needs of AI.

Microgrids powering data centres would buffer local grids from increased demand while ensuring reliable, resilient power regardless of surrounding grid conditions.

The silver lining to growing energy demand for AI data centres is their potential to drive rapid innovation and scale for new clean energy technologies. By driving demand for microgrids with LDES, data centres could quickly change from a climate liability into a climate opportunity, accelerating not only the advancement of computing technology, but also the deployment of clean energy.

DATA CENTRE REVIEW 2024: Join us for an exclusive gala dinner held at Christ Church near Spitalfields Market on 16 May 2024, where the winners of each category will be announced. Book your tickets or table at awards.electricalreview.co.uk

Where is the power?

John Booth, Chair of the DCA Energy Efficiency & Standards Steering Committee and MD of Carbon3IT Ltd, discusses the discrepancies of data centre energy consumption reporting, and the future power needs of our rapidly expanding sector.

How much energy did the UK commercial data centre sector use in 2021 – 2023?

I’ll answer that later.

We’re all seeing, on a fairly regular basis, announcements about new large 100 MW plus data centres being built in the UK; Slough, Kent, Havering, Newport, and Manchester are some examples from 2023 alone.

We know from government announcements that the UK is set to be an ‘AI powerhouse’, with millions in investment promised. But there is a fundamental problem: access to power.

For a recent presentation, I took the liberty of checking the National Grid: Live website. It details the UK’s generation and its sources, the demand, the interconnectors (power we buy from our European neighbours) and storage. I checked the data on 30 November 2023 at 5:40pm and on 1 December 2023 at 12:35pm, and looked at the site again whilst writing this article, on 3 January 2024 at 3:40pm.

There are a couple of items worthy of attention. First up the price: for 30 November, the price was £92/MWh, on 1 December, it was £172.51/MWh and on 3 January, it was £41.04/MWh. But what is more concerning to me is that on 30 November, 1.5 GW were drawn from the interconnectors, followed by 4.0 GW on 1 December and 6.9 GW on 3 January – and this was 14.4% of demand.

In essence, we do not have sufficient generation capacity in the UK right now and are relying on our European partners to keep the lights on.

So, we must have a plan to be self-sufficient in energy – don’t we?

Relying on interconnectors

Unfortunately, it appears not. The only new power stations in the pipeline are Hinkley Point C and Sizewell C. The former is not due to start generating until 2036, and the latter is still seeking planning permission, so perhaps by 2040. We will continue to draw energy from interconnectors for some time yet, and this is some of our energy spend going overseas instead of being invested in the UK.

You may also be aware of the National Grid’s ‘Great Grid Upgrade’ project. In their own words, “The Great Grid Upgrade is the largest overhaul of the electricity grid in generations. Our infrastructure projects across England and Wales are helping to connect more renewable energy to your homes and businesses.” The upgrade is reputed to be costing around £54 billion.

I fail to see how, given these twin problems, these proposed data centres can obtain sufficient power for their projected needs – if anyone can tell me, I’d appreciate it!

To answer the question posed at the start of this article – what was the energy consumption of the UK commercial data centre sector in the period 2021-2023? This data, by the way, comes from the fifth reporting period of the data centre Climate Change Agreement (CCA), under which commercial operators can obtain a reduction in the climate change levy in return for energy efficiency improvements based on PUE reductions. The scheme does not apply to enterprise data centres.

So, the CCA target period 5 indicates that the total CO2 was 4,293,041.38 tonnes, using a grid factor of 0.3405, and the energy consumption was 12.60 TWh, or around 4% of the UK total consumption. Remember that this is just for commercial data centres, so it excludes some telco, mobile phone, and enterprise data centres.
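Those two figures are internally consistent: multiplying the reported consumption by the stated grid factor reproduces the CO2 total almost exactly, as this quick check shows.

```python
energy_twh = 12.60
grid_factor_kg_per_kwh = 0.3405  # kg CO2 per kWh, the grid factor cited above

energy_kwh = energy_twh * 1e9
co2_tonnes = energy_kwh * grid_factor_kg_per_kwh / 1000
print(f"{energy_twh} TWh x {grid_factor_kg_per_kwh} kg/kWh = {co2_tonnes:,.0f} tonnes CO2")
# 12.60 TWh x 0.3405 kg/kWh = 4,290,300 tonnes CO2 - within 0.1% of the
# reported 4,293,041 tonnes, the gap reflecting rounding of the TWh figure.
```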

The actual total data centre estate energy consumption, adding ‘distributed IT’, could be far higher than anyone expected. We estimate it to be actually in the region of 40 TWh, or around 12% of total consumption.

The truth is that no one really knows what the true energy consumption is, because data centres can mean different things to different people.

Finding the truth

So how do we get a true and accurate number?

It depends on your definition of what a data centre is. I prefer the EU Code of Conduct for Data Centres (Energy Efficiency) (EUCOC) definition, which is “For the purposes of the Code of Conduct, the term ‘data centres’ includes all buildings, facilities and rooms which contain enterprise servers, server communication equipment, cooling equipment and power equipment, and provide some form of data service (e.g. large scale mission critical facilities all the way down to small server rooms located in office buildings).”

Now, over in the EU, the recently revised Energy Efficiency Directive has some elements that cover data centres, and their definition (although this may be temporary) is the same as the EUCOC but with a lower limit of 500 kW capacity. From 24 May 2024, all data centres will need to report total energy consumption, the amount (if any) derived from renewable energy sources, water use, and any waste heat reuse – although the method of reporting is still being discussed.

In addition, there is a whole load of ‘site-specific’ information that also needs to be reported; more information on that will become available once the directive is fully published.

So, once every data centre in the EU has published its energy and other data, the EC will have excellent visibility of exactly how much energy data centres above 500 kW capacity are using. My personal view is that the final number will be significant and will result in further measures being taken to reduce consumption.

To conclude, the UK is now experiencing the ‘energy gap’ that some commentators have been discussing for the past 15 years; plans to address the gap are mediocre, patchy and expensive.

Data centre energy consumption is woefully underreported, though the situation is improving in the EU; we wait to see whether the UK government will implement something similar.


Cooling in focus

24 APRIL 2024

VIRTUAL CONFERENCE

Exploring emerging challenges in data centre cooling

Critical Insight: Cooling in Focus will be a detailed exploration into how data centre cooling is evolving to meet new challenges, such as the emergence of AI, the increasing push for sustainability, and rising global temperatures.

This half-day virtual event will bring together experts and key industry players to debate, connect, and share innovations and ideas, providing a platform to help the industry navigate its way through a rapidly evolving environment.

Don’t miss this in-depth look at the present and future of data centre cooling, and join us on the 24th April 2024.

AGENDA

9:00 AM - 09:05 AM: Welcome and introduction

09:05 AM - 09:35 AM: Keynote

Data centre cooling: An era of change

An overview of the trends shaping data centre cooling, and what it means for the industry at large.

09:35 AM - 10:05 AM: Presentation

Cooling data centres sustainably

With the increasing push for carbon neutrality, how can we make cooling more sustainable?

10:05 AM - 10:20 AM: Break

10:20 AM - 10:50 AM: Presentation

Cooling from A to Z

A session exploring the pros and cons of the cooling systems and technologies currently available and their best application.

10:50 AM – 11:30 AM: Panel

Cooling flexibly

A panel discussion exploring how to approach cooling challenges that arise with edge and modular facilities.

11:30 AM - 12:00 PM: Presentation

Data centre cooling: best practices

A practical look at best practices to optimise cooling at legacy sites.

12:00 PM – 12:40 PM: Panel

Meeting the challenges of high-density compute

A session delving into the impact of trends like AI and HPC on data centre cooling and how they will shape the future of cooling.

12:40 PM – 1:10 PM: Fireside chat

The role of automation in data centre cooling

A session exploring how automation can optimise data centre cooling and reduce operational costs.

1:10 PM: End of event

Register at critical-insight.co.uk


Deploy your data centre with less risk using EcoStruxureTM Data Centre solutions.

EcoStruxure™ for Data Centre delivers efficiency, performance, and predictability.

• Rules-based designs accelerate the deployment of your micro, row, pod, or modular data centres

• Lifecycle services drive continuous performance

• Cloud-based management and services help maintain uptime and manage alarms

Discover how to optimise performance with our EcoStruxure Data Centre solution. Meet us at our stand D730 at Data Centre World London to find out more.

se.com/datacentre
©2022 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. 998_20645938