DCR Q2 2023


Colocation & Outsourcing – Moving from colo to hybrid
Green IT & Sustainability – Designing planet-friendly software
Edge Computing – Real-time analytics at the edge
Contents

News
04 • Editor’s Comment – It’s not easy being green.
06 • News – Latest news from the sector.

Features
10 • Edge Computing – StorMagic’s Bruce Kornfeld explores the definition of edge and shares some predictions of where the edge is headed over the next decade.
20 • Green IT & Sustainability – PCL Construction’s Danny Horton asks, could retrofitting legacy data centres help meet increasing demand while providing environmental benefits?
30 • Storage, Servers & Hardware – Tony Hollingsbee at Kingston Technology explains the role of SSDs in a sustainable data centre model.
32 • Colocation & Outsourcing – DE-CIX’s Ivo Ivanov looks at how data centre operators can create competitive advantages through interconnection services.

Regulars
36 • Industry Insight – Tate Cantrell, CTO at Verne Global, explains the importance of transparent reporting for data centre sustainability.
38 • Products – Innovations worth watching.

Editor’s Comment

It’s not easy being green

Hello and welcome back to another issue of DCR Magazine.

It’s taken a while to shake off the cold this year, but as the sun starts to make more frequent appearances, we’re reminded that summer is not too far away. That also means periods of potentially extreme weather – at least in terms of what we’re used to in the UK – are on the horizon, and naturally, thoughts turn once again to climate change.

Going green is an incredibly important issue – both for data centres and the world at large – but it’s also a nuanced one. Actually enacting change to become carbon neutral takes time, money and planning, and can’t be rushed or half-baked – but at the same time, the pressure is on to act now to stop irreversible damage. It’s something companies need to prioritise in all of their operations, and not just because customers and consumers are becoming more sustainably conscious. This isn’t a box ticking exercise.

It’s a question we’ve asked before at DCR and it’s one we’re still asking: are we making measurable changes to the way we operate in the data centre sector, or is it all just part of the sustainability spin machine?

We take a closer look at this in our Q2 issue – some of the ways things are changing for the greener, and what we could be doing better.

Also on the horizon are the ER & DCR Excellence Awards on 18th May. Set to take place at the beautiful Christ Church in Spitalfields, this event celebrates the best and the brightest of projects, products and people in the data centre and electrical industries. So if you want to take a look at the shortlist or find out how to buy tickets for the evening, head on over to awards.electricalreview.co.uk

As always, feel free to get in touch with me at kayleigh@datacentrereview.com, and come and join us on Twitter (@dcrmagazine) and on LinkedIn (Data Centre Review).



Kayleigh Hutchins



Jordan O’Brien



Rob Castles robc@sjpbusinessmedia.com


Abdul Hayes

+44 (0) 207 062 2534 abdulh@sjpbusinessmedia.com


Sunny Nehru

+44 (0) 207 062 2539 sunnyn@sjpbusinessmedia.com


Fidi Neophytou

+44 (0) 7741 911302 fidin@sjpbusinessmedia.com


Wayne Darroch


Paid subscription enquiries: subscriptions@electricalreview.co.uk

SJP Business Media Room 2.13, 27 Clements Lane London, EC4N 7AE

Subscription rates: UK £221 per year, Overseas £262

Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher’s discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register

Electrical Review is published by SJP Business Media, Room 2.13, 27 Clements Lane, London, EC4N 7AE. 0207 062 2526

Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher – SJP Business Media

ISSN 0013-4384 – All editorial contents © SJP Business Media

Kayleigh Hutchins, Editor

Follow us on Twitter @DCRmagazine and on LinkedIn (Data Centre Review)








Vantage forms investment consortium to expand EMEA platform

Vantage Data Centers has formed an investment consortium to support the company’s expansion of its EMEA platform.

The investment partnership has been formed with a group of investors led by funds affiliated with the investment management platform of DigitalBridge, MEAG and Infranity.

The transaction values the partnership, which is related to certain stabilised European assets, at roughly €2.5 billion, including Vantage’s stake.

Additional capital raised by the consortium will be used to support the continued growth of Vantage’s EMEA data centre platform.

The partnership is expected to be completed in the second quarter of 2023, subject to certain closing conditions.

UK gov to help parents and carers return to STEM careers

A government-funded programme has been unveiled to help parents and carers in the Midlands and the North of England return to STEM industries after a career break.

The free-of-charge STEM ReCharge initiative, launched and funded by the Government Equality Hub, will provide sector-specific return-to-work coaching.

Delivered by Women Returners and STEM Returners, the initiative will offer mentoring, job skills training and upskilling to prospective returners with tech or engineering experience.

The STEM ReCharge programme will address both practical and psychological barriers to returning to work, aiming to rebuild confidence, assist in writing back-to-work CVs and hone interview skills.

STEM ReCharge is being rolled out in the Midlands and the North of England after analysis by Women Returners and STEM Returners suggested these areas have fewer returner programmes than the south.

AWS announces switch to HVO fuel in Europe

Amazon Web Services (AWS) has said it will be making the switch to hydrogenated vegetable oil (HVO) from diesel to power the backup generators used in its European data centres.

According to a statement by the company, the transition to HVO began in January 2023, with its sites in Ireland and Sweden the first to make the switch.

HVO can be made from waste cooking oil, or vegetable, plant, and residue oils, and is a renewable and biodegradable fuel that can reduce greenhouse gas emissions by up to 90% over the fuel’s lifecycle when compared with diesel.

In order for AWS to switch all of its European sites to HVO, the company has said it is investing in the procurement of HVO that only comes from renewable sources and is working to develop a global supply chain of the fuel.

Neil Morris, Director of Infrastructure Operations, Northern Europe, at AWS said: “At AWS, we’re committed to and invested in sustainability because it’s a win all around – it’s good for the planet, for business, for our customers, and for our communities.

“Transitioning to HVO is just one of the many ways we’re improving the sustainability of our data centres, decarbonising our operations, and working towards Amazon’s companywide goal to meet net zero carbon by 2040, ten years ahead of the Paris Agreement.

“By making this commitment to using sustainably-sourced HVO at our data centres sites, we hope to pave the way for other businesses, and help establish a global supply chain that will accelerate change across Europe, working in collaboration with other organisations.”



Schneider Electric appoints new Secure Power VP for UK and Ireland

Schneider Electric has announced the appointment of Mark Yeeles as the new Vice President of its Secure Power division in the UK and Ireland.

Yeeles will succeed Marc Garner, who will now lead the Secure Power Division in Europe. He will be working with Schneider’s data centre and IT channel partners to address the challenges associated with data centre sustainability, efficiency, energy security and resilience.

“I’m honoured to have been appointed the new VP for the Secure Power division in the UK and Ireland, and look forward to working with our customers to solve their challenges in the wake of the energy crisis,” said Yeeles.

“I believe that ecosystem collaboration is vital to help this sector address its environmental impact, and that by harnessing the power of software and digitalisation to unlock new efficiencies and minimise energy waste, we will ensure data centres play a key role in a more resilient and sustainable future.”

Start-up uses data centre to heat public swimming pool

British start-up Deep Green is using waste heat produced by a small, on-site edge data centre to provide free heat to a public swimming pool.

Deep Green’s ‘digital boiler’ is a cloud data centre that uses immersion cooling technology to capture the heat produced by its servers and transfer it into a site’s existing hot water system, free of charge.

According to Deep Green, around 96% of the heat generated by its digital boiler is recycled. Exmouth Leisure Centre in Devon is the first swimming pool in the UK to make use of the technology, with the heat supplied by Deep Green predicted to reduce the pool’s gas requirements by 62% and lower its carbon emissions by 25.8 tonnes.

The installation at Exmouth Leisure Centre contains 12 servers and has the capacity to support cloud services, artificial intelligence, machine learning and video rendering.

Deep Green said there are potentially over 1,500 pools in England alone that could benefit from the tech, with energy costs for leisure facilities increasing 150% since 2019.

Mark Bjornsgaard, CEO of Deep Green, said, “By moving data centres from industrial warehouses into the hearts of communities, our ‘digital boilers’ put waste heat to good use, saving local businesses thousands of pounds on energy bills and reducing their carbon footprint. Pools are just the start and around 30% of all industrial and commercial heat needs could be provided by this technology.

“Organisations that are serious about supporting society and reducing their carbon emissions should not forget the massive impact of their computing needs. Deep Green now provides an answer.”

Peter Gilpin, CEO of LED Community Leisure (operator of Exmouth Leisure Centre), added, “Deep Green’s innovative technology will dramatically reduce our energy bills and carbon footprint, meaning we will continue to be a key asset for the local community.”

The project in Exmouth will be followed by similar installations in Bristol and Manchester.

ITV and Channel 4 to move to the cloud with BT

ITV and Channel 4 will move their terrestrial Freeview channels to the cloud via BT’s smart broadcast network, Vena.

As part of a multi-million-pound contract between BT and D3&4 – ITV and Channel 4’s joint venture company – BT will switch on cloud processing for regional TV content across all of D3&4’s Freeview channels.

Vena will give broadcasters access to applications that sit on top of software-defined media networks. D3&4 will use Vena to deliver digital coding and multiplexing, combining multiple content streams into one before distribution regionally. This will involve taking content from six playout centres and distributing it across a variety of UK regions, via two of BT’s data centres.

Vena will also utilise Ateme’s virtualised software video processing and delivery solutions.

According to a statement by BT, this is the first time that cloud processing of digital terrestrial TV channels has been completed on this scale, and Vena will boost efficiency, lower energy consumption, cut costs and enhance flexibility – without interruption to viewers.


Resilient and ready for anything

Lex Coors, Chief Data Centre Technology and Engineering Officer at Digital Realty, explores the ways in which the company is prepared to navigate change.

The last 12 months have been challenging for consumers, as well as businesses. We are all facing inflationary pressures, rising energy costs and potential shortages, adverse climate events and a volatile geopolitical landscape. This creates uncertainty, which is hard to navigate.

For data centres, which have become the central nervous system of society and the global economy, it’s essential that we’re equipped to deal with this uncertainty, in all its forms. Not only to support our own customers, but to support society as a whole. Data is now the invisible fabric that holds our digital economy and society together, and there’s growing awareness of its importance. Many governments recognise data centres as critical infrastructure, in acknowledgement of our role in powering daily life all over the world, enabling everything from healthcare provision to e-commerce, financial transactions, and the entertainment we consume daily.

Be prepared

As a global data centre provider, it’s in our DNA to be resilient to risk in all its forms, from cybersecurity through to climate change and energy shortages. Our preparedness also helps to ensure that the critical digital services that society relies on, such as emergency services, are available when needed most. For example, when it comes to dealing with potential problems like energy blackouts, we prepare for the worst-case scenario. We have back-up generators in place at every one of our facilities globally, which are equipped to power our operations for hours – even days – if required. We also have established service level agreements with our fuel suppliers that stipulate priority delivery of fuel. So, whenever we think it’s likely that an energy blackout could occur, we are able to stage fuel trucks nearby to prevent any downtime.

Crucially, our efforts at preparedness benefit other homes and businesses. By being able to operate our data centres independently from the grid during times of grid stress, we free up grid power for other customers who need it most. For example, last year in California, during an extended heatwave and wildfire season, our data centres were called on to proactively switch to backup power at several critical time periods when the state’s electric grid was at the limits of its ability to provide reliable power. By removing our sizeable load from the grid, we freed up enough utility power to keep lights and air conditioning on for more than 48,000 homes during this critical time.

We’re also constantly looking for renewable alternatives to diesel to power our backup generators. In France, for example, we have switched our generators to a low-carbon fuel known as HVO100 (hydrotreated vegetable oil), which is made from waste organic materials rather than petroleum, has up to a 90% lower lifecycle carbon footprint and is sustainably certified. We’re also trialling this in other markets, such as Spain, in the hope of rolling it out more widely across the business when supply becomes more readily available.

Be resilient

More broadly, our resiliency planning begins at the start of the data centre build. We imagine a data centre that’s prone to every kind of disaster – from earthquakes to floods to landslides – and design, build and operate accordingly. This ensures that our facilities are resilient and can cope with almost anything – from extreme weather conditions to energy grid disruptions. In turn, this means that we keep our global network of data centres running, so key public infrastructure, like the systems used by hospitals and transportation networks, can continue to operate.

We have a dedicated risk assessment team that works to determine the best location for a site. Our team looks at a variety of factors, from the business opportunity, to whether the site is elevated enough to avoid flood risk and can withstand other extremes, like an earthquake. They also evaluate proximity to hazardous chemical plants and flight paths. Our decisions on where to build data centres are made based on these findings, and we only build in locations when we’re confident we can offer maximum resiliency.

Once a data centre is up and running, there are several things we do to ensure it remains operational. For one, we have contingency plans to provide power if needed; for another, we monitor weather and climate events and use a situational awareness system to manage and react to threats. In the event of problems with the stability of the power grid, we can switch to our backup power systems to keep our data centres online.

While fortunately rare, there have been times when extreme conditions have called for us to implement resiliency measures to ensure that we stay operational for our customers. When Storm Uri blew into Texas in February 2021, crippling the state’s power grid and causing blackouts for homes and businesses, we were prepared and drew on our emergency power systems (including our diesel reserves and robust resupply networks) to maintain 100% uptime for critical digital services. We also redirected excess fuel supplies to our customers in Texas for use at their other non-Digital Realty properties.

Be ready for anything

In the future, we’ll continue to work closely with governments and energy suppliers around the world to ensure data centres continue to be recognised and prioritised as critical infrastructure to the economy and society. Our commitments to sustainability and energy efficiency will also be a key driver of innovation as we look to reduce our reliance on fossil fuels, harness sustainable resources and participate more and more in the circular economy.

We’re proud of the work we’re already doing, whether it’s reaching one gigawatt of certified IT capacity or matching the power we use across Europe with 100% renewable energy. However, we know we need to go further, push the boundaries, and accelerate our efforts to reduce our climate impact.

We all rely on a digital society and the digital economy whether we know it or not. It touches nearly every aspect of our daily lives and facilitates the functioning of our modern society. We can never do away with risk or uncertainty. But we can do our best to plan for and anticipate risk to ensure we are as resilient as possible and provide certainty to our customers and consumers in an increasingly uncertain world.

Sponsored feature

Defining moment


While edge computing has many definitions, deploying IT systems at the edge is a fast-growing trend with considerable market impact (>$100 billion). The pendulum is swinging back hard and fast – from mainframes to client/server to the cloud and now back to edge. Edge computing will never completely displace the cloud or corporate data centres, but it will significantly impact how IT teams think about and architect their systems. To understand why, let us look at the market, edge definitions and near-term predictions for edge computing.

Why defining ‘the edge’ became cloudy

Data production at the edge continues to grow exponentially as applications increasingly must be run locally. Incredible volumes of data are being created at the edge – from energy and manufacturing, to retail and healthcare, to first responders and ‘smart cities’ – as well as data from cameras, digital sensors, POS systems and a host of other IoT devices.

To improve customer experiences, efficiencies, and profits, data must be processed and analysed where it is created, which for these industries is the edge. This is challenging for organisations and enterprises relying on cloud strategies and architectures, as cost, latency and reliability issues abound in cloud implementations, let alone the growing regulatory constraints on data location and movement.

With increasing Twitter chatter from vendors, analysts, and media, it is no wonder there is confusion around defining edge. Business and IT leaders have spent the last decade thinking of and implementing ‘cloud first’ strategies. Some hesitate to make edge infrastructure investments. Despite the discord, we know what is true about the edge and what will continue to be debated.

The truths about the edge

According to Gartner, “edge computing is part of a distributed computing topology where information processing is located close to the edge, where things and people produce or consume that information.” The edge is any location outside the data centre or cloud where an organisation needs to run applications locally to minimise the need to process data in a remote data centre. The edge could be an oil rig in the Atlantic Ocean, a manufacturer’s 10 operations sites or a retail store chain’s 10,000 locations.

Most edge sites need cloud or corporate data centre connectivity to access certain services. Still, many organisations avoid deploying compute and storage at the edge, only to find that depending on cloud services is expensive, does not allow for real-time application processing, and risks internet outages and application downtime. Some cloud enthusiasts are realising they are spending unnecessarily with a ‘cloud first’ strategy.

Bruce Kornfeld, Chief Marketing and Product Officer of StorMagic, explores the definition of edge and shares some predictions of where the edge is headed over the next decade.

However, managing, using and protecting data at the edge is difficult as most tools are designed for the data centre or cloud. IT managers struggle to classify which edge data must remain there, be archived in the cloud, backed up to a data centre or deleted altogether. These challenges pigeonhole decision-makers into expensive cloud-led application, data and management options rather than exploring flexible and futureproofed strategies that address edge data’s specific parameters and uses.

What the edge is not

Edge and IoT are frequently mistaken for each other, which clouds their distinction. IoT is actually an edge use case or an edge subset. The same goes for the conflation of cloud and edge. Some business leaders misconceive that to deploy an edge site, they should employ the same strategy as they do with the cloud.

Colocation facilities (colos), which businesses rent for servers and other computing hardware, are not the edge. Colos can have hundreds or thousands of servers, and petabytes of storage. They really are remote data centres, but owners/providers of such facilities sometimes promote themselves as ‘edge data centres’ to leverage the ‘trendy’ term. Yet if a retailer tried to use one of these ‘edge data centres’, they would get the same result as running apps in the cloud – with the same problems described above.

Problems with the edge

The edge presents challenges for IT departments that try to build infrastructure to run applications, store data, and analyse it at these small and sometimes remote sites. Most infrastructure hardware and software were designed with the cloud and large data centres in mind. These solutions don’t work at the edge due to the cost and complexity required to provide a 100% uptime environment to run all local applications.

Edge sites are small and don’t have the space, power and cooling needed for data centre-class hardware. Most edge sites look for small-footprint servers and storage – typically some type of hyperconverged solution. It has been difficult and expensive to deploy edge computing infrastructure that provides 100% uptime and the performance needed to run all local applications. However, many vendors have solved this problem, and most integrators and VARs now have tools to help end-users do just that.

Another major problem with edge computing is complexity. These sites often lack IT staff to perform maintenance or management. Since edge computing systems are typically critical enterprise infrastructure, organisations must acknowledge the complexity and seek solutions that are simple to install and operate and be confident in the knowledge that all edge sites can be managed remotely, from a single pane of glass.

What will the edge look like in the future?

In the next five years, we expect to see edge problems addressed through industry innovations around the way applications and data are deployed and managed. Organisations today want simple-to-use hardware, software and management tools designed specifically for the edge so they can run their applications and analyse their data in real-time, but no such tool exists in the market, forcing enterprises to pay for more comprehensive systems that are overkill.

Edge applications will be increasingly container-dependent, which can save an organisation money by eliminating the expensive ‘hypervisor tax’. Additionally, data services will need to improve. Today, data gets created, stored and usually backed up, and some of it is sent to the cloud or a data centre for further processing and analysis. As organisations’ need to take action on data at the edge continues to grow, storage systems must evolve to support this requirement.

In the next 10 years, edge computing will have more data and processing than the cloud, and the edge will be the prominent location in which data and applications are managed and processed. Innovation will happen faster at the edge than in the cloud. This shift will galvanise a change in server design, storage capabilities and management software, with products becoming smaller, faster and easier to manage. The cloud won’t go away, but will not be used heavily for edge computing use cases.

The right definition of edge today

Though the debate around the latest or true definition of the edge will likely continue, one might argue the right definition of edge for today is as follows: edge is anywhere outside the data centre or cloud where an organisation needs to run applications locally to avoid the cost, latency and reliability risks of doing everything in the cloud.


The future is edge


Europe, Persistent Systems, explores what edge computing and data analytics mean for the future of tech.

There’s no escaping it: in today’s world, data is the foundation of every modern business. Whether it is a huge tech conglomerate or a small independent shop, every organisation creates and collates reams of data every day, if not every second.

With so much data being generated, the key to success is an organisation being able to utilise this data to the best of its ability.

With new cloud-based data management systems accelerating innovation, areas like edge computing and edge data analytics are becoming more prolific – helping companies achieve faster insights, improved efficiency, better security, and generally pushing the boundaries of what the cloud made possible.

Utilising the cloud

The cloud has undoubtedly been the single biggest enabler for data analytics innovation in the modern day, with the majority of organisations leaning on cloud computing as a key component of their digital transformation journey.

For many, leveraging the cloud means using it to help manage the reams of data they are collating and creating, but this approach has its pitfalls. There’s often a common misunderstanding that moving to the cloud means moving all data to the cloud – but, depending on your individual business requirements, this is not always needed.

Challenges around transferring data, and the speed at which this happens, mean that organisations must consider what is absolutely necessary for them to move to the cloud, to avoid any errors, confusion, or compromises in quality. And although it’s quicker and more efficient at data transfer than the internal data centres of days gone by, for many organisations a remote infrastructure run on the cloud doesn’t cut it anymore.

Technology on the edge

For some businesses, the remote infrastructure of the cloud is simply not fast enough in transferring data from point A to point B. This is where edge computing and edge data analytics come into play.

And although it may sound very much like just another buzzword,

those in the know understand that the word ‘edge’ is actually incredibly helpful in understanding this technology, as it refers to a literal geographic location. Edge computing is computing that is done near or at the data source – it is literally on the edge.

My favourite analogy for this is – if using the standard cloud is the equivalent of sitting in a restaurant while the chef cooks your meal in the kitchen, using edge computing is the equivalent of sitting at the chef’s table. You’re not fully in the action but you’re on the periphery, as close to it as possible.

With nearly every modern business having the ambition to be ‘data-led’, edge computing is offering a new way to process vast amounts of data in real-time without the lag experienced with the cloud.

To stick with my earlier analogy, edge computing cuts out the waiter; your food goes straight from the chef to you in as short a distance as possible, and it’s this speed that is leading to huge technological advancements.

Innovations powered by edge

Let’s use Tesla as an example. The car company is re-writing the rule book when it comes to what’s possible in automobiles, thanks to the use of edge computing and data processing.
With each car collecting vast amounts of data in real-time, the most efficient way to analyse and action the information is within the vehicle, at the point of collection. While using a remote cloud would only take fractions of a second longer than using edge computing, removing this lag is crucial in the advancement of technologies like automatic braking – those milliseconds can be the difference between an autonomous car colliding with a pedestrian or not.

It’s real-time analytics at the edge like this, that is helping to take ideas like autonomous cars out of science fiction and into reality.
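To put rough numbers on why those milliseconds matter, here is a minimal Python sketch that computes how far a vehicle travels while waiting for a braking decision. The latency figures are illustrative assumptions, not Tesla’s actual numbers.

```python
def distance_travelled(speed_kmh: float, latency_ms: float) -> float:
    """Metres covered during one decision round-trip at a given speed."""
    return (speed_kmh / 3.6) * (latency_ms / 1000)  # km/h -> m/s, ms -> s

# Assumed figures: ~120 ms for a cloud round-trip, ~5 ms for on-vehicle processing.
print(f"cloud: {distance_travelled(100, 120):.2f} m")
print(f"edge:  {distance_travelled(100, 5):.2f} m")
```

At 100 km/h, the assumed cloud round-trip costs a few metres of travel before the brakes can act, while on-vehicle processing costs only centimetres.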

The benefits

Using edge computing and edge analytics has allowed organisations to let the data model lead the process, rather than analysing the data in order to build the model, and this switch in thinking brings with it many benefits.

Having the information processing happening on the edge, where data is created and consumed, can bring huge efficiency gains, improve security, and accelerate data flows to achieve true real-time data analytics.

It’s an important development for the financial sector, for example, where a delay in data transfer of just milliseconds can have a great, and costly, impact on trading algorithms. Speedy data transfer is also paramount in areas like healthcare, where data delays can impact life-ordeath decisions.

On top of the benefit of speed that edge computing and data analytics offer, they also have the benefit of protecting systems from malfunction and hacks due to the fact that they decentralise the data process.

If all autonomous cars, for example, were hooked up to a centralised system it would be a very real concern that someone could hack that system and gain control of every car at once. Using edge computing, however, keeps the data localised so it’s less vulnerable to a malfunction – intentional or not – as the issues are localised to that single car, not the entire network.

Edge in the mainstream

The key to being able to utilise edge computing and advanced data analytics as they filter down into the mainstream is having the right data engineering in place. This allows businesses to ensure that the data they collate, process, and consume is of the right quality and in the right format for teams to use.

Implementing DataOps can help with this as it can improve how data is processed and allow data science teams to begin building automation models to do a lot of the heavy lifting.

Ultimately, edge computing, data analytics, and cloud innovations are of no use if the data they rely on is not in a usable format to begin with.

If mainstream businesses want to see the benefits of better insights, true ‘real-time’ data and consequently the better informed and more agile decision-making that comes with technological advancements like edge computing, then they can’t skip the hygiene factor.

Any investment in technology should align with a long-term business strategy or it will only lead to problems down the line. For example, we are now seeing a lot of companies reduce the data they have stored on the cloud, as the initial knee-jerk reaction to achieve digital transformation by putting everything on the cloud has proved inefficient and costly.

Some businesses and industries can benefit greatly from edge computing, but only if they plan and prepare for success.

Q2 2023 www.datacentrereview.com 13

The middle ground


Edge computing is a very exciting topic for most of us. It is the promise that a whole new set of connected devices will deliver brand new services and customer experiences. In the modern, digital world in which we find ourselves, connectivity has to be excellent where people live, work, and interact with each other.

The idea of edge computing is to partly move data storage and processing from large data centre facilities, closer to the edge of the local network and to the end-users. In the local network, the lowest latency is necessary for modern devices and apps to deliver their promises. If the latency is not adequate, the user experience disappoints and end customers get frustrated quickly.

This could jeopardise the business fundamentals behind edge computing: investment in such a massive digital infrastructure is justified by the assumption that the number of users equipped with IoT compliant devices living, working and interacting in city centres will grow exponentially.

What edge computing looks to do is interconnect devices horizontally. The more people you have in a given area, the more interconnections you can make, and the more distributed data storage and computing power become. I am not an engineer myself, but an IT expert explained to me that the same social network app running on your phone could rely on the phones around you to store and process the data your own phone needs.

ESG and other challenges around traditional edge

Sharing and connecting – these are words that are very much at the core of ESG.

Being fully ESG compliant is not a promise that owners and operators of large data centres outside the larger cities in the FLAP-D markets – Frankfurt, London, Amsterdam, Paris, and Dublin – can always make. The intent to be ESG compliant is there, and companies in the current climate will be hard pressed to ignore it. Companies have, so far, put in place lots of very good initiatives to become fully ESG compliant as soon as possible.

But they also have bigger fish to fry at the moment: for example, some governments or parliaments restrict access to the grid or force data centres and operators to concentrate in specific geographical areas. Meanwhile, land is becoming a scarce resource, rooftop leases for the installation of masts are being terminated at an increased rate by landlords (the only valid alternative so far being 25m-high monopoles in the streets), and 5G networks are being implemented while broadband capacity still often lags behind.

Building an edge network in the largest cities will take ever larger data centres to provide massive backup and computing power to support the local edge network, perfect connectivity through towers, masts and subterranean dark fibre networks, and a dedicated, reliable source of power. The building of a performant digital infrastructure in the FLAP-D markets is happening at a high pace, but we still have to overcome some serious and significant challenges to make this happen.

Medium edge

In the meantime, others have begun exploring different approaches to edge, with some choosing to bet on a middle ground: medium edge.

In Sweden, according to sources, a new data centre company is going to be announced soon with the business objective of becoming a regional edge data centre platform, connecting not only larger cities, but also the residents of middle-sized cities.

That company aims to set a new benchmark in terms of ESG attributes. Its ambition is to become the most circular and ESG-aligned data centre operator in Sweden, by not only minimising its environmental footprint during both its construction phase and its operations, but also maximising the utility of its waste heat.

From a social and community perspective, the project is presented as a catalyst for positive change: the economically challenged area where it is based will become a place where low-income residents are offered free IT software, and where the municipality can attract businesses keen to take advantage of extreme connectivity.

This project can be seen as an alternative, yet complementary, approach to traditional edge computing, which exponentially increases connectivity inside major cities but with a sometimes debatable ESG footprint.

Conceptually, medium edge is quite aligned with the traditional concept. If you zoom out from the larger cities, a whole country or region becomes the geographical area where the horizontal connections can now be made, reducing the pressure on location for data centre infrastructure.

Challenges around medium edge

Medium edge is an interesting idea on paper, but it will have to overcome some significant challenges as well. There may be more land available and potentially supportive local municipalities; however, access to power might be problematic in more remote areas.

Politically and socially, not unlike the debates which took place when wind turbines started to appear in the countryside, some loud voices are being raised against the data centre industry within the farming community.

In the Netherlands, attracted by the convergence of European internet cables, temperate climates, and an abundance of green energy, some hyperscalers are attempting to establish large footprints in the countryside, but there is local resistance to these projects by impacted parties, such as farmers, at a time when they themselves are under increasing pressure to meet their own ESG targets.

Medium edge will have its own problems, undoubtedly, however for those people living and working on the edge of the edge, interconnections are crucial.

, Partner at Eversheds Sutherland, explores whether edge computing is the key to a digital revolution, and whether ‘medium edge’ could be the way forward.

Great expectations

Can edge computing deliver on its promises in 2023 and beyond, asks Chris Harris, Vice President, Global Field Engineering at Couchbase.

Despite its name, edge computing isn’t left field. In fact, it’s rapidly growing in popularity, with more businesses exploring use cases for the approach. Gartner predicts that by 2025, 75% of enterprise-generated data will be created and processed outside traditional centralised data centres, while IDC estimates worldwide spending on edge computing will reach nearly $274 billion by 2025.

And with demand for innovative user experiences and high performance only increasing, it makes sense for organisations to process and analyse data close to where it is created – via edge computing. This drives productivity and cost benefits.

The future looks bright with edge computing – but can it live up to its billing?

How did we get here?

Whether it’s financial services, retail or hospitality – digital transformation has given end-users innovative ways to engage with organisations. Customers now expect slick user experiences powered by failsafe applications, and organisations that fail to provide this are likely to lose out to the competition.

This rise in user expectations has coincided with the vast increase in connected IoT (Internet of Things) devices. Smartphones, wearables, and even connected cars act as digital touchpoints for organisations – collecting and exchanging data in real-time and providing important user feedback to providers. They are also incredibly numerous: IDC estimates that by 2025, there will be 55.7 billion connected IoT devices in existence, generating almost 80 zettabytes (ZB) of data.

With more digital services on offer, fed by more devices than ever, organisations are dealing with unprecedented levels of data. This data is crucial to understanding customer journeys, trends, and issues – which all help organisations to improve services and drive growth. But only if it can be processed, stored and accessed efficiently.

Passing this ever-increasing amount of data back and forth between these touchpoints and the business core results in skyrocketing bandwidth costs and increases the possibility of disruption from latency and network outages. This is where edge computing comes in.

Gaining the edge

Edge computing is a distributed computing model, where data processing and storage are carried out on the periphery of the network, closer to where they’re required. This minimises the need to send data back and forth to a centralised server or cloud, reducing bandwidth usage and minimising latency issues.
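As a rough illustration of that latency argument, here is a back-of-the-envelope comparison of a round trip to a distant cloud region versus a nearby edge node. All distances and processing times below are assumptions for illustration, not measurements:

```python
# Back-of-envelope latency sketch: why processing near the data source
# beats a round trip to a distant cloud region. Figures are assumed.

SPEED_IN_FIBRE_KM_PER_MS = 200  # light travels roughly 200 km per ms in fibre


def round_trip_ms(distance_km: float, processing_ms: float) -> float:
    """Round-trip propagation delay over fibre plus server processing time."""
    propagation = 2 * distance_km / SPEED_IN_FIBRE_KM_PER_MS
    return propagation + processing_ms


cloud_ms = round_trip_ms(distance_km=1500, processing_ms=5)  # remote cloud region
edge_ms = round_trip_ms(distance_km=10, processing_ms=5)     # local edge node

print(f"cloud: {cloud_ms:.1f} ms, edge: {edge_ms:.1f} ms")  # cloud: 20.0 ms, edge: 5.1 ms
```

Even before congestion or last-mile effects (which this sketch ignores), the propagation delay alone accounts for the milliseconds that matter in time-sensitive applications.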

Crucially, it allows for much faster decision making than a more traditional centralised computing model can afford, making all the difference when it’s applied to time-sensitive services or applications. This move towards decentralised decision making is exciting for organisations.

Early models of edge computing still required tasks to be carried out by a centralised cloud server, which limited their capability. But this is now changing.

More edge devices, and the database software embedded in them, are now powerful enough to process data locally without needing a central cloud server. As a result, enterprises can seamlessly spin up new business-critical edge deployments capable of working offline. If a server is required, it can be located much closer to these devices, at the network edge. And if devices must interact with a centralised cloud server, innovations in serverless computing mean organisations only pay for what they need, when they need it.

These developing edge capabilities are driving true innovation in edge computing – finally realising the huge potential in the technology.

Taking the next step

For Edge 2.0 to become a reality, organisations must plan their deployments carefully, ensuring they invest in the right kind of database technologies – from the cloud data centre to the edge layer, and finally, edge devices themselves. As far as connectivity permits, data should be synchronised across these edge devices, and they must also be able to process data and deliver real-time insights offline, if needed. Reaching this point will demand the near-instant, resilient, secure and bidirectional synchronisation of data between cloud and edge data centres.
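The bidirectional synchronisation described above can be sketched in miniature. This is purely illustrative – real edge data platforms use far richer revision tracking and conflict resolution than the simple last-write-wins policy assumed here – but it shows the basic idea of two stores converging in both directions:

```python
# Minimal sketch of bidirectional edge/cloud sync with last-write-wins.
# Each store maps key -> (timestamp, value); purely illustrative.

def sync(edge_store: dict, cloud_store: dict) -> None:
    """Merge two stores in both directions, keeping the newest record per key."""
    for key in set(edge_store) | set(cloud_store):
        candidates = [r for r in (edge_store.get(key), cloud_store.get(key)) if r]
        winner = max(candidates, key=lambda record: record[0])  # newest timestamp wins
        edge_store[key] = winner
        cloud_store[key] = winner


edge = {"patient:1": (2, "updated offline at the clinic")}
cloud = {"patient:1": (1, "stale copy"), "patient:2": (3, "added centrally")}

sync(edge, cloud)
print(edge == cloud)  # True: both sides converge on the newest records
```

The offline update made at the edge wins over the stale cloud copy, while the centrally added record flows out to the edge – the convergence property that offline-first edge deployments depend on.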

This kind of technology is already adding value for organisations in all sectors. Take the mobile healthcare provider BackpackEMR, for instance, which seeks to improve the medical experience in rural communities of the developing world. Frequently without an internet connection, these clinics can’t use traditional cloud services. But they can share patient data seamlessly across their edge computing nodes, raising the quality of the services they offer.

Such use cases represent real benefits for organisations. In 2023, more businesses will strive for this decentralised approach to improve reliability and support for high bandwidth, low-latency applications. If edge computing delivers this, and fundamentally changes user experiences for the better, it’s safe to say that this approach will more than live up to expectations.


Powering on

What are the significant developments happening in the sector today?

[AC]: Today, data underpins our professional and private lives. Every one of us processes huge amounts of data daily, stored on millions of servers dispersed across the globe. And we are just at the beginning of the digital transition journey.

In the future, our society will become more and more data driven, which simply means the amount of data will grow exponentially, and the need for secure and fast data management (storage, backup, processing, fast transmission) will become even more important.

If we just look at the vast amount of opportunities data will enable (AI, the Metaverse, augmented reality, autonomous systems operations, etc.), we can understand that we will, in the future, need an incredible number of physical infrastructures where we can store and process our data.

Thus, the data centre’s ecosystem will continue to evolve: technical and regional requirements will become more stringent; new architectures may rapidly appear to enable a more efficient usage of resources; power demands per data centre build may just continue to grow.

What are the challenges ahead?

[AC]: The data centre industry faces an extremely vast number of challenges, starting from the conceptual and planning phase, moving through erection and construction, and ending with the operational life cycle. As I represent the supply chain side of the business, and specifically Riello UPS, I’d like to focus on the topic of secure electrical power distribution.

As anticipated, data centre infrastructure has become extremely power hungry. Today, it is quite normal to see a data centre building exceeding tens of MW of installed power. Because of this, data centres have fallen under the spotlight of public institutions and utility operators, given the challenge of making such large amounts of electrical power available.

Sustainability has in fact become the major challenge for humanity. Back in 2015, over 190 countries signed the Paris Agreement, supporting the reduction of global CO2 emissions, and many organisations have since set the ambition to become carbon neutral by 2030.

On top of this, the data centre industry is facing tough headwinds, including supply chain difficulties and geopolitical instability, which make the business scenario even more complex and have fed a profound energy crisis and inflation. All of this has translated into a strong increase in the cost per kWh, which has more than doubled over the last two years.

So the key question we at Riello UPS are trying to address and resolve is: “How can we help the data centre industry become more sustainable and, at the same time, minimise the operational cost?”

Dr. Antonio Coccia, VP Business Development, Data Centre Solutions at Riello UPS, gives DCR insight into some of the challenges facing the data centre sector, as well as Riello’s new Multi Power UPS product line.

What does the sector need to be wary of in terms of current supply chain issues?

[AC]: Despite the challenges, there is currently no sign of an industry slow-down. Data centre operators will pay even more attention when planning infrastructure and selecting equipment to achieve all relevant metrics – but overall, the industry’s speed will not decrease in the next four to five years. Instead, even more data centres are expected to appear across the globe. Most likely, competition among the different operators will push development cycles further to the edge.

At the same time, as previously noted, the overall manufacturing industry is suffering from both supply chain shortages and increased material costs. Today, general trends show lead times for UPS systems of up to six to seven months if the units must be equipped with lithium storage. If customers can’t wait that long, the cost of manufacturing increases, because unplanned material must be produced and delivered on an expedited basis.

Under these high-pressure circumstances, we must also expect that the quality of the equipment produced may suffer, with an impact on the security of operation for data centres.

How can we properly manage this push-pull situation? There are a couple of dimensions to consider: better harmonising demand (from operators) with supply capacity (from vendors); involving the vendors of critical equipment at an early stage; and shifting the production model from pure manufacture-to-order towards manufacture-to-stock, which allows faster reaction times to customers’ requests.

This last point is a fundamental pillar for the Riello UPS team, as our management has guaranteed a combination of these two production models to better satisfy customers’ requests.

What is Riello’s approach to developing new UPS technology?

[AC]: Our approach is such that each of our new product families responds to a ‘5S’ design philosophy, where the five Ss stand for: secure, smart, scalable, sustainable and serviceable. But the major battlefield for future innovation will be the central ‘3S’ dimensions – smart, scalable and sustainable – where ‘smart’ features enable autonomous asset operation (advanced monitoring, diagnostics, prognostics, etc.); ‘scalable’ features allow easy system-level expansion to satisfy the fast deployment cycles of data centre operators; and ‘sustainable’ addresses the equipment’s energy efficiency.

What was the inspiration behind the development of the Multi Power UPS?

[AC]: With our newly launched Multi Power2/Multi Power Scalable Platform (an easily scalable system ranging from 500kW to 1600kW per block – parallelable, of course) we have clearly demonstrated that sustainability and energy efficiency are the most important values to us. With our new product families, we reach up to 98.1% efficiency at system level in double-conversion mode. This ultra-high energy efficiency enables a multitude of benefits for operators, from energy bill reduction to achieving sustainability metrics.

What did the R&D process for Multi Power look like?

[AC]: This was a three-year investment plan for us – extremely challenging but, looking at where we are today, very rewarding! The exercise was multidimensional. We started by analysing all base research functional areas (investigating system architectures, semiconductor technology, passive components, cooling, embedded systems, auxiliaries, connectivity, and advanced control algorithms) and completed it by looking at the manufacturing disciplines (component homologation, design to cost, serviceability, validation). We had to calibrate each one of these areas.

What makes Multi Power different from other products on the market?

[AC]: I am not afraid to say that the Multi Power2/Multi Power Scalable families are defining a new standard in the critical power protection industry. We have introduced a high-power product family showing the highest efficiency ever on the market and at the same time, maximising power density, minimising the total cost of ownership, and enhancing smart functionalities.

What is Riello’s advice to owners and operators when it comes to choosing a UPS?

[AC]: It is necessary to properly evaluate the following:

1. installation conditions;

2. energy storage requirements;

3. energy efficiency level (the higher the energy efficiency, the lower the energy bill and the cooling equipment requirements);

4. the equipment total cost of ownership (including energy efficiency, spare parts, service management, and consumable component replacement strategy).

Then select a reliable partner with an excellent track record in managing SLAs (service level agreements) and quality from all perspectives (product quality, process quality and customer care quality).
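To illustrate how the energy efficiency level feeds directly into the energy bill, here is a back-of-the-envelope sketch. The load, tariff and efficiency figures are assumptions chosen for illustration only, not Riello data:

```python
# Illustrative sketch: how UPS efficiency affects the energy side of
# total cost of ownership. All input figures below are assumptions.

HOURS_PER_YEAR = 8760


def annual_ups_energy_cost(load_kw: float, efficiency: float,
                           tariff_per_kwh: float) -> float:
    """Yearly cost of the grid energy a UPS draws to deliver `load_kw`."""
    input_kw = load_kw / efficiency  # power drawn from the grid, incl. losses
    return input_kw * HOURS_PER_YEAR * tariff_per_kwh


load_kw = 1000   # 1 MW of protected IT load (assumed)
tariff = 0.25    # cost per kWh in EUR (assumed)

cost_legacy = annual_ups_energy_cost(load_kw, 0.94, tariff)   # older UPS
cost_modern = annual_ups_energy_cost(load_kw, 0.981, tariff)  # high-efficiency UPS

print(f"Annual saving from higher efficiency: EUR {cost_legacy - cost_modern:,.0f}")
```

At these assumed figures the efficiency difference alone is worth tens of thousands of euros per year, before counting the reduced cooling load from lower UPS losses.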

What are the next steps for Riello UPS in the data centre industry?

[AC]: We are working on a strategy of three pillars: a strong commercial focus (from sales, through project operations and service support, to properly satisfying our global customers); continued firm investment in research and innovation, as the instrument for answering future challenges; and further expansion of our manufacturing capabilities. On top of this, we will always provide our customers a true 360-degree ‘Made in Italy’ experience.


Building on the legacy

Could retrofitting legacy data centres help us meet ever-increasing demand while providing lasting benefits to the environment?

Current trends in data generation are driving power density and placing stringent demands on data centres. With the world’s insatiable digital need, it’s estimated the number of online devices will reach 29 billion by 2030, and the volume of generated data will reach 175 zettabytes by 2025. As a result, data centres consume between 1 and 3% of global electricity, creating growing urgency – and opportunity – to make data centres more sustainable and mitigate their environmental impact.

Retrofitting existing assets

Demand for increasing data centre infrastructure requires scalable and quick-to-deploy solutions, and for many data centre clients, retrofitting is the answer. Retrofitting is the process of improving, upgrading or renovating existing facilities, adding new equipment or expanding capacity to meet changing demands and technologies. Significant investment in retrofitting is anticipated to help reduce the emissions associated with new builds and demolitions. Across the board, upgrades and expansions of all sizes are now as important to our market as greenfield builds.

We know that sustainability, efficiency, and optimisation are key determining factors for owners choosing the retrofit route to meet the changing market demands. Retrofits of mission critical facilities fall into one of two categories. The first is expanding or renovating an existing data centre. The second is converting a building designed for an alternative use into a data centre. Both have substantial environmental benefits and cost efficiencies for clients.

While there will still be a concentrated effort around prefabrication, alternative fuel sources and delivery methods to reduce environmental impact, clients undertaking the retrofit process will be ahead of the sustainability curve, as retrofitting reduces greenhouse gases by up to 75% and increases the lifespan of the facility. Not only do retrofits help achieve sustainability efforts, they also offer a huge advantage in time savings. With the average data centre taking between 18 and 24 months to build, a retrofit can go through design and construction, creating a production environment, in less than a year. Permits for retrofits often take less time to obtain than those for a full greenfield project, expediting speed to market. As an industry, there is an approximate 30% time saving when clients choose the retrofit route – mainly due to site development and utility savings. This time saving is the largest contributing factor to the success of the retrofit process.

Since it’s not always possible or cost-effective to build new facilities, and since space is often at a premium, making the most of existing floorspace is often a necessity. Legacy data centres have become prime targets for complete retrofits. With the shell and core of the building already constructed, data centre clients can greatly reduce their time to market by focusing on efficient mechanical and electrical systems and network optimisation, which creates substantial energy savings through the elimination of redundant server equipment.

Facility modernisation

Cost effective and energy reducing, retrofitting delivers clear benefits when upgrading data centre infrastructure, electrical equipment, cooling optimisation, and servers, including virtualisation. Using an integrated approach, these facilities undergo significant design and engineering work, coupled with critical facilities commissioning, to meet new requirements and standards, replacing ineffective operating systems.

Typical upgrade and refresh cycles to modernise data facilities occur roughly every five years, with modernisation including server virtualisation, improving cooling systems to lower PUE, optimising air flow, use of energy-efficient UPS, PDUs and generators, and upgrading facility monitoring and controls. These upgrade cycles are happening in significant numbers – more than we’ve ever seen before as an industry – because facilities constructed over the past decade now require modernisation to align with the technological changes and digital advancements driving efficiency. The modernisation process is technology driven, as changing out legacy servers to optimise the electrical and mechanical infrastructure ensures the highest level of sustainability.

A retrofit design and construction team should always incorporate flexibility and redundancy into the design to ensure technological and structural expansion requirements are met in the future. Professionals with experience in live facilities are critical to a retrofit team, applying their knowledge to mitigate potential challenges during an upgrade and to align with MEP, facilities, and operations teams. When working in a live production facility, the challenge of negotiating risks to uptime and operations also requires an experienced authority that knows all phases of a commissioning plan, including installation, start-up, integration of existing and new infrastructure coupled with hardware and software testing.

Adopting innovative technologies

With a building retrofit, contractors can meet targets for reduced emissions, embodied carbon and energy consumption. Understanding how a data centre performs is key to identifying ways to improve it. As efficiencies are identified, the retrofit will encompass major modernisation for optimum performance with sustainability and carbon footprint considerations.

Knowing that up to 50% of power used in a data centre goes to cooling is important and can drive introduction of new cooling technologies. A new approach to cooling systems creates longer technology lifespan and has a significant impact on the bottom line and reducing carbon emissions.

All data centre cooling technologies impact power consumption, and several options can lead to more sustainable practice. We now know that servers can operate at higher ambient temperatures than we’ve run them at in the past, and this is a quick and easy way to cut energy use and improve PUE. Another option is liquid cooling that directly removes heat from servers through immersion systems using a synthetic liquid with a low boiling point. This liquid turns into vapour on contact with the machinery, carrying heat away as bubbles rise to the surface, where the gas condenses back into liquid and the cycle repeats. Technologies that change the way cold air is used conserve energy and also lower PUE.
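The PUE improvements mentioned above come down to a simple ratio: total facility energy divided by IT equipment energy. A quick sketch, using assumed and purely illustrative power figures, shows how cutting cooling power moves the number:

```python
# Back-of-the-envelope PUE (Power Usage Effectiveness) sketch.
# PUE = total facility power / IT equipment power. Figures are assumed.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE for a facility given IT, cooling and other (lighting, losses) power."""
    total_kw = it_kw + cooling_kw + other_kw
    return total_kw / it_kw


before = pue(it_kw=1000, cooling_kw=800, other_kw=200)  # legacy cooling
after = pue(it_kw=1000, cooling_kw=400, other_kw=200)   # after a cooling retrofit

print(round(before, 2), "->", round(after, 2))  # 2.0 -> 1.6
```

Halving cooling power in this hypothetical facility takes PUE from 2.0 to 1.6 – every watt of overhead removed counts directly against the ratio, which is why cooling retrofits dominate modernisation plans.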

Significant energy savings and network optimisation can also occur within data centre retrofits when converting from a legacy data centre layout to hot and cold aisle containment systems. The energy-efficient layout of hot and cold aisles manages airflow to lower cooling costs and conserve energy, as cold air intakes run one way and hot air exhausts run the other. This eliminates hot spots and significantly reduces the risk of equipment failure through regulation of equipment temperature. Since the aisles are isolated, air temperatures don’t mix, and HVAC systems operate in the most energy-conservative way.

The need for data centres is only going to increase to support global digital consumption, and as it does, owners are prioritising retrofitting and changing technologies for efficiency and sustainability. As an industry, the retrofit approach is one way to meet the ever-increasing demand, keep energy costs in check, minimise disruptions to uptime during construction, and create a flexible and cost-effective approach with lasting benefits to the environment.


Getting with the programme

How can green IT practices help companies meet organisational ESG goals? Prashant Ketkar, Chief Technology and Product Officer at Alludo, gives his insight.

From creating, implementing and running new eco-friendly operations, to elevating the importance of diversity and inclusion, and even taking part in wider climate change initiatives – organisations are making it clear to investors, regulators, staff, and customers that environmental, social and governance programmes are a key priority in 2023.

Over the coming years, more and more companies will include energy-efficient and low-emission operations in their corporate strategy and restructure their operations accordingly. At the same time, organisations are increasing their use of cloud services, with many creating multi-cloud and hybrid cloud environments as part of their ongoing digital transformations. The timing is serendipitous, because cloud providers that prioritise energy efficiency or renewable energy sources can help enterprises achieve their ESG goals.

Another contributor to ESG elevation is the advancement in hybrid and remote work. Thanks to this phenomenon, organisations can downsize and redesign their office space, thereby additionally reducing their carbon footprint. By taking such measures, they are fulfilling their sense of duty to their employees and the environment. In offering flexible working options, they improve the work-life balance and relieve the burden on the planet with their sustainable and resource-saving commitment.

But where do you even start with ESG programmes?

Cue the cloud!

Cloudy with a chance of ESG

Migrating data to the cloud plays a crucial role in reducing energy consumption in companies. This is because cloud servers enable rapid access to system resources such as data storage, databases, and software. As a result, cloud-based data management and reporting can help enhance ESG efforts by automating processes and standardising the data. This provides increased transparency within the company as executives seek to better understand diverse social and environmental risks.

A major benefit of cloud technology is the consolidation of multiple data sources into a modernised architecture. This helps organisations reduce siloed working, encourages knowledge-sharing, and creates a single authoritative source for data, while moving physical data centres to the cloud – thus reducing emissions. Another benefit of using the cloud is increased efficiency for businesses: having timely access to reportable data increases transparency for stakeholders and regulators, and allows businesses to focus on their day-to-day operations.

Additionally, organisations can rethink their own hardware estate. Many companies have low utilisation rates because they purchased additional equipment in anticipation of server load spikes. With servers in the cloud, on the other hand, hardware utilisation is consolidated, so machines can be operated at high efficiency.

Overall, cloud computing is expected to prevent the emission of more than a billion tonnes of CO2 between 2021 and 2024, as large cloud data centres manage power demand and cooling more effectively and deploy more energy-efficient servers.

Efficient cooling processes

Cooling systems account for about 40% of a data centre's energy consumption. However, efficient air and liquid cooling systems can reduce power draw while protecting temperature-sensitive equipment.

Most computer room air conditioning (CRAC) systems use standard fans whose speed cannot be adjusted to match the data centre's heat load. The sustainable alternative is a variable-speed system, which draws only the power it needs, setting fan speed based on room temperature analysis. A low CPU load on the servers therefore translates directly into lower fan speeds, cutting power consumption by up to 20% with variable-speed fans.
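The outsized savings from modest speed reductions follow from the fan affinity laws, under which fan power scales roughly with the cube of rotational speed. A minimal sketch:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law: power scales roughly with the cube of fan speed."""
    return speed_fraction ** 3

# Slowing fans to 80% of full speed needs only about half the power
print(round(fan_power_fraction(0.8), 2))  # 0.51
```

This is why matching fan speed to heat load, rather than running fixed-speed fans flat out, pays off so quickly.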

Liquid cooling is more effective than fans. Here, the cooling liquid is directed from one component to another through a closed system of hoses to cool the systems. The pumps required use less energy than the fan systems in conventional CRAC/CRAH units. Solutions vary from heat exchangers at the back of racks to direct-to-chip cooling. While liquid cooling is significantly more expensive to purchase than common air-cooling systems, it operates much more efficiently and conserves resources. In addition, it enables better space utilisation in the data centre.

More recently, a public swimming pool in Devon started utilising the heat produced by a nearby data centre to reduce its rising costs. The computers within the centre are immersed in oil that captures the heat generated by the machines; this heat is then transferred via a heat exchanger to warm the water in the pool. This 'digital boiler' will dramatically reduce the pool's energy bills and carbon footprint. Very nifty!

Office space

In an ideal world we would have all contacted Mystic Meg so that she could have predicted the massive shift to remote work caused by the Covid-19 pandemic – but that’s not how life works.

During the pandemic, most organisations switched to remote working to protect their employees' health and wellbeing. But remote work also had an adverse effect on company culture and interpersonal relationships within teams – so most organisations are now looking for a safe, collaborative option that restores team spirit while keeping the benefits of remote work. By now, most companies have the necessary technologies in place, such as a virtual desktop infrastructure (VDI). This allows all employees to access desktops, applications, and services from home as well as the office, benefiting from less commuting and more time for family.

Companies are thus able to rethink their office space needs and, depending on the concept, can choose a smaller office, reconfigure space for other purposes, or go fully remote. Downsizing office space not only reduces energy costs for lighting, heating and air conditioning, but also cuts the CO2 emissions of employees, who commute less.

What’s next for organisational ESG?

ESG is constantly evolving. A lot of ongoing work is therefore needed before ESG data can be held to the same standard as data in more mature areas of the business.

However, thanks to increased emphasis from governments and businesses working together, there are many opportunities to seize. Despite the challenges, harnessing state-of-the-art technology and cloud capabilities can deliver substantial benefits and drive the shift to a greener, more ethical way of operating a business.

Q2 2023 www.datacentrereview.com 23 GREEN IT & SUSTAINABILITY

Sustainable software

In 2023, it’s vital for organisations to develop sustainable ways of working. ESG targets are not only front of mind for businesses – but consumers are also increasingly opting for companies who align with their eco-conscious values.

Some carbon reduction methods are obvious – like increasing renewable energy, or moving to electric vehicles when possible – but one of the biggest contributors to an organisation’s carbon emissions is their technology stack.

The cloud revolutionised the way companies store and share data – and ultimately how they do business – but from a sustainability point of view, minimising its impact is still a work in progress. Research shows that the cloud now has a greater carbon footprint than the entire aviation industry. A single data storage centre can require as much energy as 50,000 homes.

This kind of carbon output is unsustainable. It’s clear that something has to change, and as the world becomes more and more reliant on the web, that change must begin with software, digital and technical delivery teams. Businesses need to be thinking right down to the design of their software when it comes to delivering on sustainability. By embedding sustainability into design principles, it’s possible to reduce the carbon cost of web applications, drastically improving the energy efficiency of the web. Small changes can make a big difference. Here’s how to get started:

1. Move to the public cloud

The location of a business' applications can have a profound effect on the overall environmental impact of their infrastructure. On-premises software, which is installed on a company's own servers and hosted locally, can be as much as 93% less energy efficient than cloud computing, according to a study from Microsoft.

As well as being more flexible, and notably more affordable, cloud software is stored and managed by a third-party provider, often requiring less physical hardware to run large workloads, or to store large quantities of data.

Additionally, higher utilisation rates are achieved as hardware doesn’t sit idle. The higher the workload per server, the better the energy efficiency, with less wastage and a lower carbon production per customer.

2. Choose a renewable energy data centre

Not all data centres are built the same. The energy required to operate a data centre is massive – there are the obvious energy costs of running server equipment, of course, but data centres also require impressive cooling systems to keep servers from overheating.

In fact, it’s thought that by 2030, data centres will collectively be consuming 13% of global electricity – more than some countries.

That's why it's vital to select a data centre or cloud provider that uses renewable energy sources as much as possible. Given that data centres are so resource-heavy, ensuring they are powered by renewable energy rather than fossil fuels will have a big impact on the overall sustainability of your software.

Adam Coles, Head of DevOps and Cloud Engineering at Opencast, explores what businesses need to consider when designing and building planet-friendly green software.

3. Use green software kits

Green software is designed in such a way that it increases energy efficiency – meaning that there are changes you can implement in the design process to make a positive change regardless of where your data is hosted.

Green software development kits (SDKs) can provide information to guide greener behaviours. For example, SDKs can provide data about the current or expected future carbon intensity of the data centre's local grid, so tasks can be paused during carbon intensity spikes or transferred to data centres running on lower-carbon power. This is especially useful for lengthy tasks, such as training machine learning models, which can be resumed during non-peak times to lower the carbon load.

Arming yourself with the ability to measure and then respond to the circumstances allows developers to avoid inadvertent high-energy usage and lower the carbon impact of individual jobs.
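The pause-and-resume pattern described above can be sketched in a few lines. The `grid_carbon_intensity` stub below is an assumption for illustration – real SDKs, such as the Green Software Foundation's Carbon Aware SDK, expose similar carbon-intensity queries:

```python
import time

def grid_carbon_intensity() -> float:
    """Current grid carbon intensity in gCO2e/kWh (stubbed for illustration)."""
    return 180.0

def run_when_grid_is_clean(task, threshold=200.0, poll_seconds=900):
    """Defer a pausable task while carbon intensity exceeds the threshold."""
    while grid_carbon_intensity() > threshold:
        time.sleep(poll_seconds)  # wait for a cleaner grid before resuming
    return task()

result = run_when_grid_is_clean(lambda: "training step done")
print(result)
```

The threshold and polling interval would be tuned to the workload; the point is simply that deferrable jobs gain a carbon-aware gate without any change to the job itself.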

4. Use architectural patterns

In a serendipitous way, lower cost and energy efficiency often go hand-in-hand. This is the case with data centres, but it is also true of architectural patterns and programming languages. That’s good news for delivery and technical teams, as it can make selling energy efficiency to the c-suite much easier.

Using sustainable architectural patterns means utilising tried and tested solutions that will reduce the emissions of your software, often with the added benefit of reducing your run costs.

Of course, balance has to be sought. Employing software frameworks and patterns saves energy, but it won’t always match your needs. Similarly, the programming languages in use can affect energy efficiency, but may not always suit your developers’ requirements. Python, one of the most popular languages, is much less efficient than Rust – though it is often considered more productive for developers.

5. Measure your efficiency

Whatever choices you make, it’s vital to continually monitor their impact on your energy efficiency.

Cloud providers share dashboard tools that can give you an idea of your annual carbon footprint, and resources like carbon calculator websites and individual toolkits can help to give an idea of the carbon intensity of your builds. Armed with this knowledge, developers and operations teams can see where and when it’s possible to affect their environmental impact, and work towards consistent optimisation.
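The arithmetic behind those calculators is straightforward: energy consumed multiplied by the grid's carbon intensity. A rough sketch, with illustrative figures rather than measured ones:

```python
def carbon_footprint_kg(kwh: float, intensity_g_per_kwh: float) -> float:
    """Estimate operational CO2e in kilograms from energy use and grid intensity."""
    return kwh * intensity_g_per_kwh / 1000.0

# e.g. a workload drawing 500 kWh per month on a 200 gCO2e/kWh grid
print(carbon_footprint_kg(500, 200))  # 100.0 kg CO2e per month
```

Tracking this figure per build or per service over time is what turns one-off estimates into the consistent optimisation described above.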

Little changes for a big impact

No matter how hard we try, it won't always be possible to deploy the most efficient cloud practice, or strip software all the way back. It's important to keep the end-user in mind and remain aware of accessibility in all builds – an application that is unusable consumes energy without delivering any benefit.

But, by embracing some of the principles of green software development, enterprises can begin to have a sizeable impact on their carbon emissions – benefiting their business, and the planet too.


Going minimal


From both an environmental and a bottom-line perspective, data waste is staggering. Shockingly, poor data storage practices are costing the private sector close to £3.7 billion each year, with up to 70% of the data generated by organisations seemingly destined to remain unused. Causes vary from data being trapped in silos, conflicting formats, or simply serving no purpose beyond its original creation. This not only results in a squandering of valuable resources but also highlights the urgent need for better data management strategies to optimise usage and minimise waste.

One answer could be to migrate data and workloads to the public cloud, utilising the resources of hyperscalers such as AWS, Azure, and Google Cloud. Thanks to their ability to operate at scale and invest in cutting-edge data centre infrastructure, these providers are potentially better equipped to swiftly progress towards achieving net zero targets.

But, migrating data isn’t the only answer. Businesses also need to consider data minimalism when using the cloud.

Data minimalism

For businesses seeking to minimise their environmental impact and lower costs, building a strong digital presence and cultivating digital assets are crucial.

As they become more data-driven and improve their business processes, they must create an environment that encourages the use of digital technology. Adopting a data strategy with data minimalism as its foundation is a systematic and effective way to achieve this.

The exponential increase in energy costs across the UK has brought a significant challenge for data-driven businesses in the form of mounting costs of managing operations. The truth is, data is not free, and neither is its upkeep.

By 2025 the global ‘datasphere’ is estimated to reach a staggering 180 zettabytes (according to Statista), and the cloud computing market is expected to hit $1.95 trillion by 2032. Both of these estimations lead us to one conclusion: the datasphere will continue to grow and with this, it’s likely that by 2030 we may surpass a yottabyte (equivalent to a million trillion megabytes) of data generated in a single year.
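Those units are easy to lose track of, so a quick sanity check on the arithmetic:

```python
# 1 ZB = 10**21 bytes = 10**15 MB; 1 YB = 1,000 ZB = 10**18 MB,
# i.e. a million trillion megabytes, as stated above.
MB_PER_ZB = 10**15
ZB_PER_YB = 1_000

datasphere_2025_zb = 180  # the Statista estimate cited above
print(datasphere_2025_zb / ZB_PER_YB)  # 0.18 of a yottabyte
```

Even at 180 ZB the 2025 datasphere sits at under a fifth of a yottabyte, which shows how much further growth the yottabyte-per-year scenario implies.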

The repercussions of this data explosion are already being felt, as last summer witnessed a surge in electricity consumption, pushing data centres – which consume the energy equivalent to thousands of households – dangerously close to overloading London’s power grid and causing potential power outages in housing developments.

That is why it is more important than ever to first closely examine how to reduce the data businesses hold before moving this into potentially more sustainable cloud environments to ultimately reduce their carbon emissions.

The strategic value of data

The value of data is, however, beyond dispute. The way we generate and use data isn't going to disappear from our lives, so businesses must find ways to identify potential infrastructure savings and eliminate waste.

By only storing necessary data, businesses can reduce costs and make strides towards meeting Environmental, Social and Governance (ESG) targets. It's time for businesses to prioritise sustainability and embrace the power of digital technology to build a better future for us all.

A practical approach to data minimalism

So how can businesses refine and evolve their data strategy to ensure they are practising data minimalism? Well, it starts with truly understanding the data footprint they have.

This includes identifying how much data is held on-premises or in other locations, as well as determining what data is necessary to collect and how it's being used. Visibility across your data stack is essential for on-premises, hybrid cloud and cloud environments alike – it helps identify unnecessary, inactive, and duplicated data, freeing up storage space and subsequently reducing carbon emissions.

Businesses should also differentiate between mission-critical data and data that can be retained on an archive basis. Once they have this understanding, they then can make informed decisions about their data strategy.
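One simple way to operationalise that split is to tier datasets by how recently they were accessed. The thresholds below are arbitrary examples for illustration, not vendor guidance:

```python
from datetime import datetime, timedelta

def storage_tier(last_accessed: datetime, now: datetime) -> str:
    """Classify a dataset by access recency (illustrative thresholds)."""
    age = now - last_accessed
    if age <= timedelta(days=30):
        return "mission-critical"   # keep on primary storage
    if age <= timedelta(days=365):
        return "archive"            # move to cheaper, colder storage
    return "review-for-deletion"    # candidate data waste

now = datetime(2023, 6, 1)
print(storage_tier(datetime(2023, 5, 20), now))  # mission-critical
```

In practice the rules would also weigh regulatory retention requirements and business value, but even a crude access-age policy surfaces the unused 70% quickly.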

Bringing on a trusted advisor can be helpful at this stage to provide extra support and identify ways to improve efficiency, such as consolidating storage locations.

While cloud computing isn’t a complete fix for environmental challenges, in many cases it’s still a step in the right direction. By tapping into vendor expertise, businesses can make smarter decisions about their data, cut energy costs, and reduce their carbon footprint. It is why cloud solutions offer a potentially more cost-effective and eco-friendly way forward, making sure that companies can minimise their impact on the planet while still meeting their data needs.

Matt Watts, Chief Technology Evangelist at NetApp, explains how cloud computing could help pave the way for the future of sustainable business.

IT racks – what part do they play in your data centre?

nVent explores the crucial role played by server cabinet solutions in achieving efficient, cost-effective data centres.

Want to assess how a nation’s economy is performing?

Data centre infrastructure and equipment is a leading indicator because countries that have the right infrastructure in place are ready to face the needs of the digital economy.

But what are the main criteria for data centre operators to future-proof their data centres?

Availability and security of applications rank at the very top of the list of key requirements for data centre operation. Sadly, one crucial element of IT infrastructure that's often neglected is the 19in cabinet, despite its crucial role in protecting the sensitive electronics that process applications and data.

The IT cabinet is much more than a sheet-metal box or a couple of welded steel rods, but what are the relevant standards and aspects that need to be considered for a successful infrastructure setup?

Cabinet design

Among the sheer variety of network technology and server cabinet solutions available on the market, the 19in dimension is often the only standardised feature they all have in common. It is, in fact, the most important standardised value within the electronics industry – and more specifically, in IT infrastructure. The 19in cabinet not only makes it easy to fit various cladding parts, it also provides a flexible canvas for creating a whole range of cabinet configurations. Key cabinet design considerations include:

• Paint finish

IT cabinets are typically black or light grey, but there is a growing trend toward white – especially in larger installations. White may be more susceptible to scratches and staining, but bright surfaces reduce the lumen ratings required for room lighting, opening up the potential for efficiencies.

• Flammability

This requirement relates to plastic parts and gaskets whose flammability classification is tested in accordance with UL94. It requires the use of halogen-free materials that should, at minimum, meet the criteria of UL94 HB.

• Protected access

Beyond access control measures for buildings and computer rooms, the cabinet represents the last line of defence for data centre equipment. Operators must be aware of this, particularly for sensitive applications and data. For applications with more stringent security requirements, IT cabinets should be set up to accommodate a range of electronic locking systems.


• Dimensions

The 42 U or 52 U cabinet represents the most commonly used dimensions in data centres. Cabinets measuring 47 U (2,200 mm) are becoming an increasingly popular choice because of their highly efficient use of space, but it is important to remember that installing servers in the upper rack units of cabinets on this scale may require special tools.

When it comes to width, server cabinets with a 600 mm dimension are common, as many cabinets require only a very small amount of space for cabling. 800 mm is another standard width, designed for cabinets in which more space is required for cabling – and cabinets measuring 1,000 mm wide are also becoming more and more common, especially among operators of larger data centres.

Structured cable management

A structured cable management system is another essential requirement for any cabinet accommodating copper and fibre optic cables, reducing air resistance to increase the cooling efficiency of the equipment during operation. Appropriate cable management accessories such as brackets, channels, and trays must be used to ensure that cables are properly managed and tension-free. For mechanical accessories such as slide rails and shelves, operators should ensure that there are enough clearances, fastening areas, and assembly options to accommodate current distribution and environment monitoring equipment.

Air management

IT cabinets may be subject to mechanical requirements such as stability, security, and standards, but they also provide a key opportunity for boosting air management efficiency. Most common IT equipment is cooled from front to rear, in a system where cool air is drawn in at the front of the cabinet and then blown out toward the rear. Heat is absorbed by the air as it flows through the equipment, keeping electronics at the right temperature for operation.

Regardless of the cooling strategies selected for the equipment, the result is that a cool zone develops at the front and a warm zone at the rear. Maintaining efficient operation means minimising resistance in the airflow as well as preventing air short-circuits, which occur when cool supply air mixes with the warm exhaust air.

To keep air resistance to a minimum, operators must maintain as high an airflow rate as possible when using perforated doors. Cold aisle containment, a solution that is energy-efficient and involves very low investment costs, has emerged as the technique of choice for both new installations and retrofitting of existing cabinet rows. Since the containment door provides access control, there is no need to install cabinet doors in the aisle. Choosing the right panels and gaskets for a cabinet is an essential part of setting up a containment system. Quick cardboard fixes and makeshift gaskets can be avoided by specifying a flexible panel system from the start, when cabinets are being specified.

Room-based cooling cabinets

Recent market studies have shown that the average density per rack has increased significantly over the last 10 years, creating new challenges. Even with the separation of cold and warm exhaust air, traditional room-based cooling is reaching its limit.

Rack densities of 10 kW and higher are becoming more and more common. In combination with containment, such densities can still be addressed by room-based cooling. For even more demanding requirements, the cooling moves closer to the IT assets: rack-based rear-door cooling solutions allow high densities of up to 40 kW per rack, with an air-to-water heat exchanger replacing the rear door of the server rack.

HPC applications go a step further with direct-to-chip cooling. To support liquid flow through a cold plate positioned at component level inside the IT assets, a manifold at the rear of the rack is required. The manifold is connected to a CDU (Coolant Distribution Unit) – a liquid-to-liquid heat exchanger that can be rack-integrated or stand-alone.

Leading IT racks should be ready to support the new ways of cooling. Stability, fixing points as well as enough space for power distribution and data cable management are key.

Adapting to data centre requirements

As requirements have changed over time, IT cabinet design keeps evolving to accommodate them.

Pre-engineered cabinet solutions can help to reduce data centre planning effort and shorten implementation time. Such configurations already include the mandatory accessories that operators would otherwise have to specify themselves.

By carefully selecting a cabinet configuration that matches the desired application, operators can achieve significant cost savings and set up an IT infrastructure suitable for housing future generations of electronic components. For more information, visit https://go.nvent.com/CP-Datacom.html

SPONSORED FEATURE

Driving sustainability

Data centres have become the infrastructure basis for digital transformation globally. At both ends of the spectrum –from hyperscalers with their ability to deliver enormous computing power and high efficiency, to micro data centres positioned to provide real-time applications and data access at the edge – these facilities are proliferating.

But as the world settles into the hybrid working pattern that is the legacy of the pandemic, the consequences of data centre expansion and increased dependence on data processing to fulfil both remote and on-site digital requirements are being sharply felt. According to IEA statistics, global data centre use in 2021 was 220-230 TWh, approximately 0.9-1.3% of global final electricity demand. While the global figure shows moderate growth, in countries with expanding data centre markets, the consumption of electricity in these facilities is tripling year on year.

Energy efficiency, therefore, has been rapidly pushed up the agenda at data centres, vying for top position alongside reliability, resiliency and security. To address the issue, data centres are working hard to become more energy efficient, using recycled materials and reclaimed water and, as we saw recently, even capturing their own heat to warm swimming pools. Renewable energy from wind, biomass, tidal, geothermal and solar sources is helping with cooling and heating, but there are other routes data centres can take to save energy without compromising on performance.

Swapping out hot, power-hungry hard drives

One area that is now coming under sustainability scrutiny is the use of hard drives. Thousands and thousands of hard drives being used for storage in data centres contribute to high power consumption, primarily because they are usually in continuous operation. Many organisations are still relying on classic Hard Disk Drives (HDDs) which have been in use for decades. Advocates of this technology extended the popularity of HDDs over their more modern SSD counterparts by stressing their data capacity – and therefore price-performance ratio – but the dominance of the HDD is in decline.

There are good reasons for this: HDDs are more prone to physical wear and tear. SATA SSDs use the same interface as HDDs and offer better performance in both sequential and random read/write workloads. This makes SATA SSDs a great upgrade to give a boost to existing HDD-based hardware and servers.

The latest generation of SSDs, using the NVMe (Non-Volatile Memory Express) protocol over a PCIe interface, takes storage performance to another level. A single PCIe Gen3 x4 NVMe SSD can deliver the same sequential and random IOPS performance as five SATA SSDs (such as Kingston's DC500M). A smaller quantity of NVMe SSDs is therefore needed to match the performance of multiple HDDs or SATA SSDs, allowing data centres to save on overall power consumption.
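The consolidation argument can be put in rough numbers. The sketch below keeps the article's five-to-one NVMe-to-SATA performance ratio; the IOPS figures themselves are illustrative assumptions, not vendor specifications:

```python
from math import ceil

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Whole drives required to reach a target IOPS figure."""
    return ceil(target_iops / iops_per_drive)

# Illustrative figures preserving the 5:1 ratio described above
sata_iops, nvme_iops = 90_000, 450_000
target = 900_000
print(drives_needed(target, sata_iops), drives_needed(target, nvme_iops))
```

Fewer drives for the same target performance means fewer devices drawing continuous power, which is where the efficiency gain comes from.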

NVMe has transformed the data storage environment. Due to the massive performance gain over SATA drives, the adoption rate of NVMe SSDs keeps increasing year over year. The latest innovations in the data storage domain – new PCIe generations, new form factors – are being developed around NVMe SSDs, not SATA drives.

More storage, less power

For data centres, the ability to increase storage capacity whilst at the same time realising greater energy efficiency and lower latency is the ultimate sustainability goal.

There is no doubt that data centres will be looking to increase their storage estate in the future and the best option will be to consider sustainable options that provide them with more for less. It’s no longer acceptable to focus just on traditional considerations such as speed and storage capacity. Today’s data centre operator must also look closely at the environmental impact of the storage solutions they choose. The data centre industry must proactively demonstrate its commitment to reducing its considerable carbon footprint in response not just to customer demands but to progressively rigorous industry standards too.

Building the deployment of SSDs into an energy-efficient, tiered storage approach to optimising costs and energy usage in data centres is a win-win for customers and the environment. It also enables IT leaders to satisfy growing stakeholder demands for a sustainable approach to data storage and management.


Get connected

In the fast-paced world of business, companies are constantly searching for ways to gain a competitive edge. For enterprises, one of the key components of success is having a strong and reliable connection to their partners and customers across the globe. In order to achieve this, they need the support of colocation providers who can help them navigate the complex world of connectivity.

Colocation providers must ensure they are meeting the current and future needs of their enterprise customers as effectively as possible. This includes supporting them in solving larger connectivity challenges across broadly distributed geographies, increasing performance, strengthening security and resilience, reducing complexity, and increasing control of compliance within their partner ecosystem. This requires the ability to control infrastructures and data flows, and data centre operators can help them do this.

However, data centres need more than just power and space. They need to offer local access to strong network density and diversity, scalable and customisable interconnection services, and wide geographical coverage of interconnection infrastructure. Achieving all of this on their own is a daunting task for any operator. That’s where the importance of a healthy and vibrant interconnection ecosystem comes in.

Data centre operators need to ensure they are part of a larger interconnection ecosystem to meet customer demands for seamless and efficient interconnection, regardless of location. The most effective way to achieve this is through the presence of an Internet Exchange (IX) in the data centre.

Ivo Ivanov, CEO of DE-CIX, explains how data centre operators can create competitive advantages through interconnection services.

The virtual MMR

Imagine the data centre as the hub of a bustling metropolis. Just as a city needs roads, bridges, and transportation networks, data centres need a network infrastructure to connect partners, cloud services, resources, and applications. Network density is what makes this possible, allowing data centres to become part of a wider ecosystem instead of catering solely to their own local customers.

They can achieve this by joining an open and distributed IX ecosystem – one that is data centre and carrier neutral – where as many networks as possible converge. Here, the neutrality of the ecosystem is central, bringing with it the diversity of providers and their manifold customers. This increases the network density at the data centre, improving the latency and resilience of connections to create a competitive advantage.

Let’s take an example. A distributed IX in a busy city like New York might be accessed by well over a hundred points of presence (PoP). Because the IX is neutral and distributed, not only all the major data centre operators, but also smaller players are present on the interconnection fabric. Every single network present in a data centre connected to the fabric in this market can reach any other network connected in any other location in the market.

Distributed IXs might also exist in places like Dallas, Chicago, or Phoenix, and the combined power of all of these data centre operations can add value to customers by strengthening the density of the network. A network connected in any one of the hundreds of PoPs in North America would then be able to access and interconnect directly with thousands of local and global networks. In effect, a data centre plugs itself and its customers into the largest data centre and carrier-neutral ‘virtual MMR’ in the world.
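The value of an exchange fabric has a simple combinatorial basis: for every pair of n networks to peer bilaterally, n(n-1)/2 point-to-point links are needed, whereas connecting through a shared fabric needs only n. A quick sketch:

```python
def bilateral_links(n: int) -> int:
    """Point-to-point links for every pair of n networks to peer directly."""
    return n * (n - 1) // 2

def ix_links(n: int) -> int:
    """Links needed when all n networks connect via one exchange fabric."""
    return n

# e.g. 100 networks in a single metro market
print(bilateral_links(100), ix_links(100))  # 4950 100
```

This is why network density compounds: each new network joining the fabric gains reachability to every existing participant at the cost of a single connection.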

Added value through modern interconnection services

Enterprises are also looking for interconnection services for a variety of specialised use cases. They need direct, dedicated, and multi-homed access to their resources and applications sourced from a variety of clouds, and customised private interconnection services across multiple geographical regions or continents, with enhanced security and compliance features that support their zero-trust network strategy. They also need simplicity in the booking and cancelling of interconnection services and cloud connectivity, as well as the enablement of an interconnection fabric API that they can embed in their own systems.
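As a sketch of what embedding an interconnection fabric API into an enterprise's own systems might look like, the snippet below builds and submits an order for a virtual circuit. The endpoint, field names and client here are entirely hypothetical and do not represent any real provider's API:

```python
import json
import urllib.request

# Hypothetical order payload for a private virtual circuit between an
# enterprise port and a cloud on-ramp. All field names are illustrative only.
def build_circuit_order(a_end_port: str, z_end_service: str,
                        bandwidth_mbps: int) -> dict:
    return {
        "a_end_port": a_end_port,        # enterprise access port
        "z_end_service": z_end_service,  # target cloud on-ramp
        "bandwidth_mbps": bandwidth_mbps,
        "billing": "monthly",            # booked (and cancelled) on demand
    }

def order_virtual_circuit(api_base: str, token: str, order: dict):
    # POST the order to the (hypothetical) fabric API endpoint.
    req = urllib.request.Request(
        f"{api_base}/v1/virtual-circuits",
        data=json.dumps(order).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

order = build_circuit_order("port-nyc-01", "cloud-onramp-aws", 1000)
```

The point is the workflow, not the specific fields: booking, scaling and cancelling connectivity become ordinary API calls that can be embedded in the enterprise's own provisioning systems.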

To provision all of this, data centre operators will need to work with an interconnection partner that can deliver customised, low-latency, and secure interconnection services, integrated as part of a large ecosystem of thousands of networks and hundreds of data centres. This ecosystem must be well distributed – as local as possible, and as global as required.

The time has come for data centre operators to take action on interconnection and integrate the concept of an IX into their business plan. What approach they should take depends on their level of in-house knowledge, their location, and their business strategy:

The DIY IX: Doing it yourself requires consistent effort and specialist interconnection knowledge. The more the ecosystem grows, the more gravity it will generate, making it more attractive for further networks to join. However, developing that initial gravity can be challenging. Furthermore, if the platform is only available in the data centre facilities, it is a closed environment that does not offer the benefits of a broader interconnection ecosystem.

Partner with an interconnection specialist: An interconnection specialist can offer the know-how and experience, as well as a readymade ecosystem and an immediate offering for data centre customers. To get the most out of interconnection ecosystem-building, an open and neutral partner is needed.

If the data centre is near an existing open and neutral IX, the best option would be to become an enabled site of that IX. Doing so embeds the data centre within this ecosystem to capitalise on its interconnection gravity. The data centre’s portfolio can thus be enriched with interconnection services, offering customers and prospects access to a diverse ecosystem of network, content, cloud, and application providers, as well as interconnection services specifically designed for the needs of enterprises.

If the data centre is operating outside a metro area, the simplest alternative would be to become a connectivity partner for an existing exchange within its closest metro area. Here, the data centre is connected to the chosen (data centre and carrier neutral) exchange, so that it can transport its customers to the interconnection platform. This brings the benefits of the existing ecosystem right to the data centre.

IX as a Service: Another alternative would be to investigate IX-as-a-Service solutions. Along with the advantages of having seasoned experts build and operate an IX hosted in the data centre, there are added benefits if the IX can be integrated into the IX operator’s own ecosystem.

A solution to the challenges of digital transformation

In the face of increasing demands on data centre interconnection, the only way forward is to join together with others. Whichever way a data centre operator chooses to approach interconnection, when it comes to nurturing a diverse and healthy ecosystem, it is clear that openness and neutrality enable a heightened value proposition for customers. Increased network density and customised interconnection services provide a competitive advantage and add value to data centre facilities. There is strength in numbers that vastly surpasses what each of us can achieve in isolation.


To the cloud and back

Infrastructure Product Management at Aptum explores the challenges of moving from colocation to a hybrid model, and why many are considering a more holistic approach.

Organisations are increasingly spending more on cloud computing, with public cloud services being a key focus for many companies. Worldwide end-user spending on public cloud services is expected to grow 20.7% to $591.8 billion in 2023, up from $490.3 billion in 2022.

More and more businesses are moving applications from on-premises and colocation to multi and hybrid cloud solutions. There are many reasons for this: affordable security, scalability, flexibility and business agility, to name a few. Data from our 2022 Cloud Impact Study indicated that hybrid cloud dominated cloud approaches, with 86% of companies adopting or planning to adopt a hybrid or multi-cloud strategy. For context, a hybrid cloud is defined as a mix of traditional infrastructure environments with at least one cloud provider, while multi-cloud includes at least two cloud providers in the mix.
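A quick check confirms the quoted growth rate:

```python
# Verify the forecast growth in worldwide public cloud spend quoted above.
spend_2022 = 490.3  # $bn
spend_2023 = 591.8  # $bn, forecast
growth_pct = (spend_2023 - spend_2022) / spend_2022 * 100
print(round(growth_pct, 1))  # 20.7
```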

But, despite these benefits, this migration must be carried out carefully. There are certain challenges and hurdles that businesses need to consider when developing an effective cloud strategy, particularly a multi/hybrid cloud approach that spans different locations.

Changing cost landscapes

The move from traditional IT infrastructure to the cloud brings with it the possibility of cost savings, as organisations make the well-documented shift in IT spend from a Capital Expenditure (CapEx) model to an Operational Expenditure (OpEx) model.

Companies are presented with opportunities to pay only for resources that are actively in use in the cloud, rather than ‘over-purchasing’ equipment in year one of a CapEx procurement cycle to ensure they don’t run low on resources in year five of that same cycle. The same model presents the possibility to ‘burst’ resources at times of peak demand, allowing organisations to ensure that resource constraints don’t adversely impact the user experience of their customers.
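The CapEx-versus-OpEx trade-off described above can be made concrete with some back-of-the-envelope numbers (all figures are invented for illustration, not market prices):

```python
# CapEx: buy year-five peak capacity up front. OpEx: pay for what is used
# each year, bursting at peak. All figures are illustrative.
peak_units = 100                            # capacity needed at the year-5 peak
avg_units_per_year = [40, 50, 60, 70, 80]   # actual average utilisation, years 1-5

capex_unit_cost = 1000          # one-off purchase cost per unit of capacity
opex_unit_cost_per_year = 300   # pay-as-you-go cost per unit per year

capex_total = peak_units * capex_unit_cost  # over-purchased in year one
opex_total = sum(u * opex_unit_cost_per_year for u in avg_units_per_year)

print(capex_total)  # 100000
print(opex_total)   # 90000 - pays only for capacity actually used
```

Whether OpEx actually comes out cheaper depends entirely on the utilisation profile and unit prices, which is precisely why the migrations discussed below need planning rather than assumption.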

Poorly planned migrations

With this changing cloud landscape and the shift from CapEx to OpEx, we have seen many businesses make the mistake of attempting a simple ‘lift and shift’ approach to migration – something that is often not so simple. This involves replicating legacy infrastructure like-for-like in the public cloud, and it is where many businesses trip up by failing to use the cloud to its full potential.

For a start, the applications being migrated need to be scalable. This usually requires re-architecting them, planning the deployment and making sure applications are cloud-ready so they can be scaled effectively. Doing this correctly takes time and skill, and cutting corners can prove costly further down the line.

Increasingly, organisations are realising that by doing this lift and shift to the public cloud, they haven’t achieved the results they wanted. The promised cost savings often fail to materialise and organisations are left disillusioned by their public cloud experience. This problem is largely due to the fact that they haven’t re-architected their ecosystems effectively. This frequently results in non-elastic solutions or the realisation that there is a more appropriate approach that could be used in their deployments.

For example, organisations often reflect on decisions such as storage location choices when they realise that the wrong storage tier or location means they cannot meet SLAs or data residency requirements. This can prompt a retrospective re-architecting of the environment, which often reveals that a hybrid approach to service delivery is more appropriate. Concerns about integrating cloud and legacy applications were evident in our 2022 study, with 44% of respondents worried about carrying this out properly.

Specialised skills gap

The UK currently faces a significant digital skills shortage, with only 11% of UK workers possessing advanced digital skills. In 2022, seven in 10 businesses said they wanted to accelerate their cloud adoption but lacked the internal expertise to do so.

The problem is that many infrastructure engineers are highly skilled at working in colocation but haven’t yet been upskilled to manage hybrid cloud. Without the right talent internally, it is very difficult to manage a migration to hybrid cloud correctly and effectively. This is why organisations tend to lean on third-party providers such as MSPs and managed hosting providers: they offer the skills a business lacks internally and can help bridge the gap between colocation and cloud.

Hybrid: the best of both worlds?

Businesses that opted for the lift-and-shift approach a few years back are frequently realising they haven’t made the cost savings they expected from their move to OpEx. Their decision to put off re-engineering software systems during migration, often seen at the time as a cost saving, ultimately means legacy systems were shoehorned onto the public cloud when they were never designed to work effectively on public cloud infrastructure.

Although this does perhaps extend the life of legacy systems, it doesn’t fundamentally deliver any improvement in customer experience – indeed, users sometimes encounter an inferior one. When reviewing their software systems, companies often decide to dovetail application modernisation with the introduction of a cloud-based ERP or CRM system, ensuring both effective cloud utilisation and the delivery of an up-to-date software experience for their users.

The flexibility and scalability that hybrid cloud offers is one of the main reasons that enterprises are moving towards hybrid cloud environments. The ability to choose where to host each part of your applications in order to deliver optimum performance and meet specific data sovereignty and security requirements, all while carrying this out in the most cost-effective way, is a compelling case for what hybrid cloud can offer. Application modernisation also affords organisations the opportunity to re-evaluate market offerings, sometimes providing the option of replacing part of their infrastructure with a SaaS solution and removing the requirement for infrastructure.

Which way is right?

The transition from colocation to hybrid that many businesses are undertaking should by no means be seen as easy. Hybrid cloud has many benefits, but these can only be realised with forethought and planning. There are also hurdles, such as scaling up and re-architecting programmes, which can be costly if handled incorrectly; a lack of internal specialised skills is the main cause of such missteps and a big challenge for many businesses.

But the transition doesn’t have to be so taxing on businesses. Employing the skills and support of MSPs will help enterprises to make the most of the cloud in terms of data residency, cost and flexibility.


Industry Insight: Out in the open
Tate Cantrell, CTO at Verne Global, explains the importance of transparent reporting for data centre sustainability.

Digital technologies are playing an increasingly important role in modern life, and data centres play no small part in this. But while demand is growing, so too will data centres’ global carbon footprint, if these organisations do not make a concerted effort to limit their emissions.

According to the International Energy Agency, data centres and data transmission networks account for nearly 1% of energy-related Greenhouse Gas (GHG) emissions. And while this may not seem like an enormous amount, these emissions will need to be halved by 2030 in order to meet net zero goals.

So it’s no surprise that, as the Uptime Institute notes in its latest global survey, data centre operators are under increasing pressure as they face heightened scrutiny, new regulations and further reporting requirements when it comes to sustainability.

Despite this, there are currently no standardised reporting requirements in place, which can lead to a lack of accountability, greenwashing and missed opportunities to improve efficiency.

The first step then, must be to encourage transparency in data centre sustainability reporting.

Current state of sustainability reporting

The data centre industry as a whole has a carbon footprint larger than the aviation industry – and yet it is only recently that sustainability has become a focus, in part driven by customer demand.

Although there is a desire in the global data centre industry for accurate and consistent sustainability reporting, any consensus still seems some way off. At present, there is a complete lack of formal regulation in terms of measurement, metrics, and indeed reporting – leading many environmentalists to accuse the industry of simply greenwashing.

Organisations that undertake specific measures such as carbon offsetting schemes may appear sustainable – but this is not necessarily the case.

Measuring energy efficiency and emissions

There are a variety of metrics data centre operators can use to measure their efficiency and, in theory, sustainability. Most common are PUE (power usage effectiveness), WUE (water usage effectiveness), and CUE (carbon usage effectiveness), as well as tracking Scope 1, 2 and 3 emissions.

All of these approaches have their benefits, especially when taken together. Measuring PUE, for example, is useful for benchmarking data centre efficiency: organisations can use it as a baseline, then measure the effect of any changes made, helping to reduce power consumption and energy costs overall. However, operators need to realise that focusing only on PUE, without considering the underlying efficiency of the computing itself, can lead to wasteful computing even as the PUE appears to improve.

Moreover, by using WUE in conjunction with PUE and CUE, data centres can reduce both the water and the electrical power needed to run the facility efficiently.
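All three metrics share the same shape: a facility-level quantity divided by the energy delivered to the IT equipment. A minimal sketch of the standard definitions, with invented sample figures:

```python
def pue(total_facility_energy_kwh: float, it_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_energy_kwh / it_energy_kwh

def wue(water_litres: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_energy_kwh

def cue(co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon Usage Effectiveness: kg CO2e per kWh of IT energy."""
    return co2_kg / it_energy_kwh

# Invented facility: 10 GWh total, 7 GWh to IT, 5 ML water, 2,000 t CO2e.
print(round(pue(10_000_000, 7_000_000), 2))  # 1.43
print(round(wue(5_000_000, 7_000_000), 2))   # 0.71 L/kWh
print(round(cue(2_000_000, 7_000_000), 2))   # 0.29 kgCO2e/kWh
```

Because they share a denominator, publishing all three together gives a far more complete picture than any one in isolation.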

The three scopes of emissions established by the Greenhouse Gas Protocol, meanwhile, are a way of categorising the different kinds of emissions an organisation creates, both in its own operations and as a result of its wider supply chain. In short, Scopes 1 and 2 cover emissions from sources owned or controlled by the organisation, whilst Scope 3 emissions are a consequence of the company’s activities but occur from external sources not owned or directly controlled by it.
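In practical terms, the three scopes are simply a categorisation applied to an emissions inventory. A toy example (the inventory entries and figures are invented):

```python
# Scope 1: direct emissions from owned sources (e.g. diesel generators).
# Scope 2: indirect emissions from purchased electricity.
# Scope 3: everything else in the value chain (suppliers, construction, travel).
inventory = [
    ("backup generator diesel",     "scope1",  120.0),  # tonnes CO2e, invented
    ("purchased grid electricity",  "scope2", 4500.0),
    ("server manufacturing",        "scope3", 2300.0),
    ("employee travel",             "scope3",   85.0),
]

totals: dict[str, float] = {}
for _, scope, tonnes in inventory:
    totals[scope] = totals.get(scope, 0.0) + tonnes

print(totals)  # {'scope1': 120.0, 'scope2': 4500.0, 'scope3': 2385.0}
```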

Data centres that measure and publish all of the above are already well on their way to creating a more transparent and sustainable industry – but more still needs to be done.

The need for greater transparency

Greater transparency in reporting will help everyone – customers, investors, and regulators alike – to evaluate how sustainable data centres actually are. In the past, things like Power Purchase Agreements (PPAs) may have been enough to satisfy end-users, but now, with the current climate crisis we face, it’s vital that the data centre industry does more.

Some larger data centre users and operators are already starting to make a change. Google, for example, aims to halve its combined Scope 1, 2 and 3 absolute emissions by 2030. However, the industry still has a long way to go: Google’s reporting obscures the fact that its Scope 1 and 2 emissions have increased by 152% over the last five years.

Greater transparency in data centre sustainability reporting would lead to several benefits both in the short and long term. As a start, it provides a source of accountability. As there are currently no reporting standards in place, data centres cannot be held accountable for their energy usage, emissions, or overall environmental impact. Ultimately, this leads only to further environmental harm.

The main issue is that data centres may be reporting and publishing information that is incomplete, misleading, or even simply untrue. This inaccurate sustainability reporting makes it difficult for customers to make informed decisions – so the whole industry ends up being less sustainable, even when there are genuinely sustainable data centres in the mix. What’s more, the lack of standards and requirements for reporting negatively impacts data centres themselves. Without any set approach to follow, data centres can easily miss vital information which might otherwise help them reduce their carbon emissions or improve their energy efficiency.

This lack of transparency, as well as the lack of any standardisation around how and how often the data centre industry conducts its reporting, ultimately means that many organisations’ ‘green’ initiatives amount to little more than greenwashing.

The journey to true sustainability

So, how can we reverse this trend? For a start, there are several helpful approaches emerging in the sustainability space – such as decarbonisation. Many PPAs and other offsetting projects are opaque and do not provide a clear perspective of where the energy is coming from and how green it is. Instead, matching electricity consumption with carbon-free energy generation from resources on the same local and regional grids at every hour of the day can result in a fully decarbonised electricity system.
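Hour-by-hour matching is a stricter test than annual offsetting: a facility can look fully renewable on paper while drawing fossil power overnight. The sketch below scores an invented 24-hour profile against daytime-only carbon-free generation:

```python
# Hourly matching: only carbon-free generation available in a given hour, on the
# same grid, can cover that hour's consumption. Both profiles are invented.
consumption = [100] * 24  # flat 100 MWh load every hour
solar = [0] * 6 + [40, 80, 120, 140, 140, 140,
                   140, 140, 120, 80, 40, 0] + [0] * 6  # daytime-only supply

# MWh of load covered by carbon-free energy in the same hour it was generated.
matched = sum(min(c, g) for c, g in zip(consumption, solar))
cfe_score = matched / sum(consumption) * 100

print(round(cfe_score, 1))  # 39.2 - despite substantial renewable generation,
                            # 12 hours have zero carbon-free cover
```

An annual accounting view would credit all the generation regardless of timing; the hourly score exposes the overnight gap that true 24/7 decarbonisation has to close.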

At the same time, it’s important to acknowledge the barriers that many data centre operators face on their journey to sustainability. For example, it is very difficult to source 100% renewable energy 24 hours a day. A more effective approach is to put energy-intensive operations in locations that can accommodate 100% renewable energy. In fact, for many customers, only 10% or less of their applications are truly latency-sensitive and need to be close to business operations; the other 90% can be housed anywhere. Optimised locations like Iceland and other low-carbon-intensity Nordic regions should therefore be the obvious choice.

Greenwashing – whether deliberate or not – makes it difficult for consumers and businesses to make informed decisions, and ultimately puts both data centres’ reputations and the environment at risk.

Starting with transparent reporting, the data centre industry must make the changes needed to genuinely improve and implement sustainable, energy efficient practices worldwide – ensuring supply is able to keep up with increasing demand without endangering the planet.


Industry expert takes the helm at Leoch Battery

Battery manufacturer Leoch Battery has appointed Mike Dunckley as its new president for Europe, Middle East and Africa.

Dunckley, one of the founding members of the Hawker Batteries Group, has more than 30 years of experience in the battery industry and an impressive knowledge of the sector, from lead to lithium. Leoch Battery, which recently also announced it had become the 122nd member of the Consortium for Battery Innovation, is now planning to build a new manufacturing plant in Mexico, to assist in boosting its presence in North and South America.

Sales in the Americas have increased by half a billion in the last 12 months.

Leoch Battery operates its corporate headquarters in China but has expanded worldwide with its 16 subsidiary offices globally, including locations in Singapore, the US and the UK.

The expansion into Mexico further supports the company’s desire to continue to dominate both new and existing markets.

ABB introduces CogniEN

CogniEN is a product-vendor-agnostic cloud solution for cognitive electrical networks from ABB Electrification. It will allow operators to pinpoint problems at their electrical installations with greater accuracy, deploy targeted maintenance teams faster and better understand how their facilities are performing, 24/7, from anywhere in the world. Not only can the Amazon Web Services-hosted system remotely monitor the health of ABB electrical devices powering infrastructures, but it can also retrieve data from any third-party electrical device and upload it to the cloud in real time.

Highly flexible, the system can be customised depending on how much data the operator requires. The most basic subscription offers access to the raw data alone, while a more advanced setup will send an alarm if an asset is about to fail, allowing maintenance engineers to address the problem before it impacts the facility’s operation.

Schneider introduces Easy UPS 3-Phase Modular

Schneider Electric has introduced the Easy UPS 3-Phase Modular, designed to protect critical loads while offering third-party verified Live Swap functionality.

Easy UPS 3-Phase Modular is available in 50-250 kW capacity with N+1 scalable configuration, and supports the EcoStruxure architecture, which offers remote monitoring services.

Easy UPS 3-Phase Modular enables customers to lower their capital expenditures through an optimised capex model. In addition, scheduled downtime is reduced through self-diagnosing third-party certified Live Swappable power modules and static switch, thereby increasing reliability and availability in a compact footprint.

Easy UPS 3-Phase Modular is designed to be easy to select, configure, install, and maintain, making the deployment process seamless. Easy UPS 3-Phase Modular is part of Schneider Electric’s Easy UPS 3-phase product portfolio, which focuses on core features to meet the needs of customers at a value price point.

Schneider Electric • www.se.com/us/en/

Leoch Battery UK • www.leochbattery.co.uk • 01858 433 330


Maximizing performance & efficiency in Data Centres

From comms rooms to colocation & hyperscale

FläktGroup delivers engineered Indoor Air Quality & Critical Ventilation solutions around the world

Our DENCO range has been leading the Data Centre cooling industry for over 4 decades

1.1 Gigawatts of Cooling

Supplied to the world's Data Centres in 2022


Our DENCO product range is an exceptional line of precision cooling systems tailored to your needs. All products are designed for optimal performance and minimal energy and water consumption.

Data Centre Cooling

Our comprehensive product portfolio includes CRAC, CRAH, Fanwall, Free-cooling Chillers, Adiabatic Dry Coolers, and more. At FläktGroup, our goal is to support your data centre with tailored, end-to-end solutions that maximize uptime, enhance sustainability, and optimize operational efficiency.

Cooling Solutions up to 2 Megawatts

Ultra-DENCO® Hydro-DENCO®

Just say where you want it...

We’ll take care of the rest.

EcoStruxure™ Micro data centres from Schneider Electric™ bring together power, cooling, physical security, and management software and services into pre-packaged rack solutions that can be deployed globally in any environment.

• Allows for rapid IT deployment wherever and whenever it is needed in weeks, not months.

• Reduce service visits and downtime.

• Securely manage system from anywhere.

Explore EcoStruxure™ Micro Data Centre from Schneider Electric

