DCR Q4 2021
www.datacentrereview.com

Green IT & Sustainability: Things are changing – for the better (page 14)
Edge Computing: Lithium-ion to the rescue? (page 20)
Final Say: Let's get flexible (page 38)

Contents

04 • Editor's Comment
So long, and thanks for all the fish.

06 • News
Latest news from the sector.

Features

10 • Artificial Intelligence & Machine Learning
Airedale International's Jon Martinez explores three ways in which AI can help data centres reach their efficiency and sustainability goals.

14 • Green IT & Sustainability
Russ Barker at Vertiv discusses how the data centre sustainability narrative is changing.

20 • Edge Computing
Chris Cutler of Riello UPS explores whether lithium-ion batteries are the ideal energy storage solution to power the next generation of edge computing applications.

24 • Data Centre Design
Louis McGarry at CENTIEL UK explores how modular whole concept data centres could help reduce total cost of ownership.

Regulars

31 • Industry Insight
How can you achieve near-zero downtime migration for SAP systems? Eamonn O'Neill at Lemongrass Consulting explains.

37 • Products
Innovations worth watching.

38 • Final Say
In a world where micro data centres need to be resilient, agile and often hybrid, a lack of flexibility is causing disruption at the edge, says Andy Connor at Subzero Engineering.


Editor's Comment
So long, and thanks for all the fish

In case you missed the news earlier this month, this issue will be the last to involve Claire Fletcher, who has moved on to pastures new and the next exciting chapter of her life. Claire steered DCR through the good, the bad – and occasionally bonkers – times of the last few years, and we'll all miss her frank and sometimes cynical take on the world. A well-earned thank you goes out to Claire and we wish her all the success for the future. But she's still here in spirit, at least for this issue, having brought it almost to the point of publication – so the Q4 DCR serves as a fond farewell.

Looking to the future, I'll be taking up the mantle as editor of DCR. I'd like to really get to know you, our readers, and find out what news and views you're most interested in reading, to make sure we're covering the most important trends, topics and concerns facing the industry. So keep an eye open for ways in which we'll be reaching out to our readership to get your opinions. In the meantime, please feel free to drop me an email at kayleigh@datacentrereview.com, and don't forget to follow us on Twitter @dcrmagazine.

Kayleigh Hutchins, Editor

EDITOR

Kayleigh Hutchins kayleigh@datacentrereview.com

CONTRIBUTING EDITOR

Jordan O’Brien jordano@sjpbusinessmedia.com

DESIGN & PRODUCTION

Alex Gold alexg@sjpbusinessmedia.com

GROUP ACCOUNT DIRECTOR

Sunny Nehru +44 (0) 207 062 2539 sunnyn@sjpbusinessmedia.com

ACCOUNT MANAGER

Kelly Baker +44 (0)207 062 2534 kellyb@datacentrereview.com

PUBLISHER

Wayne Darroch

PRINTING BY Buxton

Paid subscription enquiries: subscriptions@electricalreview.co.uk
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262

Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher's discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register

Electrical Review is published by

SJP Business Media, 2nd floor, 123 Cannon Street, London EC4N 5AU
0207 062 2526

Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher – SJP Business Media.

ISSN 0013-4384 – All editorial contents © SJP Business Media

Average net circulation Jan-Dec 2018 6,501

Follow us on Twitter @DCRmagazine

Join us on LinkedIn





News
The latest highlights from all corners of the tech industry.

EirGrid: data centres could account for 25% of power demand by 2030

In its latest Generation Capacity Statement, EirGrid – Ireland's national electricity grid operator – has predicted that data centres could account for a quarter of electricity usage in the country by 2030. It forecasts "challenges" in meeting the demand for energy over the next several years. Minister for Communications, Energy and Natural Resources Eamon Ryan said the electricity supply in Ireland will be "tight" for the foreseeable future, and that no industry – data centres included – should be allowed to put Ireland's climate targets at risk.

Google Cloud's Grace Hopper cable lands in Cornwall

Google Cloud has successfully landed the Grace Hopper subsea cable in Bude, Cornwall. The 16-fibre-pair cable now runs between New York in the US, Cornwall in the UK and Bilbao in Spain. Google claims that the cable will funnel between 340 and 350 terabits of data per second, which, according to the company, is the equivalent of 17.5 million people simultaneously streaming 4K videos. The cable will make use of 'fibre switching' to provide a more reliable service by enabling traffic to keep moving around outages. Google claims it will offer a level of resilience against unforeseen failures and help meet the increasing demand for high-bandwidth connectivity and services.

In a blog post, Jayne Stowell, Strategic Negotiator, Global Infrastructure at Google Cloud, said, "We know that technology is only becoming more important for the UK economy. The amount technology contributes to the UK economy has grown on average by 7% year on year since 2016. And UK-based venture capital investment is ranked third in the world, reaching a record high of $15 billion in 2020, despite the challenging conditions from the Covid-19 pandemic.

"What's more, 10% of all current UK job vacancies are in tech roles, and the number of people employed in the tech sector has grown 40% in two years. With this in mind, improving the diversity and resilience of Google's network is crucial to our ability to continue supporting one of the UK's most vital sectors, as well as its long-term economic success."
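As a rough sanity check of that equivalence – assuming, purely for illustration, a 4K stream of around 20 Mbps, a figure not quoted in Google's announcement:

\[
\frac{350 \times 10^{12}\ \text{bit/s}}{20 \times 10^{6}\ \text{bit/s per 4K stream}} \approx 17.5\ \text{million concurrent streams}
\]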



LONDON & DUBLIN TOP EMEA DATA CENTRE GROWTH

A new report has placed Dublin and London as the leading cities for the development of new data centres in EMEA during Q2 2021. The Data Centre Report, published by DC Byte in partnership with property consultancy Knight Frank, indicates that growth has been strong across EMEA in 2021. In Dublin, 146 MW of capacity was added in Q2 of this year, with London's growth reaching 122 MW, driven by cloud demand from hyperscalers. Dublin's expansion almost matches its entire total for 2020, while London's figures are on pace with its second quarter of that year.




AWS RE/START PARTNERS WITH UK RECRUITER TO ADDRESS IT SKILLS GAP

AWS re/Start has teamed up with UK-based recruitment firm Robert Half to help graduates find entry-level cloud roles in the UK. AWS re/Start offers a free 12-week skills development programme designed to help unemployed and underemployed individuals gain entry-level cloud positions, while also helping to address the UK's tech skills shortage. Robert Half will be supporting AWS re/Start by providing UK graduates with further tech training via its e-learning facility, as well as instruction on general employability and soft skills that will ease them into the working environment.

Schneider & Uni of Birmingham Dubai build green data centre on-campus

A new on-site, energy-efficient data centre facility is now operational on the campus of the University of Birmingham Dubai. The data centre has been designed to maximise energy efficiency, offering energy savings of up to 15% while operating at peak performance, according to Schneider Electric. The project has been developed as a 'Smart Campus', and will offer students and staff the benefits of state-of-the-art technology – including WiFi 6 – to support multidisciplinary learning across the campus and between the UAE and the UK.

The data centre is built around Schneider Electric's technology, implemented by system integrator CDW, and will offer features such as remote management and performance benchmarking. Schneider Electric's EcoStruxure IT software will enable on-site and remote monitoring of energy use across servers, cooling and electrical equipment, as well as power management for connected loads, and will provide in-built cybersecurity to run vulnerability assessments on all devices within the data centre.

Vertiv joins Sustainable Digital Infrastructure Alliance

Vertiv has joined the Sustainable Digital Infrastructure Alliance (SDIA) as a lead sponsor. The partnership will initially focus on Europe, where Vertiv will support the SDIA in connecting members of the critical infrastructure industry to reach sustainability goals. One of Vertiv's first priorities in the SDIA will centre around grid-interactive technologies, supporting the integration of critical infrastructure with energy grids.

Established in 2019, the SDIA has set out a 'Roadmap to Sustainable Digital Infrastructure' by 2030. It brings together a network of organisations across the globe with the aim of achieving a sustainable digital economy. Vertiv's SDIA membership follows the company's participation in the European Data Centre Association (EUDCA), which it joined in 2018. Through this partnership, Vertiv is contributing to the development of the Climate Neutral Data Centre Pact, an initiative that sets targets for the cloud and data centre industry to meet the European Commission's goal of climate-neutral data centres by 2030, as well as the European Green Deal's aim to make Europe climate-neutral by 2050.



ARTIFICIAL INTELLIGENCE

Embracing AI opportunities

Jon Martinez, Commercial Controls Manager at Airedale International, explores three ways in which AI can help data centres reach their efficiency and sustainability goals.



There is no doubt we are in a new phase of the digital revolution that has encompassed our world for the past 20 years or so. Things have progressed rapidly since the widespread adoption of the internet, and we see no signs of this abating as we emerge from the intensity of the pandemic. The increased demand for digital data has driven technological advancement at a pace beyond anything previously predicted. Remote connectivity over 4G and, eventually, 5G cellular networks means that it's now remarkably easy to connect a sensor, device, machine, or even an entire factory to the cloud and utilise its effectively infinite processing power to perform analysis on the asset. With something like 40 billion devices across the globe connected to the 'cloud' – a number expected to grow exponentially over the next few years – we are seeing huge growth in the data centre space. However, whilst some data centre operators have raced to embrace new systems and services, many feel over-saturated by the constant flow of new tech terms and are left wondering if AI, cloud or the Internet of Things is actually a thing, or just unsubstantiated sales talk.



The truth, as ever, is somewhere in the middle. As new ideas come to the forefront there is usually a lot of noise to cut through to get to the substance and understand how these things can actually work in a real-life workplace scenario. With some data centre providers showing trepidation towards getting cloud connected, one cannot help but notice the irony that these security-conscious data providers are reluctant to use their own service.

Detailed below are three very tangible financial and environmental benefits that AI can offer to data centres. These three are by no means the limit, but they go some way to demonstrating how connecting to the cloud can improve efficiencies, helping to drive forward sustainable performance whilst driving down energy expenditure and, ultimately, cost.

1. Performance analysis

Stopping to review processes and procedures is time-consuming and costly, takes us away from the day job, and is so often left to a breakdown to force the situation. The benefit of AI is that it does the review for you. When a piece of equipment or a process chain is connected to the internet, it is constantly feeding in performance data at speeds no human could keep up with. Furthermore, that data is instantly interpreted against not only its own environmental settings but also those of other machines in other settings, which allows it to assess its own performance against a much wider benchmark. It can automatically recognise if a setting needs to be changed to improve efficiency, for example, make that change, and report the change back to the operator, all within a split second.

IoT connectivity means that the data supplied is measured against that of many other similar operations across the world. By expanding the pool of knowledge it feeds into, it can extract far more insight. Take, for example, something as simple as an electric toothbrush that has machine learning technology built into it. This toothbrush analyses the brushing patterns of the user and highlights areas they are potentially missing, giving useful information back to the user and allowing them to become more efficient at brushing their own teeth. However, because of IoT, this data is all sent to the cloud and stored, along with the usage data for, say, 50,000 other users of the brush. The benefit of this is that the manufacturer can then detect whether the majority of these users are all missing the same spots, which would provide a good indication of where the brush could be improved in future product development, meaning more people can benefit from an improved product. This is a simple example of a low-value item, but it works just the same on high-value equipment, such as lifts, engines, locomotives and data centre cooling technology.

2. Predictive maintenance

High-value assets can cost companies thousands of pounds if they break down or operate inefficiently. By connecting kit to the internet and allowing constant real-time monitoring and analysis, the data can be instantly interpreted, so if a drop in performance against operating conditions is detected, this acts as an early warning system for the maintenance team to investigate further. Early intervention can prevent prolonged periods of higher energy consumption and eventual breakdown, and all the costs associated with that. An effective AI system will judge performance on a variety of factors. For a refrigerant-based cooling system, for example, superheat, subcool, suction and head pressures, water flow and/or airflow are all analysed for deviations against 'normalised' behaviour, while the overall output of the unit – cooling duty/capacity, efficiency and power consumption against its operating conditions – is taken into account both instantaneously and over time.

One key area of concern for refrigerant-based cooling systems is leakage of refrigerant into the atmosphere. Often these leaks aren't detected until it is too late. AI has the ability to catch such leaks earlier by detecting small losses in performance or changes in the power profile. The benefits of predictive maintenance are huge in terms of environmental impact, operational impact and overall cost savings to the end-user.
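To make the idea of checking 'deviations against normalised behaviour' concrete, here is a minimal, hypothetical sketch of that kind of test. The sensor names, baseline figures and threshold are illustrative assumptions, not Airedale's actual analytics:

```python
# Illustrative sketch only: flag telemetry that drifts away from a healthy baseline.
from statistics import mean, stdev

# Baseline readings collected while the cooling unit was known to be healthy (assumed values)
baseline = {
    "superheat_K": [5.1, 5.3, 4.9, 5.2, 5.0, 5.4],
    "suction_bar": [3.8, 3.9, 3.7, 3.8, 3.9, 3.8],
    "power_kw":    [41.0, 40.5, 41.2, 40.8, 41.1, 40.9],
}

def flag_deviations(latest: dict, z_limit: float = 3.0) -> list:
    """Return readings sitting more than z_limit standard deviations from the
    healthy baseline - an early-warning trigger for the maintenance team."""
    alerts = []
    for sensor, history in baseline.items():
        mu, sigma = mean(history), stdev(history)
        z = abs(latest[sensor] - mu) / sigma if sigma else 0.0
        if z > z_limit:
            alerts.append(f"{sensor}: {latest[sensor]} (z={z:.1f})")
    return alerts

# A slow refrigerant leak often shows up as drifting superheat and power draw
print(flag_deviations({"superheat_K": 7.9, "suction_bar": 3.4, "power_kw": 44.6}))
```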

3. PUE optimisation

AI can also be applied to improve the PUE of data centre environments, as demonstrated in a recent study conducted by a well-known internet organisation. The AI worked by analysing all aspects of the data centre control system and could then determine where improvements could be made autonomously in order to increase system efficiency. The big advantage of using AI for this purpose is that it can crunch enormous amounts of data quickly and generate recommendations fast enough for them to be meaningfully applied. Adjustments to temperature, humidity, airflow and pressure in response to changing ambient temperatures and data centre load consistently improved system efficiency by around 30-40%, which represented a huge energy cost saving for this and most other typical data centres.

So why the reluctance?

Data centres are traditionally averse to any outside connectivity for security reasons. Whilst the AI calculations and data storage can be run locally on-site as an isolated installation, the big disadvantage is that such data centres aren't benefiting from the central hub of data that could be stored in the cloud, allowing them to learn from other connected data centres across the globe. Connecting devices like this and analysing the data opens up a whole world of potential business value and competitive advantage. It creates the ability to monitor products at the most important stage of their lifecycle – when they are in the customer's hands. Providing an unambiguous source of business intelligence in a wide variety of areas – from product performance and ongoing maintenance to product development and customer experience – AI offers benefits to both client and supplier. As with anything new, we know there will be resistance, but we hope that if the benefit is shown to be big enough, then barriers can be overcome.



ARTIFICIAL INTELLIGENCE

Up to the task

Data centre infrastructure management (DCIM) software is fast-evolving to meet the demands of a digital world. Marc Garner, VP, Secure Power Division at Schneider Electric UK&I, discusses how DCIM, AI and data analytics can improve security, resilience and sustainability in data centres.

In the early days of cloud computing, servers were placed in large centralised data centres – but today, the digital ecosystem has changed. Hybrid IT systems leveraging hyperscale facilities, smaller regional data centres and edge computing systems have fast become the vehicle for digital transformation, meaning the issues facing data centre operators have changed with them. Where once resilience was the main concern, now the issues of security and sustainability are of equal importance. It is only natural, therefore, that in an industry so dependent on innovation, change is a constant occurrence. As such, the software tools that help data centre owners to manage their operations have had to adapt in order to reflect these new realities.

Traditionally, the software tools that monitored IT assets on a network were typically hosted on premises. However, in today's hybrid world, where operators typically manage a portfolio of facilities, there are new problems that require greater use of data analytics, machine learning and software designed for anywhere visibility. For example, the further to the edge of the network a data centre is to be found, the less likely it is that technical maintenance personnel will be based permanently on-site. The software systems managing such distributed infrastructure have, therefore, had to evolve, and cloud computing has enabled DCIM to be redesigned to offer end-users more control, greater automation and increased protection from downtime.

For the industry itself, resilience remains a key priority, and protecting data centres against both human error and unwanted intrusion is of mounting concern. In 2013, a survey from Allianz detailing the issues most concerning data centre management placed cybersecurity 15th in order of priority. By 2020 it had risen to be the number one concern. Emerging threats such as ransomware attacks increase the demand for management tools capable of repelling them, meaning DCIM has fast evolved to include cybersecurity features.

Sustainability, too, is rapidly becoming a key concern of senior decision makers, and according to a report in the Harvard Business Review, 99% of C-level professionals agree that sustainability issues are important to the future success of their business. Further, the Energy and Climate Intelligence Unit reports that 49% of the world's annual GDP is now covered by nations, regions and cities that are legislating for net zero emissions. Whether for reasons of government legislation or corporate social responsibility (CSR), sustainability is an issue that has become a key part of decision making.



The evolution of DCIM

To address the challenges in today's digital world, DCIM software platforms have evolved to include five new attributes.

Firstly, they exploit cloud technologies for scalability. Many data centres at the edge, for example, lack permanent IT personnel on-site. Such facilities must be monitored remotely, using IoT-enabled assets to communicate via the cloud. Here, cloud-based DCIM becomes essential to gain a real-time picture of IT assets in distributed locations and maintain uptime.

Secondly, they must connect to a data lake in order to use artificial intelligence (AI) and deliver in-depth insights. If properly assembled and categorised, the sheer volume of information that can be collected from a data centre is a valuable resource, and one that can be mined using machine learning. This capability allows users to gain deeper insights into how individual components are functioning and offers a more granular view of how a data centre, or group of data centres, operates.

Thirdly, they use mobile and web technologies, and integrate with third-party platforms via APIs. No single tool can manage every task, so integration with other platforms is key, be they legacy systems or new applications. Being able to communicate seamlessly with other applications across the ecosystem helps users retain control of all mission-critical sites and future-proof their systems.

Fourthly, they optimise user experience for simplicity in managing distributed IT. The ability to provide clear and easily accessible information is crucial to gaining the most benefit from DCIM software.

Fifth and finally, they serve as a compliance tool to identify and eliminate risk. With increasing attention being paid to data centre resilience, operators must prove that their organisations meet modern guidelines and regulations. The data and insights that can be obtained from modern DCIM software provide verification that the highest standards of compliance are being met.

Increasing resilience

In many respects, new DCIM tools enable greater resilience by first identifying all devices on an organisation's network. As recently as 2017, a survey by ForeScout found that 82% of companies were unable to identify all devices connected to their network. Frankly, if businesses are unaware of what items are connected, they cannot gauge their vulnerability to malfunction or security breaches.

Once all devices have been identified in an inventory, the insights gleaned from a data lake and AI tools can drive remedial actions. This might include finding the age of the batteries within uninterruptible power supplies (UPS), or determining how their operating lives could be affected by environmental issues such as excessive heat. Such insights allow operators to plan for maintenance, or schedule a replacement before malfunction occurs, enabling greater protection from failures.

Identifying all devices is also an essential first step in securing a data centre against attack. Schneider Electric estimates that 70% of devices that exhibit one vulnerability, such as outdated firmware, also have several other weaknesses, such as not using secure communications protocols (eg HTTPS), or having unguarded SNMP access. Further, Gartner predicts that by 2022, 70% of organisations that do not have a firmware upgrade plan in place will be breached due to a firmware vulnerability. Once such vulnerabilities are detected, they can be corrected (a simple sketch of such an inventory audit follows at the end of this piece).

Finally, integrating with other applications via APIs allows operators to partner with other service organisations and use their software to extend the reach of maintenance personnel, thereby protecting remote sites from downtime.

Improving sustainability

The move to hybrid and distributed IT environments makes the calculation of environmental impact all the more difficult. However, a next-gen DCIM tool can help reduce the complexity by using the data gathered and analysed to deliver insights into overall energy consumption. Further, it can offer a detailed analysis of the carbon footprint of multiple small sites, meaning that managing energy demands at the edge becomes simpler, allowing operators to plan for initiatives that reduce carbon emissions and deliver more sustainable data centres. As we look forward, DCIM, AI and real-time analytics will play an important role in addressing sustainability, resilience and security issues – especially at the edge.
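To make the 'identify, then remediate' step described above concrete, here is a minimal, hypothetical sketch of such an inventory audit. The device records, firmware baselines and checks are invented for illustration; this is not Schneider Electric's DCIM API:

```python
# Illustrative sketch: scan an asset inventory for the weaknesses mentioned above
# (outdated firmware, unencrypted management traffic, unguarded SNMP access).
MIN_FIRMWARE = {"ups": (6, 2), "pdu": (3, 1), "cooling": (2, 0)}  # assumed baselines

inventory = [
    {"name": "UPS-EDGE-01", "type": "ups", "firmware": (6, 0),
     "mgmt_protocol": "http", "snmp_community": "public"},
    {"name": "PDU-RACK-12", "type": "pdu", "firmware": (3, 4),
     "mgmt_protocol": "https", "snmp_community": None},
]

def audit(device: dict) -> list:
    findings = []
    if device["firmware"] < MIN_FIRMWARE[device["type"]]:
        findings.append("firmware below supported baseline")
    if device["mgmt_protocol"] != "https":
        findings.append("management traffic not encrypted")
    if device["snmp_community"] in ("public", "private"):
        findings.append("default SNMP community string")
    return findings

for dev in inventory:
    issues = audit(dev)
    if issues:
        print(dev["name"], "->", "; ".join(issues))
```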



GREEN IT & SUSTAINABILITY

It's not easy being green

The data centre sustainability narrative is changing, says Russ Barker, Key Account Sales Director at Vertiv.

Data centres provide the foundations of today's digital world. From healthcare and educational institutions to financial services and ecommerce, the last 18 months have seen unprecedented activity in cloud migration and digital adoption. As a result, and as evidenced by a recent report by Technavio, the data centre market in Europe will expand at a compound annual growth rate of 20% through to 2025. The challenge is that this expansion comes at a time when the need for green credentials is high on the public and business agenda. This, coupled with the fact that data centres consume 3% of the world's energy, is leading to them developing a power-hungry reputation. Having earned a utility-like status over the pandemic, the data centre industry is under more pressure to take meaningful steps towards reducing carbon emissions. Against this backdrop of expansion and acceleration, let's look at how operators are challenging the current narrative surrounding data centres and sustainability.


Setting the green standard

One of the largest data centre operators in the Nordics, Green Mountain, exemplifies what it means to be committed to the environment with its DC1-Stavanger facility, located inside a mountain. The Norwegian colocation provider's green credentials are more than skin-deep, extending from the facility itself to its critical infrastructure. The former high-security NATO ammunition storage facility turned data centre runs entirely on hydropower – and looks like a location from a Bond film.

Cooling a facility typically accounts for 40-80% of the electricity required to power servers. Here, Green Mountain uses its natural surroundings to its advantage. The 22,600 sqm facility is cooled by a fjord and nearby rivers with a continuous water temperature of 8°C. The cooling solution is one of the most efficient in the world: it uses less than 3 kW of power to deliver over 1,000 kW of cooling, providing a significant cost reduction to its clients while supporting targets for energy efficiency. Even the waste heat produced by the data centre is used, with Green Mountain entering into an agreement with Norwegian Lobster Farm to create the world's first land-based lobster farm – an innovative example of a circular economy, and one that delivers a significant carbon footprint reduction.

Committed to making environmentally-friendly decisions for the benefit of its customers and community, the facility operators prioritised product and supplier sustainability credentials when procuring a cooling system for a data centre extension. The extension included the installation of chilled water perimeter units in its new data centre space, giving the DC1 facility 5 MW of additional cooling capacity. At its DC3-Oslo site, a monolithic, transformer-free uninterruptible power supply (UPS) system has been deployed. With operating efficiency up to 99% and power density at 1,200 kilovolt-amperes (kVA), Green Mountain was able to achieve the smallest footprint and the highest efficiency available in the market at this power rating. Alexander de Flon Ronning, Design and Product Manager at Green Mountain, anticipates that the colocation provider's overall power usage effectiveness (PUE), which is already extremely efficient, will further improve.

Evolving the sustainability narrative

But what if your data centre isn't located in a mountain by a fjord? How else are data centre operators progressing the roadmap towards sustainability and adopting innovative technologies to monitor and limit their carbon footprint? Here, IT giants such as Microsoft and Amazon are leading from the front.

For example, Microsoft, to meet increasing demand for cloud services in Sweden, has announced plans to open sustainable data centres in the region. Powered by IoT technology that monitors energy consumption for renewable energy matching, Microsoft will ensure its facilities operate on 100% renewable energy. What's more, Microsoft has laid out its ambitions of becoming carbon negative by 2030, and of removing all the carbon it has emitted since it was founded by 2050. It has put aside a $1 billion investment fund for this purpose.

The largest cloud provider, Amazon, has also made gains in its sustainability strategy, co-founding The Climate Pledge, which promises to achieve net zero carbon emissions by 2040. It too is continuing to scale up its investment in renewable energy projects, recently confirming nine renewable energy projects across Europe and North America. Already in motion, Amazon's wind farm projects in Ireland are forecast to add 229 MW of renewable energy to the grid each year. This will reduce carbon emissions by 366,000 tonnes of CO2 annually, producing enough renewable energy to power 185,000 Irish homes each year.



Decarbonisation and demand-side services

To play their part in decarbonisation, in line with the Paris Agreement, data centre operators must continue to advance sustainable strategies to feed the growing appetite for digital services, while maintaining the health of our planet. And while the projects above are inspiring, the IEA's Data Centre and Data Transmission Network report asks IT leaders to proceed with care and take an informed approach when adopting renewables. It suggests facilities operators start by assessing which projects will benefit the local grid, collaborating with electricity utilities, project developers and regulators. This is because covering 100% of annual demand with renewable energy certificates (RECs) doesn't always equate to data centres being 100% powered by renewable sources around the clock. Looking ahead, the key to sustainability success for operators is to work with local renewable power operators by establishing direct Power Purchase Agreements (PPAs). Backed by a portfolio of renewable energy products, data centre providers can avoid accusations of greenwashing and credibly claim 100% renewable use.

The role of renewable energy and demand-side services

Transitioning to renewables comes with the challenge that energy sources such as wind and solar are unpredictable compared to their fossil fuel counterparts. This is where demand-side services come into play, giving data centres greater energy storage capabilities while insulating the grid from faults and assisting in the optimisation of electricity supplies. Demand-side services involve data centres adapting their electricity usage patterns to save on costs and reduce their carbon footprint. Under a tariff-based scheme, the data centre can pump electricity into the power grid at times of high demand. Enabling this service requires today's UPS technology with grid support features that can store surplus renewable energy. In addition, such systems provide fast frequency response services to help mitigate the network faults that tend to become more prevalent as grids move to renewable power generation. The result is that data centres can rely on stored renewable energy during peak times rather than being a drain on the grid. It also makes it possible for data centre operators to generate a new source of income by exporting surplus stored energy back to the grid, while also helping to increase network stability – a simple sketch of this decision logic appears at the end of this article.

A greener approach

Currently, the media presents data centres as a burden on existing electricity supplies. The use of advanced UPS technologies means data centres can reduce grid reliance and support the transition to renewable energy sources – changing the narrative. As data centres and other energy-intensive industries transition to a greener approach, UPS technology will be pivotal. And as the adoption of grid support and demand-side services within the industry takes hold, we'll see the environmental benefits, including reduced carbon emissions. With a proactive approach being taken by operators across the industry, data centres can take a lead role in minimising carbon emissions and increasing energy efficiency, playing their part on the long road to decarbonisation.
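Returning to the demand-side services described above, the decision logic can be sketched very simply. The frequency threshold, tariff trigger and state-of-charge limits below are illustrative assumptions, not Vertiv's or any grid operator's actual scheme:

```python
# Illustrative sketch: decide what a grid-interactive UPS energy store should do
# based on grid frequency, tariff price and how much stored energy is available.
NOMINAL_HZ = 50.0

def ups_mode(grid_hz: float, tariff_p_per_kwh: float, state_of_charge: float) -> str:
    """Return the operating mode for a grid-interactive UPS energy store."""
    if grid_hz < NOMINAL_HZ - 0.2 and state_of_charge > 0.3:
        return "discharge: fast frequency response"
    if tariff_p_per_kwh > 25 and state_of_charge > 0.5:
        return "discharge: export at peak tariff"
    if tariff_p_per_kwh < 10 and state_of_charge < 0.9:
        return "charge: absorb surplus renewable generation"
    return "float: hold reserve for IT load protection"

print(ups_mode(grid_hz=49.75, tariff_p_per_kwh=28, state_of_charge=0.8))
print(ups_mode(grid_hz=50.01, tariff_p_per_kwh=8, state_of_charge=0.6))
```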



GREEN IT & SUSTAINABILITY

Green goals

Green innovation within the data centre is essential if we want our world to stay 'on', but as much as the importance of sustainability is no secret, how to get there might not be so clear. Here, Chris Demers, Sustainability Department Manager at Supermicro, offers some practical advice on how we can meet those green goals and modernise our industry.


Worldwide, modern data centres consume between 1% and 3% of the world's electricity. With a majority of electricity generation coming from fossil fuels in many geographies, the impact of data centres on the environment cannot be ignored. The explosion of data generated at the edge, and the back-end processing it requires, has resulted in additional electricity demand, which produces large quantities of CO2. And as existing data centres expand and new ones are planned, the amount of e-waste is growing, to over 50 million metric tonnes per year according to the United Nations. At the same time, reducing e-waste will not by itself reduce power usage, although the two are correlated. The Total Cost to the Environment, or TCE, measures how much a data centre contributes to environmental damage.




Several factors contribute to this type of measurement, including the PUE, server density, inlet temperatures and the amount of e-waste. A key to the 'greening' effect of a data centre on the environment is for data centre managers to have a deep understanding of the type of power being purchased (fossil fuel or renewable) and how the electricity is being used within the data centre. There are near-term and long-term decisions that can be made to reduce power consumption, leading to lower greenhouse gas emissions from power generation companies.

One measure, but not the only one, of the greenness of a data centre is its Power Usage Effectiveness (PUE). This number, which can be calculated fairly quickly, is the ratio of the total power delivered to the data centre divided by the power needed for the IT infrastructure. With the ideal case being a ratio of 1.0, most data centres today operate in the 1.50 to 2.00 range, meaning that cooling and other non-productive systems consume an extra 50-100% on top of the power used by the IT equipment itself. Data centre operators can reduce their PUE by shortening refresh cycles, acquiring more efficient servers, increasing inlet temperatures, and installing liquid cooling systems.

According to Supermicro's Data Centers & The Environment report, data centre operators across the globe said that the primary investment for their facilities in 2021 was upgrading critical components. Beyond that, there was a range of responses, including increasing the number of servers, upgrading software components and implementing a more inclusive e-waste programme. To move towards greener data centres, awareness of the current state of a facility is essential: about 94% of respondents were aware of the PUE of their facilities, compared to only 31% just a year earlier, and concerns about PUE rose between the 2019 and 2020 surveys.
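As a worked example of the PUE calculation described above, using an assumed 1 MW IT load chosen purely for illustration:

\[
\mathrm{PUE} = \frac{P_{\text{total facility}}}{P_{\text{IT}}} = \frac{1.5\ \text{MW}}{1.0\ \text{MW}} = 1.5
\]

Here 0.5 MW – half as much again as the IT load, or a third of everything drawn from the grid – is going to cooling and other non-productive systems.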

An interesting note from the survey was that all job functions (C-level, IT management and engineering) understood that increasing server utilisation through better use of software is a top priority. By utilising systems at a higher level, fewer systems are needed for a given set of workloads, reducing real estate and additional electricity usage. Increasing the inlet temperature to a rack also reduces how hard the computer room air conditioning (CRAC) units must work to produce cooler air. However, not all organisations are embracing this change. A quick calculation shows that a single rack with an inlet temperature of 32°C could save between $12,000 and $13,000 per year in power costs compared with an inlet temperature of 24°C.

One of the more visible and understandable ways of expressing a reduction in data centre power consumption is the estimated number of trees that would no longer need to be planted for carbon sequestration. Reductions in power consumption at the server and rack level can be translated into the number of trees that would not have to be planted. Although there are a number of variables to consider in calculating either the number of power plants that would not be needed or the number of trees required for carbon removal, it is evident that reducing data centre electricity usage leads to less environmental damage.

From choosing the type of power to be delivered (renewable or fossil fuel) to determining the type and frequency of systems to purchase, there are a number of actions that data centre operators can take to reduce their impact on the environment. Making decisions at every level that maintain agreed service levels while limiting the effect on climate change and the environment will only become more important as corporate social responsibility reporting becomes more transparent.



SPONSORED FEATURE

Box clever

Rittal explains how its 'Data Centre in a Box' is providing a sustainable data solution for Oxford University's Gardens, Libraries and Museums Division.

Oxford University's Gardens, Libraries and Museums division (GLAM) forms one of the greatest concentrations of university collections in the world, holding over 21 million objects, specimens and printed items. Faced with the challenges of increased data demand, the Museum of Natural History – one of the museums within GLAM – wanted to upgrade its IT infrastructure to house the core network switches responsible for running its services. A major rewiring project was undertaken with the aim of significantly improving data connectivity for computers, phones and next-generation devices. The wiring presented a challenge in itself, as the historically-significant listed building was not designed to accommodate the space needed for conventional hardware. This required ingenious methods to work with the fabric of the building.

Faced with these challenges, Anjanesh Babu – the Technical Project Lead in the Gardens, Libraries and Museums IT team – researched the options available. The traditional approach would be for the designated network core of a building to be stripped bare and rebuilt with air conditioning and electrics to meet the requirements of the equipment. However, given the nature of the building, this would present a number of challenges, including space constraints and cooling loss through the surfaces. The design approach was instead led by GLAM's sustainability strategy.

Babu approached Rittal's IT team, who quickly identified the 'Data Centre in a Box' (DCiB) concept as a possible option. DCiB replicates the key data centre capabilities, but on a smaller scale, and has been developed to enable equipment to be deployed in non-traditional data centre environments. The turnkey package provides IT racks, demand-oriented climate control, PDU, monitoring and fire suppression – a complete solution from product selection through to installation and ongoing maintenance.

When installed in the Museum of Natural History, the cooling footprint would be significantly lower than the traditional full-room air conditioning, and the absence of any work to the space to accommodate the system would mean that the building would remain relatively untouched. A site visit by Joel Farrington, Rittal's Area Sales Manager for IT, was arranged and the requirements gathered. "The system was to be located in the museum's basement, which had restricted access with a very narrow staircase and doorways. In addition to this, the building's listed status would mean that any cooling equipment would have to be positioned cleverly and with the utmost consideration, not only to aesthetics but to any noise pollution emitted," recalls Farrington.

Farrington and members of the Rittal IT development team, Clive Partridge and Andrew Wreford, worked with Babu to identify the key areas that needed to be addressed. "Given the kW loads and environment of the proposed location, it became clear that the DCiB's LCU option was the best way to go, and we quickly built up a package including racks, accessories, cooling, fire suppression, PDUs and monitoring.

"To mitigate the access restrictions, we used the 'rack splitting/re-joining' service, which enabled us to resolve the challenge of the space limitations of the project," says Partridge, Rittal's Technical IT Manager.

Rittal provided an end-to-end solution from the manufacture of the kit through to installation, commissioning and hand-over. To overcome the issues with the listed building status, Rittal's IT team worked in collaboration with Babu and the lead contractor, Monard Electrical, to find a suitable home for the condenser.

"Rittal's DCiB allowed the museum to utilise the proposed location without having to make costly building modifications, thus saving time, energy and effort," reflects Babu on the options deployed.

By adopting 'in-rack' precision cooling instead of 'in-room' cooling, the installation is more environmentally efficient, which also controls operational expenditure. Cooling via the high-performance LCU option provides temperature consistency, allowing better care of equipment along with nearly silent operation. Not only does the installation provide energy efficiency and longevity for the museum, there is the added benefit of reduced noise in the room compared to an existing server room using in-room cooling.

Haas Ezzet, Head of IT at GLAM, contextualises this piece of work as part of the "museum's drive towards greater environmental sustainability. The approach piloted here, of focussing climate control specifically on the area needed – the data cabinet – rather than the entire space in which it is housed, will optimise energy consumption and afford a blueprint for other spaces within GLAM and beyond."



EDGE

Protecting power at the edge

Chris Cutler of Riello UPS explores whether lithium-ion batteries are the ideal energy storage solution to power the next generation of edge computing applications.

One of the undisputed consequences of the last 18 months or so is the acceleration of what was already the rapid digitalisation of our society. Throughout the pandemic, the internet and always-on connectivity have mattered more than ever in both our personal and professional lives. And while it's too soon to say whether certain practices, such as remote working, will become a permanent feature, it doesn't take a crystal ball to see that our reliance on all things digital and data will continue to increase as phenomena such as the Internet of Things (IoT), 5G and smart tech embed themselves even more in all our lives.

Last year, the world consumed 59 ZB of data, but Statista anticipates this figure will rocket to 149 ZB by 2024. Such growth shouldn't come as too much of a surprise: within the next five years, it's predicted that the average person will interact with an IoT or connected device nearly 5,000 times a day – or, to put it another way, once every 18 seconds.



Shift in processing power

Fully reaping the rewards of this technological revolution relies on low latency and near-instantaneous processing power. That's something traditional enterprise data centres and cloud facilities struggle to deliver. It has therefore been necessary to radically change the way we process data. Rather than clogging up bandwidth by sending information all the way to servers hundreds or even thousands of miles away and back again, more data now gets processed close to where it's generated in the first place. It's quicker, reduces bandwidth, and also allows more data to be stored locally, reducing the risk of hacking or data corruption.

In essence, this is 'The Edge', and it requires a network of mini data centres close to where the data is created to fully unleash its power – whether that's in the car park of a manufacturing plant, in a spare room at an office complex or retail store, or a roadside cabinet in a heavily populated area.



Challenges of exploiting the edge

Now of course, in practice, it isn't feasible to construct edge facilities on a similar scale to a typical hyperscale data centre. Many edge installations are found in space-restricted settings where it would be impossible – and unnecessary – to build a multi-megawatt bit barn crammed with server racks. While they still need to carry out all the same functions as any hyperscale or cloud facility, they need to do it in a much smaller footprint, often in challenging environmental conditions.

That's why in recent years there's been significant growth in what's known as micro data centres. These structures, often housed in fire and weather-proof steel shipping containers, incorporate all the equipment you'll find in a typical data centre, such as PDUs, servers and air conditioning. They're just on a much smaller scale.

As with any other type of data centre, edge deployments depend on clean, continuous power, as well as having sufficient backup in place to overcome any electrical outages. That makes uninterruptible power supplies a fundamental part of any installation. Ongoing advances in UPS technology mean both modular and transformerless solutions are ideally suited to protecting edge applications, thanks to their combination of high performance in a compact footprint.

But a UPS is only as good as the stored energy it holds in reserve in case of emergency. Typically, this comes in the form of valve-regulated lead-acid (VRLA) batteries. These blocks of batteries are reliable and relatively inexpensive performers, but they come with well-known limitations too. They're big and bulky, which doesn't exactly fit with the compact nature of the edge. With operators needing to fill every last square foot of space with processing-heavy servers and storage, there's minimal room for backup power. VRLA batteries also perform at their best in highly-regulated, temperature-controlled environments, which can be difficult to recreate inside a data centre squeezed into a shipping container.

Exploring the alternatives

So, is there an alternative to the traditional UPS battery that's perhaps more 'edge ready'? While it would be an exaggeration to describe lithium-ion (li-ion) batteries as a mainstream solution, they are certainly not the under-the-radar option they were seen as just a few short years ago. And for edge installations in particular, where space is at a premium, they offer superior trade-offs in terms of size, performance and maintenance compared to VRLA.

To start with, li-ion has a much better power-to-weight ratio and far higher power density. Critically, this means it needs just 50-75% of the footprint to deliver the same power as standard UPS batteries. And li-ion blocks are lighter too, so they're easier to transport and won't require any reinforced floors.

Tolerance to temperatures

Another big advantage of li-ion is its tolerance to higher ambient temperatures. VRLA cells only operate at their best in regulated temperatures of 20-25°C, whereas lithium-ion blocks can operate safely at up to 40°C.

This naturally reduces any data centre's need for energy-intensive and expensive cooling or a standalone battery room. But it's perhaps an even bigger benefit for edge applications, where space is scarce and installations are often located in places where it's challenging to control the environmental surroundings.

Lithium-ion batteries also have an operational lifespan of 15-20 years. During this timeframe, VRLA batteries would probably need replacing two or potentially three times. And even though there's still an upfront price premium with li-ion compared to VRLA, it's generally accepted they will deliver total cost of ownership (TCO) savings ranging between 10-40% over the course of a decade, due to factors such as reduced service visits and the absence of replacement costs.

There are a couple of additional points to bear in mind. Unlike VRLA batteries, li-ion blocks must be used in conjunction with a battery management system (BMS) that monitors each cell to maintain balanced states of charge. This real-time monitoring enhances reliability by promptly identifying any early signs of cell deterioration (a simple version of this kind of supervision is sketched below). It also reduces the need for manual maintenance visits, which is a positive for edge applications where it can often be tricky to send a service engineer.

Finally, lithium-ion offers a far higher number of charge cycles – up to 50 times more than VRLA – and much faster charge/discharge times. This makes it the perfect partner for the smart grids that society will likely come to depend on in the years to come, where energy storage helps to dynamically balance electricity supply with demand in real time.

Meeting future demands

Edge processing will undoubtedly prove pivotal in unlocking the true potential of technologies such as IoT and 5G. Indeed, the global edge market is likely to top $250 billion by 2024. But such dependency on distributed IT infrastructure will only succeed if it is reliable and available 24/7, 365 days a year.

While barriers such as higher upfront costs and challenges with recycling remain, lithium-ion battery storage is fast becoming a viable alternative throughout the wider data centre market. And thanks to the winning combination of long lifespan, compact footprint, wide temperature tolerance and ease of maintenance and monitoring, li-ion batteries are particularly suited to space-restricted and environmentally-challenging edge installations.
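As referenced above, the cell-level supervision a BMS performs can be illustrated with a minimal sketch. The limits and readings below are invented for illustration and do not represent any particular manufacturer's BMS:

```python
# Illustrative sketch: watch per-cell voltage and temperature and flag imbalance
# or out-of-range readings early, before they become a failure.
CELL_V_MIN, CELL_V_MAX = 3.0, 4.2     # volts (assumed limits)
CELL_T_MAX = 55.0                     # degrees C (assumed limit)
IMBALANCE_LIMIT_V = 0.08              # max voltage spread across the string

def check_string(cells: list) -> list:
    """cells: list of (voltage, temperature) tuples for one battery string."""
    alerts = []
    voltages = [v for v, _ in cells]
    if max(voltages) - min(voltages) > IMBALANCE_LIMIT_V:
        alerts.append("cell imbalance - trigger balancing / inspection")
    for idx, (v, t) in enumerate(cells):
        if not CELL_V_MIN <= v <= CELL_V_MAX:
            alerts.append(f"cell {idx}: voltage {v} V out of range")
        if t > CELL_T_MAX:
            alerts.append(f"cell {idx}: temperature {t} C too high")
    return alerts

print(check_string([(3.95, 31.0), (3.96, 30.5), (3.82, 42.0), (3.97, 31.2)]))
```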



EDGE

Keeping the edge

Pascal Holt, Director of Marketing at Iceotope Technologies Limited, explains why edge computing reliability is a vital component of AI.



Edge computing is changing the way we process data. The need to handle, manipulate, communicate, store and retrieve data efficiently is successively moving processing capacity closer to the user than ever before. In fact, investment in edge computing facilities is expected to increase five-fold to $11 billion by 2026. The data centre is no longer the centre point of data. All types of industries – from healthcare and agriculture to retail and more – are driving this trend. Any device in any location that is gathering, analysing and acting on data is a component of edge infrastructure. IDC estimates the number of connected devices will reach 55.7 billion by 2025. Gartner predicts that by 2023, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud. Those figures don't even take into account the fact that enterprises are still adjusting to a more distributed workforce, with long-term work-from-home practices because of the pandemic.

Artificial intelligence (AI) is one key application increasing the requirement for digital infrastructure and processing power. AI, according to Google CEO Sundar Pichai, will have a more profound effect on civilisation than electricity or fire.

Similar to the industries driving the push to the edge, AI applications and technology are being deployed in a wide variety of use cases to ensure the reliable delivery and continuity of services based upon data outputs.

AI applications are driving the edge

The recently announced study using AI to diagnose dementia in patients in as little as a day is one such case. While the trial was initially conducted in hospitals and memory care facilities, the next phase of the study will be to test it in clinical settings alongside conventional ways of diagnosing dementia. Dr Laura Phipps at Alzheimer's Research UK noted that the AI systems were "drawing on the insights from huge datasets to help doctors make more informed decisions about diagnosis, treatment and care." Instead of relying on interpretation of scans, machine learning models will help lead to more accurate diagnoses.

Currently, AI workloads like this generate large data sets that require complex calculations and data processing. As a result, they need to leverage high-power-density GPUs, which demand a high level of resiliency and processing power, particularly at the edge. In the near future, as AI becomes ubiquitous in all applications, it will commonly be running on standard, low-cost commercial server platforms.

The challenge is that edge computing loads are usually required to operate reliably in locations not built specifically for IT equipment. At the same time, taking servers out of protected, environmentally-controlled data centre technical white space can have multiple impacts upon reliability, efficiency, monitoring and service operations. In addition, placing such loads in harsh environments exposes them to the effects of humidity and high temperature, emissions, airborne particles, vibration from industrial machinery, corrosion and more.

Cooling technology solving the challenges of high-density technology

The use of technology to solve the challenges being created by technology is being mandated today. Precision chassis-level immersion cooling, for example, enables an ideal environment for IT equipment to be installed and successfully operated in diverse locations. Traditional cooling techniques use forced air to cool equipment, putting that equipment at risk in harsh environments. A fully sealed precision immersion cooling solution delivers reliable server operations by isolating sensitive electronic components and circuits from harmful gaseous and particulate contaminants.

Precision immersion can also extend the operating life cycle of hardware by reducing service and maintenance call-outs. The plug-and-play nature of a sealed chassis enables a consistent service and support model. Servers can be monitored and managed remotely. A technician who can replace a module at the data centre campus can just as easily make the same replacement in a remote location. When swapping out the chassis as a complete module, the service call is simplified and exposure to environmental elements on-site is eliminated, de-risking service operations.

Sound pollution is a major consideration with edge computing. Depending on the environment, IT equipment noise needs to be kept as low as possible. Precision cooling eliminates the requirement for server fans, and the HVAC equipment required is significantly reduced. This enables near-silent server operations. The noise then becomes more comfortable for the non-IT tenants sharing the building, as well as the IT teams moving about the space. It also helps draw less attention to the equipment, deterring theft and vandalism.

The use of technology to solve the challenges being created by technology is being mandated today

Finally, for an industry with well-publicised ambitions to deliver carbon net-zero operations, sustainability at the edge is a critical component of any data centre strategy. Advanced liquid cooling solutions are capable of achieving a PUE of 1.03 or below. Precision cooling captures more than 95% of server heat inside the chassis, significantly reducing the energy costs and emissions associated with server cooling, and water consumption is negligible as little to no mechanical chilling is required.

As our demand for data increases, how and where we process data will continue to evolve. Edge computing is just beginning to demonstrate its impact. As the expanding network of data centres continues to grow, access to power, cooling and connectivity will become even more important. The industry must turn to technology to mitigate the challenges being caused by technology. Precision immersion liquid cooling is fast becoming one of those critical solutions for delivering reliability and energy efficiency while responding to increased processing and storage requirements.
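For readers who want to sanity-check figures like these, PUE is simply total facility power divided by IT power, so the claims above can be turned into a rough saving estimate. The short Python sketch below does exactly that; the 50 kW edge load and the 40% forced-air overhead are illustrative assumptions, not measurements from any vendor.

# Back-of-the-envelope PUE check for a small edge site.
# All numbers below are illustrative assumptions, not measured vendor data.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical 50 kW edge IT load.
it_load_kw = 50.0

# Forced-air scenario: assume cooling and ancillaries add ~40% overhead.
air_cooled_total = it_load_kw * 1.40

# Immersion scenario: if ~95% of server heat is captured in the chassis loop,
# assume only a small residual overhead (illustrative 3%) remains.
immersion_total = it_load_kw * 1.03

print(f"Air-cooled PUE: {pue(air_cooled_total, it_load_kw):.2f}")   # ~1.40
print(f"Immersion PUE:  {pue(immersion_total, it_load_kw):.2f}")    # ~1.03

# Annual energy saved by the lower overhead (8,760 hours per year).
saved_kwh = (air_cooled_total - immersion_total) * 8760
print(f"Overhead energy avoided per year: {saved_kwh:,.0f} kWh")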



DATA CENTRE DESIGN

Making the most of modular

Louis McGarry, Sales and Marketing Director at CENTIEL UK, explores how modular whole concept data centres could help reduce total cost of ownership.

The shift towards modular UPS design continues. The results from our recent online survey were unsurprising, with 75% of respondents confirming that modular UPS architecture is the preferred choice for system design. So, based on these findings, does this mean that a quarter of data centres are still being designed and built with standalone architecture? And if so, why?


One argument I hear to support this approach is that it is faster, cheaper and easier to install large standalone UPS blocks in data centres, to achieve maximum power from day one. Sure, designing this way may seem logical. However, installing the infrastructure to run and support a facility at full capacity from the outset may not actually be the most cost effective. What if the facility never reaches full capacity, or even half? Under-utilised systems throughout the data centre will still need to be run, maintained and managed, which can dramatically increase the total cost of ownership (TCO) over time. This doesn't only apply to the UPS system.

Another argument is that a 'modular' UPS has been tried in the past but did not deliver on its promises, and that experience is used to validate the continued use of standalone systems. In these instances, however, the UPS installed may have only had a 'modular influence', where the technology had been developed to include some basic modular benefits. The likelihood is that this wouldn't have delivered the full benefits of a system designed with a true modular architecture. It's easy to say 'buyer beware' or 'read the small print', but the truth is that there are many UPS products sold as 'modular' and understanding the differences is vital.

Configuration is key
Modular simply means it's possible to swap certain elements out of a live system, increasing availability. However, the elements of commonality in a centralised solution, leading to single points of failure, vary hugely between manufacturers. Single points of failure might include a CPU, inverter or static switch. Regardless of redundancy, a centralised control logic making decisions for the whole system, one centralised bypass and one communication channel mean that if any of these components fail, the load could be lost.

Choosing true modular with a decentralised and distributed architecture adds far more layers of resilience. This offers the highest level of availability because all of the components are replicated throughout the system at module level. A completely distributed architecture means each UPS module includes its own rectifier, inverter, static bypass and control logic. No single module takes control of the decisions for the whole system – instead, distributed decision making eliminates the logic's single point of failure.

Do these data halls need to be built on such a large scale, or would it be better to divide them into smaller, more efficient, manageable spaces of, say, 100 kW, 200 kW or even 500 kW?

Controlling costs
One of the challenges when installing a standalone UPS can be a lack of flexibility, which leads to a higher overall TCO. I've stood in many football pitch-sized facilities marvelling at the vast amounts of energy being devoured to support an under-utilised site. As well as concerns about the carbon footprint of the data centre, the cost of the wasted energy needed to keep such a facility running constantly is astronomical. In addition, if you look at the actual power being consumed by each server compared with the rating on the base plate, it raises the question: does data really burn as much power as we predict, particularly with the continuous improvements in technology? In my experience, it is unlikely that a full server rack is pulling 5 kW – realistically it's only around 1-3 kW. Therefore, do these data halls need to be built on such a large scale, or would it be better to divide them into smaller, more efficient, manageable spaces of, say, 100 kW, 200 kW or even 500 kW?
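The gap between nameplate and measured rack power is easy to put into numbers. The sketch below uses hypothetical figures (400 racks, 5 kW nameplate, 2 kW measured) to show how quickly a hall designed from base-plate ratings ends up under-utilised, and how few smaller rooms the real load would actually fill.

# Illustrative hall sizing: nameplate ratings versus measured rack draw.
# All counts and power figures are hypothetical examples.

racks = 400
nameplate_kw_per_rack = 5.0      # rating on the base plate
measured_kw_per_rack = 2.0       # typical measured draw (1-3 kW in practice)

design_load_kw = racks * nameplate_kw_per_rack
actual_load_kw = racks * measured_kw_per_rack

print(f"Design load from nameplate: {design_load_kw/1000:.1f} MW")
print(f"Measured load:              {actual_load_kw/1000:.1f} MW")
print(f"Utilisation of design load: {actual_load_kw/design_load_kw:.0%}")

# How many smaller rooms would the measured load actually fill?
room_size_kw = 200.0             # e.g. 100, 200 or 500 kW rooms
rooms_needed = -(-actual_load_kw // room_size_kw)   # ceiling division
print(f"{room_size_kw:.0f} kW rooms needed today: {rooms_needed:.0f}")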

In my view, the best run data centres are those that deploy a modular concept across the whole facility. These sites are divided into rooms which are, in effect, mini data centres, each containing all the elements of a data centre but on a much smaller scale. If a data centre was designed and built ready to take on 4 MW but only achieves 1 MW, it is clearly not running at its optimum efficiency, as it is dramatically under-utilised. With the whole modular concept, on the other hand, if the space in a particular room is no longer required, the equipment and associated infrastructure can be powered down or put into 'hibernation' to minimise energy consumption and cost. The room and its infrastructure remain available and ready for use by future clients. If security is a consideration, these smaller rooms are also easier to manage, particularly when not in use.

The ideal power product for this type of data centre set-up is a true modular UPS. Servicing can take place either on or off site, as modules can be removed and replaced easily. In the most modern modular systems, modules include features such as active-sleep mode and Bluetooth technology, while continuous monitoring linked to the BMS enables remote management too.

Make the right choice
For the whole modular data centre concept to work, purchasers need to select quality UPS systems with a high power density and the highest level of resilience, designed to work in a small footprint. A robust UPS whose modules can be redeployed easily also offers a clear advantage in optimising load management throughout the facility. There is always a temptation for buyers to purchase another UPS like the one they've used before. However, technology has moved on. The most modern modular UPS are more available, reliable, flexible and efficient, and can dramatically reduce TCO compared with standalone technology, which is difficult to maintain and expensive to run.

Our best advice is to gain first-hand experience of the equipment rather than making a decision from a brochure. Go to see the equipment in operation and witness how it is manufactured and tested to prove the quality of the components used – this is invaluable. The UPS is the beating heart of any data centre and it has got to work. A UPS is also a valuable asset, and can become a costly one if the wrong decision is made. Making the right choice when selecting a UPS system means careful research and working in close partnership with manufacturers, contractors and consultants to ensure all the requirements for critical power protection and usage are met.



DATA CENTRE DESIGN

The key to sustainability

An effective energy strategy can optimise efficiency, support procurement strategy and offer pathways to cooperate with energy companies and grid operators, says Andrew Toher, Head of Customer Insights at Enel X UK.


Despite an exponential growth in data and workloads, data centres have done an exceptionally good job of managing energy demand, which has increased only modestly over the past 10 years. Better energy efficiency measures in smaller data centres and highly efficient hyperscale data centres have gone a long way to limit energy demand. While data centres account for 1% of the world's energy usage, the sector is responsible for only 0.25% of emissions.

On the supply side, electricity grid operators have maintained grid stability and generated sufficient power to meet growing data centre needs in a rapidly changing energy landscape. However, with a desire to build new data centre capacity, and a continued trend for locating new data centres within established geographical clusters, sustainable growth now requires closer strategic partnerships between data centres, energy companies and the grid.



Pathways to low-carbon energy
There are many pathways to low-carbon energy, and data centre operators must decide what strategy is right for them. The range of options varies from simple contractual agreements (such as Guarantees of Origin) with green energy suppliers to more active involvement in the operation of the energy grid.

At the passive end of the decarbonisation spectrum, energy suppliers use Renewable Energy Guarantee of Origin (REGO) certificates to calculate their Fuel Mix Disclosure and demonstrate how green their tariffs are. However, suppliers can trade REGOs separately from the unit of electricity that came from the renewable asset and attach them to power from fossil-fuel generation. Critics of REGOs argue that they fail to incentivise the building of new renewable sources of power.

Growth in renewable energy is set to continue as countries pursue net zero carbon targets. According to the International Renewable Energy Agency (IRENA), over 75% of onshore wind and 80% of solar PV project capacity due to be commissioned in 2020 required no subsidies and should produce cheaper electricity than any coal, oil or natural gas option.

With renewables growing as a proportion of the overall energy mix, grid operators are finding it more difficult to modulate supply to meet demand. Instead, they are looking to increase the flexibility of the grid on the demand side through a number of strategies – storing, shifting or transporting electricity. Today, more than ever, grid operators need active participation from their largest energy users. This requires businesses to embrace a degree of flexibility in their energy usage that supports network operations but does not negatively impact on business operations.

Demand Response as a resilience strategy
Using time or price triggers to shift demand away from peaks, by implementing demand-side response mechanisms, has proved a successful flexibility strategy for grid operators for several years. Demand Response (DR) is a good fit for data centres, but there are still questions around how participation works with some business models. For example, co-location data centres operate within strict customer service level agreements, and operators are understandably cautious about adopting measures that are perceived as a threat to uptime.

On the other side of the debate, some co-location data centre operators have actively embraced DR. They see participating in DR as part of their resilience strategy, as they receive advance notice of grid problems and can prepare for trigger events or pre-emptively move to back-up power for the duration of the grid instability. On-load testing that more accurately replicates the utility's fail risk is another benefit of working with a partner to participate in flexibility schemes. As well as being a means to more robust resiliency measures, DR provides data centres with capacity payments simply for being on standby; valuable income to help offset energy costs.
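As a rough illustration of how a site might handle a DR event without threatening uptime, the sketch below encodes a simple policy: transfer to back-up power only when enough advance notice is given, and track the standby income. The notice threshold, payment rate and event details are invented for the example and do not reflect any real scheme's terms.

# Illustrative demand-response event handling for a data centre site.
# Payment rates, notice periods and the decision rule are assumed examples,
# not the terms of any real flexibility scheme.
from dataclasses import dataclass

@dataclass
class DREvent:
    notice_hours: float      # advance warning received from the aggregator/operator
    duration_hours: float    # expected length of the grid-stress window
    site_load_mw: float      # load the site can shift to back-up power

AVAILABILITY_PAYMENT_PER_MW_YEAR = 50_000   # assumed standby (capacity) payment
MIN_NOTICE_HOURS = 2.0                      # assumed site policy before transferring

def respond(event: DREvent) -> str:
    """Decide whether to pre-emptively move to back-up power for the event."""
    if event.notice_hours < MIN_NOTICE_HOURS:
        return "Ride through on UPS; do not transfer to generators pre-emptively."
    return (f"Transfer {event.site_load_mw} MW to back-up power "
            f"{event.notice_hours:.0f} h ahead, run for {event.duration_hours:.0f} h, "
            f"then return to grid.")

event = DREvent(notice_hours=4, duration_hours=2, site_load_mw=3.0)
print(respond(event))
print(f"Indicative standby income: £{AVAILABILITY_PAYMENT_PER_MW_YEAR * event.site_load_mw:,.0f}/yr")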

Sustainable growth now requires closer strategic partnerships between data centres, energy companies and the grid

Renewable energy leadership
The industry is showing strong leadership on corporate renewable procurement; the top four corporate off-takers of renewables in 2019 were all ICT companies. At the hyperscale end, heavy energy use is drawing these companies closer to grid operators – often by working with partners to develop strategies that enhance their corporate reputations.

Data centres operating across all business models are looking to Power Purchase Agreements (PPAs) as a route to good grid citizenship. PPAs enable off-takers to procure long-term contracts with operators of renewable assets. However, negotiating PPAs can be technically complex. Key PPA parameters include the term of the agreement; whether the PPA is a corporate arrangement and whether it includes a private wire and/or storage; and how risk is allocated between procurer and generator, including the volume risk. Optimising these parameters to deliver a bespoke agreement that suits both generator and off-taker requires depth of knowledge and experience.
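One practical way to keep track of the parameters listed above is to treat them as a structured record that procurement, legal and engineering review together. The sketch below is only an illustration of that idea; the field names follow the text and every value is invented, so it is not a template for any real contract.

# Illustrative record of the headline parameters in a renewable PPA.
# Field names follow the considerations in the text; all values are invented.
from dataclasses import dataclass

@dataclass
class PPAParameters:
    term_years: int               # length of the agreement
    structure: str                # e.g. "corporate", "private wire", "sleeved"
    includes_storage: bool        # co-located storage bundled with the deal?
    contracted_volume_gwh: float  # annual volume the off-taker commits to
    volume_risk_holder: str       # who carries shortfall/surplus risk
    strike_price_gbp_mwh: float   # agreed price per MWh

example = PPAParameters(
    term_years=12,
    structure="corporate",
    includes_storage=False,
    contracted_volume_gwh=40.0,
    volume_risk_holder="generator",
    strike_price_gbp_mwh=55.0,
)

# First-order annual contract value (ignores shape, balancing and indexation).
print(f"Indicative annual contract value: £{example.contracted_volume_gwh * 1000 * example.strike_price_gbp_mwh:,.0f}")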

Broader energy initiatives
Energy management initiatives extend beyond the matter of supplying power to the data centre itself. Increasingly, businesses are seeking holistic approaches to manage their energy needs. For example, as the use of electric vehicles grows, workplaces are integrating charging infrastructure for employee and visitor use. Smart EV charging can play a role in grid balancing by integrating these more flexible, non-critical loads into an overall energy efficiency plan. A holistic approach to managing energy typically incorporates efficiency measures alongside other initiatives such as PPAs, DR, EV infrastructure and even utility bill management, which provides detailed insights into energy spend that can inform planning.

While major energy users see the benefits of planning a broad approach to energy strategy, finding capital to fund the measures can be a barrier to moving forward. A potential solution is to work with a specialist partner on an 'as-a-service' model. Energy-as-a-Service, or EaaS, helps to overcome the issue of having to find capital to fund improvements and forges a long-term relationship with a partner who can advise and deliver on PPAs, flexibility solutions, energy efficiency measures, utility bill management and so on. As well as monetising the flexibility of energy assets and reducing costs, EaaS enables profitability, improved resiliency, sustainability and better risk management – especially with respect to compliance and market exposure.


Driving grid innovation
As large energy consumers and one of the fastest-growing users of power, data centres have the potential to make a positive impact on grid innovation. Given the data and power industries' interdependence, many now advocate that it's time for data centres to contribute to the overall stability of electricity systems by becoming better grid citizens. Data centre operators have a choice of energy strategies to meet their individual business models and energy demand profiles. Following the right path is key to meeting decarbonisation goals while maintaining industry growth, profitability and resilience.



SPONSORED FEATURE

Delivering reliability
HARTING provides reliable connectivity solutions for data centres.

The worldwide data centre market is experiencing explosive year-on-year growth as our reliance on remote working, computer apps and the Internet of Things increases at a staggering rate. In addition, the changes to our working lives caused by Covid-19 have meant that businesses and individuals need reliable access to data to allow them to embrace new ways of working. Therefore, as we become more reliant on remote or hybrid working models, it's essential that data centres run as smoothly and efficiently as possible.

One method of improving reliability is to use 'plug and play' connectors and pre-assembled cables, which can reduce maintenance time and the overall cost of ownership when compared to hard-wired connections. Field labour is also expensive, and wiring errors can cause additional costs and downtime.


By switching to a HARTING plug and play solution, you can eliminate costly wiring errors and simplify your installations. The Han-Eco from HARTING is an electrical connector which ensures critical power to data systems via a quick and easy installation process. The Han-Eco system can support either power inserts with a built-in ground for safety, or an unparalleled choice of modular inserts, including data, signal and power options, in a single connector. What's more, the Han-Eco will integrate into power distribution units of the future, ensuring safer power connection points, space savings and decreased downtime. The range is manufactured from high-performance plastic, which complies with IEC 61948 and EN 45545-2 standards, and offers IP65 protection and substantial weight savings compared to traditional metal housings, which makes location mounting easier and safer. Additionally, the hoods and housings are suitable for both indoor and outdoor applications, thanks to their resistance to environmental impacts.

HARTING also offers connectorised cable assemblies, which distribute power from the data centre's uninterruptible power supply (UPS) to the power distribution units (PDUs), a streamlined process which reduces costs and improves profitability. These assemblies consist of a cable between one or two connector hoods. Inside the connector is an insert, or multiple inserts, where the conductors from the cable are terminated. The connector hoods then mate with a matching housing wired to the PDU and/or UPS.

When designers hard-wire the conductors inside the cable, this can cause numerous issues for the data centre operator. Firstly, a costly skilled electrician is needed to disconnect and reconnect a hard-wired PDU. Secondly, if there is an error during this process, which is not uncommon, there are additional costs for troubleshooting, as well as unplanned downtime, resulting in lost revenue. When using cable assemblies, there is no need to hire an electrician and, since everything is pre-wired and pre-tested, wiring errors are virtually eliminated. In addition to improved installation, cable assemblies also offer benefits during the design and prototype phase and make access for maintenance easier.

HARTING has 40 years' experience of building cable assemblies and can offer standard or custom cable lengths and a range of plastic or metal housings. All customised products are built at the company's manufacturing facility in Northampton and are based on market-leading connectors from the extensive HARTING range, including Han-Eco and modular solutions. HARTING can also support you with thorough in-house testing of cabling and wiring. No matter what the degree of complexity, it has the capacity to produce project-specific assemblies to suit your needs, including installing components, efficiently routing cable harnesses and fabrication.



To further assist you, HARTING's in-house design team can create a 3D CAD visualisation of your build beforehand to ensure absolute accuracy before manufacturing begins. HARTING's manufacturing facility holds ISO 9001 certification for Quality Management, the ISO 14001 Environmental Standard and UL certification for Wiring Harnesses ZPFW2 / ZPFW8.

Within data centres, generators are required to provide backup power if the main source is interrupted, for example in the case of electricity outages. Alongside the aforementioned growth in demand, these systems have contributed to an increase in the amount of energy required to support data centres. One solution is the sustainable use of renewable energies, such as wind or solar power. However, these renewables can only be harnessed via energy storage modules, as they enable time-delayed, needs-based use of the collected energy.

The new Han S connector range from HARTING makes it easier to connect large arrays of battery storage modules and offers secure connection technology for modular battery storage systems. The compact and flexible housings accommodate contacts for currents up to 200A and 1,500V, and bulkhead housings can be flexibly rotated through 360 degrees. The use of connectors also speeds up the construction of energy storage modules, which use lithium-ion cells. Demand for this method of electricity storage is booming worldwide; according to the Federal Association for Energy Storage, the European market has seen rapid growth in recent years, particularly for home and industrial battery storage.

More companies are recognising the huge benefits of storage modules, which include maintaining an uninterruptible power supply and making savings on energy costs by using delayed load usage

As a result, more and more companies are recognising the huge benefits of storage modules, which include maintaining an uninterruptible power supply and making savings on energy costs through delayed load usage. As demand continues to increase, providers of lithium-ion storage systems need to connect a number of cells. The Han S is specially designed to meet this need, providing simple, fast and safe linking of cells for these systems.

The Han S offers users plug-in connections for storage modules while providing maximum safety, since the design meets all technical requirements and is based on the latest UL 4128 standard for stationary energy storage systems. The entire range meets the highest standardisation level required by the international market. The male contact for the battery module is designed to be finger safe and is equipped with a screw contour with an M8 thread, meaning the contact can be easily installed on the battery module with a socket spanner. The socket contact is crimped onto the cable and then inserted and screwed into the sleeve housing.

These properties are also beneficial for service and maintenance. If a cell in an energy storage module shows a drop in performance, the respective management system can be used to switch it off and have it replaced. To do so, the Han S interfaces of the adjacent modules are simply turned to the locked state to make room for the switch-out. This procedure can be carried out without interrupting the energy storage functions of the neighbouring modules. Han S is the first high-current battery connector that meets the relevant UL standards for stationary energy storage systems. Among others, it fulfils the requirements of UL 4128 for connectors in electrochemical battery system applications, UL 1973 for batteries in stationary applications and UL 9540 for energy storage systems and accessories.

HARTING also manufactures numerous solutions for data network cabling, including the ix Industrial connector, which offers significant internal board-to-cable and I/O panel space savings. This allows more ports to be installed in the same space, meaning data centre operators can install smaller server racks, reducing their cable maintenance costs. For the classic RJ45, the company offers the Ha-VIS preLink RJ45 system, which speeds up and improves the reliability of on-site data network cabling repairs. It can be wired with total safety in one step using HARTING's preLink crimping tool, ensuring a reliable cabling connection. The termination blocks can also be supplied to custom lengths as pre-assembled single or double-ended patch leads, making them simple to install through existing data centre cable trays or conduits.

To learn more about HARTING's wide range of connector and cabling solutions for data centres, please visit www.harting.com/UK/en-gb/datacentre-solutions or send an email to salesuk@HARTING.com and one of our experts will contact you.




INDUSTRY INSIGHT

Industry Insight: The downtime dilemma
How can you achieve near-zero downtime migration for SAP systems? Eamonn O'Neill, Co-Founder and Chief Customer Officer at Lemongrass Consulting, explains.

The criticality and complexity of SAP systems in the grand scheme of enterprise IT is widely known and accepted – and perhaps even viewed with a dose of trepidation. Just thinking about making even the most innocuous changes to SAP workloads or associated infrastructure can cause pulses to race and nerves to rattle, especially if it involves any system downtime. Even the changes and updates needed to maintain system health, stability and security fall victim to the 'catch-22' of not being able to get the required downtime from the business to make them. With the right approach, though, it is possible to drastically minimise scheduled downtime and its impact on the business.

It's important to define what's meant by 'near-zero' downtime. The term itself isn't new, but expectations have grown significantly over the years, as has the need to minimise the impact of downtime on the business. We live in an 'always on' culture, with global firms working across countries and time zones to keep productivity and utilisation levels at their highest. However, it's important to be realistic. There will usually be some downtime resulting from the changes needed to adapt, evolve and remain competitive – but the amount of acceptable downtime is unique to each business. So, essentially, near-zero downtime means creating the shortest possible acceptable period (or periods) of business interruption, with the goal of having almost no impact on any workloads running on critical systems.

So, how do SAP customers begin to establish their approach to a near-zero downtime migration?

Near-zero downtime means creating the shortest possible acceptable period (or periods) of business interruption with the goal of having almost no impact on any workloads running on critical systems

A goal without a plan is just a wish
To begin, gathering a detailed report on your current application infrastructure, SLAs and architecture is imperative in order to understand where you are now and the steps needed to get to where you'd like to be. By developing checklists, task lists and detailed orchestration and execution frameworks that support progress and milestone-based reporting, nothing is left to chance. You should workshop concerns and validate your plan with a proof of concept (POC) so that full transparency is established across your business. A methodology built on a well-architected framework, with business leader buy-in baked in, means there are fewer surprises along the way. In addition, the knowledge gathered in this critical exercise helps to identify some early, quick wins and sets you up for a migration with the least amount of downtime.
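The checklist-and-milestone discipline described above can be kept very lightweight. The sketch below shows one generic way to track progress per workstream; it is not Lemongrass's framework, and every milestone and task name is invented for illustration.

# Minimal milestone tracker for a migration plan - an illustrative structure only.
from dataclasses import dataclass, field

@dataclass
class Milestone:
    name: str
    tasks: dict = field(default_factory=dict)   # task name -> done?

    def progress(self) -> float:
        """Fraction of tasks in this milestone that are complete."""
        return sum(self.tasks.values()) / len(self.tasks) if self.tasks else 0.0

plan = [
    Milestone("Discovery", {"Inventory SAP landscape": True, "Capture SLAs": True}),
    Milestone("Proof of concept", {"Select candidate system": True, "Run POC migration": False}),
    Milestone("Cutover readiness", {"Sign off runbook": False, "Schedule downtime window": False}),
]

for m in plan:
    print(f"{m.name:20s} {m.progress():.0%} complete")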





Achieving near-zero downtime
Large, heterogeneous migrations are notorious for struggling to achieve near-zero downtime. In fact, databases that exceed 5TB often face technical downtime windows that can exceed five business days, due to factors such as source hardware capabilities, cloud connectivity capacity and data volume. The benefit of leveraging a migration platform built to support near-zero downtime is that business disruption is reduced to an acceptable level. This can be achieved by generating an in-depth analysis of SAP table structures to identify which tables can be migrated online in advance of a cutover weekend (a simplified sketch of this kind of table triage appears below). In addition, tools such as LCP Migrate can guarantee a maximum technical downtime to fit into even the most constrained windows. Because tools like LCP Migrate leverage SAP standard tools, such as R3load, R3ta and migmon, continued support from SAP is assured. As an example, Lemongrass recently conducted a migration for a large retail company with a 31TB database that experienced only 12 hours of technical downtime using LCP Migrate, instead of the four or more days that would typically be needed without the tool.

Test and learn
Leveraging the cloud makes it easy to conduct a small POC to test outcomes, understand the implications, and then move forward. By making the most of this flexibility, organisations can test and learn in real time and make the changes necessary to keep any project moving forward. Adopting test-driven development practices that exhaust all possibilities means that any potential issues are dealt with before the real work needs to begin.

Automate always
Automation is essential to near-zero downtime in order to lower risk and accelerate timelines. Through automation, teams execute tasks at a much faster pace while reducing manual error. Not only does this reduce downtime, but it also frees up resources to focus on the specialist areas that cannot easily be automated and that may need the attention of experts before they can be transitioned.

Other tips and tricks
In addition, your migration project can be combined with some housekeeping, including upgrading outdated systems and removing technical and/or historical data that is no longer needed. Optimising the systems and migrating only the data that is needed means less time is required for the migration itself. You can also reduce technical downtime requirements by migrating application tables partially during business uptime or, if your cloud provider has them available, by using robust, reusable migration pattern assets for a faster, more seamless transition.
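To make the table-level thinking above concrete (copying static data during business uptime and leaving only frequently changing tables for the cutover window), here is a deliberately simplified sketch. The classification rule, the 500 GB/hour throughput and the table list are all assumptions for illustration; this is not how LCP Migrate or the SAP standard tools actually work.

# Simplified triage of database tables for a near-zero downtime migration.
# The rule of thumb, throughput and table data are illustrative assumptions only.

# (name, size_gb, changes_frequently) - static/historic tables can be copied
# ahead of the cutover; frequently changing tables wait for the downtime window.
tables = [
    ("audit_history", 9000, False),
    ("document_archive", 14000, False),
    ("finance_line_items", 6000, True),
    ("sales_orders", 2000, True),
]

THROUGHPUT_GB_PER_HOUR = 500.0   # assumed effective export/import rate

pre_copy = [t for t in tables if not t[2]]
cutover = [t for t in tables if t[2]]

cutover_gb = sum(size for _, size, _ in cutover)
downtime_hours = cutover_gb / THROUGHPUT_GB_PER_HOUR

print(f"Copied online ahead of cutover: {sum(s for _, s, _ in pre_copy):,} GB")
print(f"Left for the downtime window:   {cutover_gb:,} GB")
print(f"Estimated technical downtime:   {downtime_hours:.1f} hours")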

Automation is essential to near-zero downtime in order to lower risk and accelerate timelines

It's time for near-zero downtime
An experienced SAP service provider can suggest an array of activities that will ensure your IT modernisation project not only delivers to your expectations but also, with the right planning, does so with the least possible impact on your business, customers and stakeholders. Working with a tenured team of SAP professionals who have found success in migrating, operating, innovating and securing enterprise systems is key to achieving not only near-zero downtime, but an SAP system that is efficient, lean and innovative.



If you sell products or work on projects within Power, Lighting, Fire Safety & Security, Energy Efficiency and Data Centres, make sure you enter:

Visit awards.electricalreview.co.uk
Your business and your team could be celebrating in the spotlight at the ER & DCR Excellence Awards Gala Dinner on May 19, 2022 at the breathtaking Christ Church Spitalfields in London!


Sponsors:

Entertainment Sponsors:

Following the success of our second annual Excellence Awards in 2019, we invite electrical manufacturers, contractors and project owners to enter the 2022 Awards. The recipients of 2019 Awards included: TXplore robotic transformer inspection service – Entry by ABB Power Grids; UPS, temperature control, monitoring and diagnostics at Cineca, Italy - Entry by Vertiv; BS67 Smart Ceiling Rose - Entry by Adaptarose Ltd; University of Northampton - Entry by Simmtronic Lighting Control; The Hot Connection Indicator - Entry by Safe Connect; Combined heat & power at DigiPlex Stockholm data centre - Entry by DigiPlex; Refurbished servers at WINDcores, Germany - Entry by Techbuyer; HyperPod Rack Ready Data Centre System - Entry by Schneider Electric; 4D Gatwick cooling upgrade - Entry by 4D Data Centres Ltd; Green Mountain Colocation Supplier - Entry by Green Mountain

SPONSORSHIP OPPORTUNITIES AVAILABLE

Contact us: +44 (0) 207 062 2526

The Awards include the following categories:
Power - Product of the Year - Sponsored by Omicron
Power - Project of the Year - Sponsored by Omicron
Lighting - Product of the Year
Lighting - Project of the Year
Fire Safety & Security - Product of the Year
Fire Safety & Security - Project of the Year
Energy Efficiency - Product of the Year
Energy Efficiency - Project of the Year
Innovative - Project of the Year
Sustainable - Project of the Year
Data Centre Design & Build - Product of the Year - Sponsored by Centiel
Data Centre Design & Build - Project of the Year
Data Centre Cooling - Product of the Year
Data Centre Cooling - Project of the Year
Data Centre Colocation - Supplier of the Year - Sponsored by Vertiv
Technical Leader of the Year
Consultancy/Contractor of the Year - Sponsored by ECA
Outstanding Project of the Year - Sponsored by Riello UPS
Outstanding Product of the Year - Sponsored by Riello UPS

Visit the website to check out this year’s awards and submit your entries by March 6, 2022.

awards.electricalreview.co.uk



PRODUCTS

New XCP High Power Busbars from Starline

Starline has unveiled the XCP High Power Busbar, designed for data centre grey space. The busbar is designed for installations that require 630-6,300A for continuous duty at 1,000V and ambient operating temperatures up to 55°C – all while being deployed within compact dimensions. Complementing the operational features, the XCP High Power Busbar's proprietary software guarantees fast and accurate layout/REVIT development. The new XCP High Power Busbar offers:

Flexibility: Extra-compact, efficient and easy to install, with features and accessories that help simplify the planning and design of mission-critical infrastructures. With copper or aluminium conductors, a full range of components and power connections, as well as tap-offs up to 1,250A, the XCP product line can be configured and customised for any project design, including multi-floor distribution.

Performance: A unique design using high-end materials that enables industry-leading ambient temperature ratings, low electrical losses and negligible electromagnetic emissions, resulting in world-class reliability and cost savings.

Safety: Certified and manufactured in accordance with IEC 61439-1 and -6, with features such as fire and seismic resistance, high short-circuit ratings, excellent ingress protection and superior insulation technology to help ensure the safety of the operations team.

Starline • info@starlinepower.com • www.starlinepower.com

Go beyond building automation

The ILC 2050 BI industrial controller from Phoenix Contact is designed for the most demanding applications in buildings, infrastructure and data centres, using the Niagara 4 Framework. The integrated Niagara Framework enables IIoT-based automation through standardisation of various data types. This makes it easy to connect with various sensors and actuators, regardless of the manufacturer and communication protocol.
• Minimised commissioning costs thanks to different protocols on one controller
• Easy programming using drag-and-drop within the Niagara 4 Framework
• Real-time, on-premise analytic control thanks to integrated Niagara Analytics
• Cost-effective operation thanks to web-based maintenance, monitoring and programming
• A compact footprint in the panel, and therefore cost savings.
Go beyond building automation with the ILC 2050 BI, providing industrially-hardened control and modular I/O running the Niagara 4 Framework.

Phoenix Contact • info@phoenixcontact.co.uk • www.phoenixcontact.com

Kohler launches range of Power Optimised Design Solutions

Kohler has launched its range of walk-in Power Optimised Design Solutions (PODS) in response to increased market demand for high-power gensets. They offer the highest standards of performance, reliability, robustness, safety, modularity and competitiveness. Crucially, their size allows for enough internal cooling power to accommodate Kohler's KD SERIES generators, giving customers the ability to utilise the most powerful generators on the market without compromising on installation and maintenance.

The walk-in PODS have generous spacing, with a 4m width and height for the base module. The enclosures offer optimal access to the different elements of the diesel genset. Single swing doors with locks and anti-panic bars facilitate daily access into the enclosure, and a push-button located near the access doors controls the interior lighting system.

Adherence to noise standards was a critical design consideration, with 85 dB(A) sound reduction at 1m, and 75 and 65 dB(A) configurations also available. Soundproofing panels are made of mineral wool with an M1-class fire rating, covered by glass fibre and metal sheet. Rain barrier grilles fitted with an anti-volatile barrier protect air inlets and outlets from harsh weather conditions. A specialised primer coat and polyurethane finish enhance the durability of the enclosures.

Kohler Power • 01865 863 858 • www.kohlerpower.com



FINAL SAY

Keeping the edge customer-focused

In a world where micro data centres need to be resilient, agile and, often, hybrid, a lack of flexibility is causing disruption at the edge, says Andy Connor, Director – EMEA Channel at Subzero Engineering.

For many years, the data centre industry has been engaged in a deep discussion on the concept of edge computing. Yet the definition varies from vendor to vendor and from customer to customer, creating not only mass confusion, but a fixed mindset in terms of solutions design. One might argue that, through its lack of a true definition, the subjective nature of the edge has led the industry down an often singular path, where edge technologies have been designed to hypothetically meet customers' needs, but without the application in mind. IDC defines the edge as the multiform space between physical endpoints, such as sensors, and the 'core' – the physical infrastructure of servers, storage and compute within cloud locations and data centres. Yet within more traditional or conservative sectors, some customers are yet to truly understand how the edge relates to them, meaning the discussion needs to change, and fast.

Defining the edge
When the trend of edge computing began to gain traction, the Infrastructure Masons were one of the first to try to define it. But even they recognised that its largely subjective nature was beginning to cause market confusion, and stated that a widely accepted definition would become more essential as the industry began to confront the challenges that will arise at the edge. What's clear is that the business case for edge technologies is becoming more prevalent, and according to Gartner, "by 2022, more than 50% of enterprise-generated data will be created and processed outside the data centre or cloud." All this data invariably needs a home, and depending on the type of data that is stored, whether it's business or mission-critical, the design and location of the infrastructure will undoubtedly need to vary.

One size fits all?
Today in our industry, there's a very real danger that, when it comes to the edge, many end-users will be sold infrastructure defined by the manufacturer and not based on the customer's needs. And that's because edge solutions are often found in one size, type or variable form factor. This creates a market whereby potential customers are persuaded that 'one size fits all', and that's a far cry from the modular and agile approach that the industry has turned towards in recent years. The reality is that the edge has almost as many definitions as there are organisations trying to define it.


And, while there are a range of well-defined and well-understood edge applications already in use – such as micro data centres in retail locations, or localised infrastructure providing low-latency content delivery to avid viewers – there are many edge applications yet to be fully understood, defined or implemented. Many existing edge applications remain unpredictable in terms of their data centre and IT resources, and often local infrastructure is required to support the continued roll-out of a service looking to scale. In summary, most, if not all, organisations are faced with making frequent decisions about the best place to build, or access, edge infrastructure resources. And in today's dynamic, digital world, such decisions need to focus on the customer's business requirements, providing them with a flexible, agile and optimised architecture that's truly fit for purpose.

Finding flexible solutions
A standard-size container or micro data centre might be far too big for the business' needs – but the assumption is that maybe the user will grow into it. Then there's the question of customisation. What if the solution needs to be enabled for liquid-immersion cooling to support GPU-intensive computing at the edge? Not every micro data centre architecture can be built for that technology, and certainly not if the customer needs to scale quickly. There's also the question of cost. Micro data centres in standard form factors, or pre-integrated systems, often contain CAPEX-intensive server and storage technologies from manufacturers defined by the vendor. This, again, is a far cry from a solution that is defined to meet the business needs. In our industry, relationships are everything, and one must acknowledge that customers will want to specify power, cooling and IT infrastructure from their own choice of suppliers, and at a cost that meets their budgetary requirements.

At Subzero Engineering, we believe customers need a solution that supports their business criteria, and one that helps them capitalise on the emerging opportunities of the edge. What's more, we believe that containerised edge data centres, which are optimised for the application, built ready to scale and vendor-neutral for any type of infrastructure, are those that can truly meet the needs of the end-user. What's clear is that, with the advent of edge computing, the customer needs to define their edge. And as design and build consultants, our goal must be to support their needs with flexible, mission-critical solutions.


The invaluable resource for electrical professionals, informing the industry for over 140 years.
Register now for your free subscription to the print and digital magazines, and our weekly enewsletter.

SUBSCRIBE FOR FREE TODAY

www.electricalreview.co.uk/register