UPS/Standby Power Reducing the risk of battery failure
Are data centres ready for 5G?
The future of the cloud-based digital workplace
News
04 • Editor’s Comment Is it safe to come out now?
06 • News Latest news from the sector.
Features
12 • Cooling Could thermal insulation be the key to sustainable data centre cooling? Tom Merton, Technical Specialist at Armacell, tells us more.
16 • UPS/Standby Power Dave Sterlace, Head of Technology at ABB Global Data Centre Solutions, looks at how advancements in UPS technology can help maximise efficiencies in data centres.
24 • Telecoms/5G David Keegan, CEO of DataQube, explains how the IoT and edge computing are driving demand for 5G-ready data centres in the advent of all things ‘smart’.
26 • Cloud
Gary Bennion, Managing Director at CloudM, outlines the future of the cloud-based digital workplace, and the benefits of a multi-vendor approach.
Regulars
36 • Industry Insight How can data centres stand up to fuel supply shocks, asks Dom Puch, UK Data Centre Lead at Turner & Townsend.
40 • Products Innovations worth watching.
42 • Final Say Davide Villa, Business Development Director EMEAI at Western Digital, looks back at the history of the data centre and how data storage has shifted with the times.
Editor’s Comment

Is it safe to come out now?

Okay, so I don’t want to jinx it, but it’s looking like Omicron might be one foot out the door. So barring any new variants (just one of those inspirers of low-level dread keeping me up at night, alongside climate change, economic collapse, and old-age incontinence), let’s try and be positive about the year ahead.

One thing the pandemic has done, apart from making us all agoraphobic alcoholics, is change the way we think about data – how and when we use it, how much we consume, and concern about how our rapacious demand for it is impacting the world around us. It’s given us cause for conversation, to cast a critical eye over how we do things, and get creative; innovation is the way we move forward as a sector.

That’s why this issue takes a dive into facing the challenges ahead: the future of the humble UPS; data centre cooling and sustainability goals; and the challenges 2022 has in store for the cloud.

Innovation is also one of the things we want to highlight in our upcoming Electrical Review and Data Centre Review Excellence Awards, for which we are currently judging the entries – so keep your eyes peeled for updates on the event and how you can get involved.

As always, you can drop me an email at kayleigh@datacentrereview.com, and come on over and join us on Twitter @dcrmagazine.

Kayleigh Hutchins, Editor
Kayleigh Hutchins firstname.lastname@example.org
Jordan O’Brien email@example.com
DESIGN & PRODUCTION
Alex Gold firstname.lastname@example.org
GROUP ACCOUNT DIRECTOR
Sunny Nehru +44 (0) 207 062 2539 email@example.com
Kelly Baker +44 (0) 207 062 2534 firstname.lastname@example.org
GROUP COMMERCIAL DIRECTOR
Fidi Neophytou +44 (0) 7741 911302
Wayne Darroch
PRINTING BY Buxton
Paid subscription enquiries: email@example.com
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262
Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher’s discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register
Electrical Review is published by SJP Business Media, 2nd floor, 123 Cannon Street, London EC4N 5AU, 0207 062 2526
Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher – SJP Business Media
ISSN 0013-4384 – All editorial contents © SJP Business Media
Follow us on Twitter @DCRmagazine
Join us on LinkedIn
4 www.datacentrereview.com Q1 2022
SERVERCHOICE EXPANDS STEVENAGE DATA CENTRE
The latest highlights from all corners of the tech industry.
ServerChoice has opened a new data hall as demand increases at its Stevenage data centre. The Stevenage site is ServerChoice’s third location alongside its data centres in Harlow and Central London. The new hall will allow the facility to expand its operations and capacity. The hall will provide an extra 47 racks and utilise hot aisle containment to provide greater efficiency with no limit on rack loading. Each rack will receive 2x 32A (15 kW) of power, backed up by a dedicated UPS and generator. ProjectJet, a colocation connectivity platform which uses the latest Juniper fusion clusters, will provide connectivity for the new hall. Adam Bradshaw, Commercial Director at ServerChoice, said, “We are proud to be expanding our offerings at ServerChoice with a new data hall. We’ve seen demand increase for services from our Stevenage data centre, and this new expansion will help us continue to deliver for our customers.”
BAI signs deal with BT to provide connectivity for TfL
The multi-million-pound contract between BAI Communications and BT Wholesale will see delivery of 4G and 5G coverage on the London Underground. This follows an agreement last year in which BAI signed a 20-year concession with TfL to deliver full mobile and digital connectivity across the London Underground, within stations and on the Tube itself. This new deal means BT Wholesale will provide BAI with data centre space in London to roll out its ‘neutral host’ mobile network, which will provide 4G and 5G-ready mobile infrastructure for all mobile operators. Three and EE, BT Group’s mobile arm, have already signed up to use the network, and it will be available to use by all mobile operators. Coverage will begin on the Elizabeth Line in late summer 2022, with the aim to expand across the entire Underground by the end of 2024. Billy D’Arcy, CEO of BAI Communications UK, said, “This deal marks a significant step in our progress towards delivering high-speed mobile coverage across the London Underground network, with BT’s data centres playing an essential role in helping London leapfrog other major cities in terms of connectivity. BT Wholesale’s services will support our neutral host infrastructure in transforming the experience of customers with all UK mobile operators, providing seamless, 5G-ready coverage that will allow passengers to move around the capital more smartly, safely, and securely.”
BT FORMS CLOUD PARTNERSHIP WITH RACKSPACE
BT has selected Rackspace Technology to develop its cloud services for customers. BT’s hybrid cloud services will be based on Rackspace Technology’s solutions, and will be deployed in BT data centres along with its Rackspace Fabric management layer. Rackspace Technology will integrate its cloud management expertise and automation, analytics, and AI tools with BT’s network and security capabilities. The companies have also committed to extend their partnership in the future to create new joint cloud solutions.

RAMBOLL ACQUIRES EYP MISSION CRITICAL FACILITIES
Consulting engineering group Ramboll has announced its acquisition of data centre consultancy EYP Mission Critical Facilities (EYPMCF). The 50-person strong EYPMCF supports global clients in storing, securing and moving digital assets. Following this acquisition, Ramboll offers over 100 dedicated data centre experts across the globe and expects its revenue from data centre consulting services to reach USD 60 million in 2025. Ramboll hopes the move will strengthen its position on the international stage of data centre consultancies and contribute to the aim of reducing the carbon footprint of the industry.
VERTIV JOINS RISE PARTNERSHIP PROGRAMME
Vertiv has joined the Research Institutes of Sweden (RISE) Partnership Programme to help develop sustainable solutions for the data centre sector. Based in Luleå, Sweden, the state-backed research centre is supported by EU funds. It promotes industrial research and innovation, with the aim of advancing sustainability efforts. Vertiv’s key role will be in supporting the centre’s Infrastructure and Cloud research and test Environment (ICE), which provides access to industry studies and expertise as well as publications and demonstrations by RISE. RISE provides collaborating partners access to a large-scale test environment, complete with data centre modules, climate and heat boxes, wind tunnels, and edge and liquid cooling testbeds for simulations and demonstrations for data collection and analysis. Vertiv will also support RISE heat recovery initiatives to develop the use of heat generated from data centres for various applications, like mealworm and vertical farming, biomass drying and district heating systems.
UK AI skills gap deepened by demand for bespoke solutions
Research from Rackspace Technology suggests that the AI skills gap is being widened by demand for in-house, bespoke artificial intelligence. The study, which surveyed UK IT leaders, indicated that the majority of organisations were looking to build AI solutions from scratch, with just 10% choosing off-the-shelf options. The result is a rising demand for AI skills, with 86% of respondents seeking to hire people with AI and Machine Learning (ML) expertise – the latter being the most difficult to recruit.
Over half of respondents had included AI and ML as part of their current business strategy, with 71% of companies reporting a benefit to their operations thanks to AI and ML. Those with already implemented AI and ML solutions reported a positive impact on revenue (74%), the ability to reduce expenses (69%), brand reputation (69%) and brand awareness (70%). Significantly, 87% of those questioned reported that they were dedicating more than 6% of their annual IT budget to AI and ML initiatives in 2022 – a 55% increase on those that committed that proportion of their budget in 2021.
Proximity opens new data centre in Birmingham
Proximity Data Centres has unveiled a new edge data centre in Birmingham – Proximity Edge 8. The site is situated close to major fibre networks, including BT, ITS, Lumen, Virgin and Zayo, as well as various regional fibre providers. The facility is accredited to Uptime Institute Tier 3 specifications and has capacity for up to 2,000 racks in three separate data halls. It provides 6 MW of IT power with the potential to increase to 12 MW. “Located midway between London and Manchester, our new Birmingham edge colocation facility will serve as an ideal low latency communications hub for organisations such as hyperscalers, CDNs and gaming providers looking to bring data, services and content closer to users around the country – while also optimising data transit efficiencies and costs,” said John Hall, Managing Director, Colocation at Proximity Data Centres. Proximity expects to have 20 data centre sites across the UK available within the next 12 months.

GLOBAL TECHNICAL REALTY ANNOUNCES FIRST UK DATA CENTRE CAMPUS
Global Technical Realty (GTR) has broken ground on its first UK data centre campus, GB One. GB One will be located at Slough Trading Estate, with estate owner SEGRO developing the building shell. Upon completion, GB One will be made up of three discrete data centres capable of operating individually or as one interlinked facility. Each building will provide 13.5 MW of IT load. “The data centre sector is experiencing phenomenal growth and is evolving at a rate we’ve never seen before,” said Franek Sodzawiczny, Founder and CEO at Global Technical Realty. “GTR is in the unique position of having an equity capital commitment with KKR’s third global infrastructure fund. This gives us the flexibility to not only fund projects rapidly but to also offer continuity of support to our customers across multiple facilities and locations. We are motivated to get our first UK data centre up and operational and are delighted to be partnering with SEGRO to help achieve this.”
The invaluable resource for electrical professionals, informing the industry for over 140 years. Register now for your free subscription to the print and digital magazines, and our weekly e-newsletter.
SUBSCRIBE FOR FREE TODAY
Provisioning the digital transformation of Newcastle City Council services
Newcastle City Council turns to Schneider Electric and its partner Advanced Power Technology for data centre resilience and system visibility.
Newcastle City Council recently transformed its data centre operations, consolidating its main IT systems into a single data hall, with upgraded power and cooling infrastructure and new management software by Schneider Electric. In the process, it has improved resilience and uptime, simplified the management of all of its infrastructure equipment, and made part of its data centre available to other organisations, which helps to offset the costs of its operations.

Setting the scene at Newcastle City Council
Newcastle City Council employs over 5,000 people providing local-government services to citizens throughout the city. Its data centre hosts numerous applications, including those supporting council tax collection, social services, library services, education and road traffic management. It also has links with the IT systems of other essential public service bodies, such as the NHS and police. Given the vital nature of these services, the council’s IT systems must run reliably around the clock and any downtime will have a significant effect on the local populace.

The challenge: tangled legacy issues
The council’s IT systems had grown steadily over the years to support the evolution of its e-Government approach with the automation and digitisation of many of its activities. But the situation had evolved to the point where the data centre layout had become haphazard and disorganised, many infrastructure elements were nearing their end of life and in need of regular maintenance, and management of the infrastructure was labour intensive and time consuming.

“We had three different server rooms with links between them,” said James Dickman, Senior ICT Solutions Analyst at Newcastle City Council. “Telecoms routers were in one room and servers in another, so it was difficult to manage them. We also had separate UPS systems in each room, and air handlers for cooling, many of which were old and in need of replacement.
“Also, we had the inevitable ‘spaghetti effect’ of legacy systems with numerous cables installed under the floor over many years, now causing choke points and becoming very difficult to manage and maintain.”
As a public body we are always looking for cost and energy efficiencies. Schneider Electric and APT were able to design and deliver an overall data centre solution that meets our needs and our expectations. The new facility enables us to meet our service commitments to all stakeholders while minimising the carbon impact of delivering IT services

The solution: standardisation and consolidation
As part of a refurbishment of its Civic Centre, Newcastle City Council consolidated its data centre into a single room with a raised modular floor. Following a competitive tender, the council chose EcoStruxure for Data Centres, Schneider Electric’s IoT-enabled, open and interoperable system architecture for the new facility. The data centre was designed and built by Schneider elite partner, Advanced Power Technology (APT).

The new integrated data centre infrastructure solution incorporates a variety of equipment from Schneider Electric, including APC NetShelter racks, Galaxy range UPS and PDUs, and monitoring and management software. 40 NetShelter IT racks are installed in three aisles with cold aisle containment to optimise cooling efficiency. For uninterruptible power, Newcastle City Council has standardised on the Galaxy range UPS, specifically the Symmetra PX 250 modular system. In N+1 redundant configuration, the new UPS solution enables Newcastle City Council to scale power protection and runtime as its business requirements evolve and change.

Standardisation on the UPS has greatly improved the data centre’s ability to withstand power outages. “Previously, we were able to withstand a loss of power for about 20 minutes,” added Dickman. “Now we can operate for three hours on batteries, if needs be. We also have a backup generator, which we didn’t have before, to provide alternative power in the event of a lengthy loss of our mains supply.” Dickman continued, “Our resilience and uptime have been greatly improved.
On one occasion recently, there was a power outage which affected many buildings close to the Civic Centre where the data centre is housed. But the UPS systems took over, the backup generator came online when it was needed and 20 minutes later, the system rectified itself once power was restored. Nobody knew there had even been an issue until I checked the system logs the following morning.”

A further benefit of the EcoStruxure for Data Centres solution is a more effective approach to data cable management. More structured cabling provides greater certainty about connectivity within the data centre, reducing complexity and the potential for human error, improving maintenance and serviceability with easier and safer access. The cable management solution also increases cooling efficiency by improving airflow in the cabinets, as well as providing improved scalability by simplifying moves, additions and changes in the space.

EcoStruxure IT aiding Newcastle City Council to make the most of its data centre power
The new data centre is managed using Schneider Electric’s next-generation data centre infrastructure management (DCIM) software, EcoStruxure IT Expert. In addition, the technical environment is being monitored using an APC NetBotz appliance together with temperature and humidity sensors. The visibility this gives to the operation of the data centre is a marked improvement on the previous monitoring capability.

Daniel Lynch commented, “We did have various monitoring systems in place before, but they were not integrated, and we still had to perform manual checks to make sure everything was functioning properly. Now there are sensors in each one of the racks allowing them to be monitored constantly. We also have CCTV in the data centre which we never had before, so that we can be alerted to any security issues.”
The case for insulation Could thermal insulation be the key to sustainable data centre cooling? Tom Merton, Technical Specialist at Armacell, tells us more.
As organisations across the UK unite to help honour the pledges made by the Government at the COP26 United Nations Climate Change Conference to limit temperature increases above pre-industrial levels, data centre managers face the dual challenge of keeping their facilities cool while limiting energy consumption and their carbon footprint. A 2020 study published in the journal Science found that data centres account for 1% of total energy use worldwide; according to a 2021 Techerati report, they are also responsible for up to 2% of global CO2 emissions.

Server rooms must be kept cool in order to provide optimal operating temperatures for the electrical equipment within, allowing it to function safely and reliably. If computers get too hot, they automatically switch off to avoid damage. The industry tends to work to the ASHRAE guideline of an ambient temperature of 18-27°C. As the equipment itself generates a vast amount of heat – especially as new technologies feature higher density chips and rack densities increase – refrigeration and air conditioning systems are deployed to provide intensive data centre cooling around the clock.

This year, steep hikes in energy prices make the issue of energy efficiency particularly pressing, with the research firm Cornwall Insight predicting in October 2021 that UK energy bills could rise by as much as 30% in 2022 if the cost of gas and electricity continues to soar.

Hot aisle to cool down
A widely used energy-efficient means of data centre cooling is the use of hot aisle containment computer room air conditioning (CRAC) systems, which channel cold air to specific areas of the room, pulling cool air through the server racks and removing heat from the equipment before returning it to the CRAC unit via a predefined ‘hot aisle’ route. This is a more sustainable technique than the traditional method of data centre cooling, which involves the pressurisation of space below a raised floor and transmission of cold air through perforated tiles into the main data room space.

But data centre managers can go one step further than deploying an energy-efficient hot aisle CRAC solution to improve power usage effectiveness (PUE) and meet stringent sustainability regulations, by taking care to insulate it properly, both to prevent heat gains as warm air escapes from the system back into the room and to guard against the condensation that can form on the surface of cool pipes. Insulation is among the least expensive methods of reducing CO2 emissions.

Closed-cell insulation
It is widely known that flexible elastomeric foam (FEF) is a more practical insulation than rigid materials such as phenolic, polyisocyanurate (PIR) and extruded polystyrene (XPS), which are difficult to fit securely to provide a fully sealed system.
But when it comes to energy efficiency, not all FEFs are equal, and closed-cell insulation performs best. Closed-cell FEF insulation comprises millions of tightly packed, closed, air-filled cells, each bonded to those around it, creating an impenetrable barrier to both air and moisture and precluding the need for an external water vapour block.

The rate of thermal ageing – the process by which materials lose a percentage of their thermal resistance over time – in closed-cell insulation is extremely slow when compared to open-cell alternatives; a low level of thermal conductivity is maintained, allowing the CRAC system to operate energy-efficiently long-term. In recent Armacell research conducted by the Fraunhofer Institute for Building Physics in Stuttgart, it was found that over a period of 10 years, the thermal conductivity of our closed-cell FEF insulation, AF/ArmaFlex Class 0, rose by only around 15%; over the same period, that of open-cell mineral wool rose by 77% and polyurethane (PUR) by 150%. As a result of the poor performance of mineral wool in particular, its use on refrigeration pipes is restricted in some European countries, including Germany and Belgium.

An accessible alternative
Closed-cell FEF also prevents condensation forming on the pipework that carries cooling liquid or refrigerant. Condensation is the enemy of insulation for several reasons, not least because damp insulation does not insulate properly and the CRAC system needs to work overtime to maintain a cool ambient temperature. It can also corrode the insulated pipes to the point where they fail to function safely and efficiently and require replacement, and it goes without saying that data centre equipment must not come into contact with dripping water. Open-cell insulations are more prone to condensation as their structure allows air to reach the surface of cooler pipes, so they require the application of external water vapour barriers.

A further advantage of closed-cell insulation is its hygiene, and this is vital as data centres are ‘clean room’ environments where insulation must not release dust or fibres that could interfere with the performance of the equipment.

There are many unique data centres around the world that utilise ingenious sustainable methods of maintaining a constant temperature, ranging from those based under water, in nuclear bunkers, in 19th century cathedrals and in the Arctic Circle. These options are not available to the majority of organisations, but the judicious insulation of CRAC systems with closed-cell material provides an accessible and cost-effective alternative. The selection of products such as pipe supports featuring recycled material, which require less energy to manufacture and generate less CO2 than conventional structural materials, can further support the goal of sustainable data centres.
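The practical effect of those ageing figures is easy to work out: the 10-year conductivity is simply the initial value multiplied by one plus the quoted rise. A minimal Python sketch, assuming an illustrative common starting conductivity of 0.033 W/(m·K) for all three materials (a value chosen only to isolate the ageing effect, not taken from the Fraunhofer research):

```python
# Ten-year thermal-conductivity rises cited in the article
ageing_rise = {
    "closed-cell FEF (AF/ArmaFlex)": 0.15,  # +15%
    "open-cell mineral wool": 0.77,          # +77%
    "polyurethane (PUR)": 1.50,              # +150%
}

# Illustrative common starting conductivity in W/(m.K),
# used so the comparison reflects ageing alone
LAMBDA_0 = 0.033

for material, rise in ageing_rise.items():
    aged = LAMBDA_0 * (1 + rise)
    print(f"{material}: {LAMBDA_0:.3f} -> {aged:.3f} W/(m.K)")
```

In reality each material starts from its own conductivity, so the aged figures are indicative only; the point is that a 150% rise more than doubles the material's conductivity, while a 15% rise barely moves it.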
Changing the metrics
Larry Kosch, Director of Product Marketing at Green Revolution Cooling, explores how immersion cooling can impact data centre metrics, and why we need to rethink how we measure efficiency.
Power Usage Effectiveness (PUE) is well understood – it’s been a useful metric for determining energy efficiency since it was created by The Green Grid (TGG) in 2007 and adopted by the ISO as a KPI for data centres. The manner in which compute is cooled is an enormous contributor to PUE; and, since the dominant cooling technology for data centres has been chilled air, this measurement assumes air cooling as part of the equation.

With sustainability and carbon footprint considerations driving modern data centre design decisions, liquid immersion cooling has emerged as the pre-eminent way to conserve energy while eliminating heat. Yet PUE is based on the old norms of data centre architecture and design, and isn’t the best metric moving forward. Data centre owners and operators use PUE to calculate energy use in liquid cooled facilities because it is relatable and familiar. As more data centres migrate to liquid cooling to improve efficiencies and move toward more sustainable operations, it is time to rethink how we measure performance.

We know that moving from chilled air to immersion is a fast way to gain PUE efficiency. But does it provide a useful measurement for comparing liquid cooled data centres to each other? In fact, we need other metrics to get a more granular and more accurate picture of how all cooling technologies impact power use and to enable better decisions.

This is a transformational moment for data centre design, efficiency and sustainability. PUE is the classic case of trying to fit a square peg into a round hole, and it is time to think differently about how we measure efficiency. There is radical change in how data centre operators reduce the temperature of their valuable computing equipment, to feasibly enable them to balance the competing needs of sustainability, regulatory compliance, costs, capacity issues, and computing power. Liquid immersion cooling is necessary for this type of change, and the old tools no longer apply.
Beyond PUE
With so many new technologies changing the essence of the data centre, it is crucial to find the most useful metrics to enable data centre owners and operators to make decisions. PUE is a blunt tool, measuring overall power use but not looking at the impact of individual contributors nor weighting the value of the type of cooling solution – chilled air versus liquid. Today, data centres are necessarily focused beyond simple energy efficiency. To remain competitive, data centres are also concentrating on decarbonisation and the use of renewable energy sources. So, if PUE is obsolete, what are the popular metrics that are lining up to replace it?
Carbon Usage Effectiveness
CUE is the ratio of the total CO2 emissions caused by total data centre energy consumption to the energy consumption of IT equipment. It was developed by TGG to measure data centre sustainability in terms of carbon emissions. Liquid immersion cooling reduces energy consumption by eliminating moving components, such as fans.

Cooling Capacity Factor
CCF is the ratio of total running rated cooling capacity to 110% of the critical load. It was created by Upsite Technologies to benchmark rated cooling capacity versus utilisation; to determine and understand current inefficiencies in your cooling infrastructure, as well as identifying opportunities for improvement.

Energy Reuse Effectiveness/Energy Reuse Factor
Energy reuse effectiveness (ERE) measures the overall energy efficiency if energy is being reused outside the data centre. Energy reuse factor (ERF) measures how much energy in the data centre is reused elsewhere. These metrics, which are two sides of the same coin, were developed by the TGG to capture use of excess energy, which cannot properly be measured by PUE. Including use of waste heat in a PUE measurement could return a PUE of less than one, making it difficult to effectively compare metrics across data centres.
This is a transformational moment for data centre design, as efficiency and sustainability grow in importance. Properly measuring data centre efficiency, and doing so in a way to enable an effective comparison of data centre cooling technologies, is a complex problem. PUE was a foundational metric which enabled data centres to build awareness of their own energy usage. But it is a blunt tool and one that has outlived its usefulness – it is inclusive of the entire data centre and cannot allow fair and direct comparison of the contributions of individual elements of the data centre infrastructure.

Other trending metrics

Water usage effectiveness (WUE)
This measures how efficiently water is being used in the data centre. This is important for more than just sustainability; water availability determines if, and how, IT gear can be cooled in harsh environments (such as an arid region).

Data Centre Performance Per Energy (DPPE)
This measures the energy efficiency of the entire data centre, including both IT equipment and infrastructure. It is a combination of four other metrics: Data Centre Infrastructure Efficiency (DCiE), Green Energy Coefficient (GEC), IT Equipment Energy (ITEE), and IT Equipment Utilisation (ITEU).
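All of the ratios above reduce to simple divisions once facility-level meter readings are available. A minimal sketch of three of them in Python (the function names and sample figures are illustrative, not drawn from TGG tooling):

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy."""
    return total_facility_kwh / it_kwh

def cue(total_co2_kg: float, it_kwh: float) -> float:
    """Carbon Usage Effectiveness: total CO2 emitted per kWh of IT energy."""
    return total_co2_kg / it_kwh

def erf(reused_kwh: float, total_facility_kwh: float) -> float:
    """Energy Reuse Factor: share of facility energy reused elsewhere."""
    return reused_kwh / total_facility_kwh

# Illustrative annual figures for a small liquid-cooled hall
total, it, reused, co2 = 5_000_000, 4_800_000, 500_000, 900_000
print(f"PUE = {pue(total, it):.2f}")                # ~1.04 for this example
print(f"CUE = {cue(co2, it):.3f} kgCO2/kWh")
print(f"ERF = {erf(reused, total):.2f}")
```

Note how a facility reusing waste heat can push effective PUE below one, which is exactly why ERE/ERF were split out as separate metrics rather than folded into PUE.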
The only way is UPS
Dave Sterlace, Head of Technology at ABB Global Data Centre Solutions, looks at how advancements in UPS technology and applications can help mitigate energy usage and maximise efficiencies in data centres to help the sector grow in a green way.
According to the latest statistics from the International Energy Agency, data centres and data transmission networks each account for around 1% of the world’s electricity demand – that equates to an estimated 200 terawatt hours (TWh) of power every year. In fact, despite more servers being added annually, data centres have managed to keep their collective power demand consistently around 2% since 2005, but how long this can be sustained remains to be seen.

In 2020, the world generated 40 ZB of data and this is estimated to grow to 180 ZB by 2025. In the face of such exponential growth, operators need to find new and innovative ways to expand their facilities without raising carbon emissions or taking large amounts of energy out of the grid. One way to do this is through diligent UPS specification and operation. While a UPS provides essential protection against power failures, voltage regulation, power factor correction and harmonics, it must be clean and energy efficient to help data centre operators manage their daily energy use and emissions – and even more so as they rush to add extra capacity.
Both LV and MV UPS technology is advancing all the time, and we are now seeing new models hitting the market which increase energy efficiency significantly Choosing cleaner, greener technology With modern technology, there are now better options for power protection in medium voltage (MV) applications (compared to rotary UPS). New MV UPS solutions provide a valid alternative for low voltage (LV) UPS in applications where centralised critical power systems are preferred. This was not the case in the past, where the benefits of modern static UPS were superior to MV-side alternatives. Now, data centre builders can choose the system that best meets their needs. As data centres continue to grow in terms of size, configuring UPS protection at the MV level offers an energy-efficient and environmentally friendly alternative and it will gain more and more popularity. Both LV and MV UPS technology is advancing all the time, and we are now seeing new models hitting the market which increase energy efficiency significantly – this year, we’ve seen an MV UPS solution come to market that offers 98% efficiency, which would have been unheard of a few years ago. One reason that higher efficiencies are possible is that the design of UPS systems is moving away from traditional diesel rotary systems and adopting more innovative static converter architecture. When comparing the two types of UPS, a static converter-based system can deliver substantial energy savings when compared to a rotary system during its lifespan. Another consideration should be your UPS’ back-up power systems. The most common back-ups are diesel and slow-paced gas generators – which both add to a data centre’s carbon emissions – and turbines. In the future, back-up power could be provided by alternative, low or zero carbon energy sources such as hydrogen. 
Although we are a little way off the infrastructure to make that a reality, it is worth considering specifying a UPS which can be adapted to work with alternative energy sources in the future. Furthermore, LV and MV static UPSs can protect the load against grid events, providing clean and reliable power in accordance with IEC 62040-3 Class 1 and ITIC requirements.

Giving back to the grid

Data centres have large static demands, which can put pressure on national power grid operations. But with some advanced UPS systems,
data centres can be used to give back to the grid, rather than just taking power out of it. As electricity grids transform to use more renewable energy resources, they need to find new ways to balance and stabilise the system to meet peaks and troughs in demand. These balancing services used to be provided by coal- and gas-fired power stations, but electricity system operators can’t switch wind and solar generation on and off when they need to – so other ways to balance the grid are needed.

UPS can be fitted with Frequency Regulation Functionality (FRF), which allows the power grid to tap into a data centre’s unused reserves of power to respond to varying load demands and to maintain the grid’s frequency levels. Severe frequency deviations can cause power blackouts, so by helping the grid operator, a data centre also helps resolve issues which would affect its own operations.

The way it works is simple. Normally, energy flows from the grid to the load and the battery, to keep it charged. If there is a grid issue (for example, if the grid is under pressure to deliver more electricity during a peak period), energy for the load is taken from the data centre’s UPS battery. This support can also go the other way: if there is an increase in grid frequency and the grid operator needs to offload some power, the excess can be absorbed by the UPS battery banks.

As well as helping to balance the local power grid, adding FRF to UPS installations helps balance the books. Grid operators offer financial compensation for frequency balancing services, so data centres can effectively use their UPS equipment as a revenue-generating asset, turning previously depreciating capital into a new revenue stream, while meeting corporate sustainability targets.
The future is green

With careful specification and diligent operation, UPS can deliver reliable, clean power for mission critical infrastructure, giving data centres a way of reducing their energy usage and carbon emissions without compromising on the integrity of their operations. By supporting the shift to renewable energy generation and helping to reduce CO2 emissions on a national level, incorporating FRF can have a positive environmental impact. It introduces data centres to an effortless way to contribute to a greener world while putting the energy supplier and consumer ahead of the game when it comes to new emissions standards.
Q1 2022 www.datacentrereview.com 17
Getting the best from your batteries Chris Cutler from Riello UPS takes a deep dive into UPS batteries and outlines the steps data centre operators should take to reduce the risk of premature failure.
Data centre operators across the country depend on uninterruptible power supplies (UPS) to provide the ultimate insurance that if there’s any issue with the mains electricity, their vital servers and other critical equipment will keep on running. But without a well-maintained, quality battery system that provides the all-important emergency power when needed, the UPS itself is practically useless. As many as 80% of UPS failures can be linked to issues with the batteries, while batteries also constitute a large proportion of the total cost of a UPS, representing a considerable investment for operators.
That’s why good upkeep is critical, as letting the battery system fall into a poor state heightens the risk to both your critical load and business continuity.

The basics of UPS batteries

While a growing number of data centres are exploring alternative solutions such as lithium-ion, the majority of UPS systems today use sealed or valve-regulated lead-acid batteries, often shortened to SLA or VRLA. These cells are big and heavy, due to their low energy-to-weight and energy-to-volume ratios. But they do deliver high surge currents, making them perfect for providing instant backup during a temporary mains failure or to power up a generator. Most of the VRLA batteries used in data centres will have a 10-year design life, which suggests that the battery will last for 10 years. But there’s one important caveat: that assumption is based on ‘perfect’ operating conditions. Of course, there’s no such thing as a perfect operating environment. There are simply too many factors that can influence a battery’s lifespan, and over time, performance will inevitably reduce. Guidelines issued by EUROBAT, the Association of European Automotive and Industrial Battery Manufacturers, state that a battery reaches its
end of service life when capacity drops below 80% of its original value. Even before allowing for any external influences that affect lifespan, the operational capacity of a 10-year design life battery will drop below 100% at year six. Capacity will continue to fall to around 80% over the remaining four years, while the autonomy of the UPS will reduce too. That’s why it’s become commonplace to proactively replace 10-year design batteries in service year seven or eight. It takes into account all the factors that reduce lifespan, while leaving enough of a safety margin to mitigate any drop-off in performance.

What reduces a battery’s lifespan?

Several factors impact the length of UPS battery lifespan, such as the frequency and depth of discharge. Every time you discharge a battery, it slightly reduces its capacity, although a partial discharge does have less impact than fully draining the cells. Operating voltage is another issue. Overcharging above the manufacturer’s recommended guidelines produces excessive hydrogen and oxygen, which will dry the batteries out over time. Conversely, undercharging can lead to sulphate crystals forming on the plates and within the electrolyte. Known as sulphation, this condition is common with stop-start battery applications like a UPS. It increases internal resistance and results in a longer charging cycle. There’s also ripple current, where the AC ripple generated by the UPS’ rectifier, charger, or inverter can cause overheating, which speeds up the rate of deterioration. Poor alignment of the separators and plates during the initial design and installation process can cause something called top mossing, where a crystalline moss forms and can cause the cell to start self-discharging. But by far the most common cause of premature battery failure is high ambient temperature – the higher the temperature, the quicker the chemical reaction, which increases water loss and corrosion.
A VRLA battery has a rated capacity based on an optimum operating temperature of 20-25°C. For every 10°C temperature increase, it’s generally accepted that the service life will halve.

Preventing premature battery failure

The good news is that data centre operators can take several steps to maximise the service life of their UPS batteries. It all boils down to proactive maintenance, monitoring, and testing. At a bare minimum, each battery should be manually checked at least once a year. Such basic physical checks inspect the terminals for any signs of corrosion, leaks, cracks, or swelling. It’s also a chance to tighten any inter-cell connections. It should be noted that most of the modern UPS systems deployed in data centres will incorporate their own sophisticated battery care systems, which monitor and record measurements like the number of cycles or float voltages. They will also automatically run regular tests, protect against slow discharges and ripple currents, and provide a range of recharge options. Naturally, data centres should invest in more advanced forms of battery testing too. One option is impedance testing, a non-intrusive way to build up a history of each individual cell. It involves applying an AC current via probes attached to the terminals, and if you carry it out annually, it’s relatively easy to compare the results and spot any worrying signs of deterioration.
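The rules of thumb above – end of service life at 80% of rated capacity, and service life roughly halving for every 10°C above the optimum – can be turned into a quick estimate. This is a sketch of the widely used guidance, not a manufacturer’s formula:

```python
def estimated_service_life_years(design_life: float, ambient_c: float,
                                 optimum_c: float = 25.0) -> float:
    """Rule-of-thumb VRLA service life: halves for every 10 deg C above the
    optimum operating temperature. No extra credit is taken for running cooler."""
    excess = max(0.0, ambient_c - optimum_c)
    return design_life / (2 ** (excess / 10.0))

# A 10-year design-life VRLA block:
print(estimated_service_life_years(10, 25))  # 10.0 years at the optimum
print(estimated_service_life_years(10, 35))  # 5.0 years at +10 deg C
```

Run against a battery room that sits at 35°C, the estimate makes the case for air conditioning vividly: half the design life, and the year-seven-or-eight replacement point moves forward accordingly.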
Impedance testing doesn’t require the batteries to be taken offline – but it only provides a broad indication of their condition, so isn’t a perfect solution by any stretch. An alternative is discharge testing, which is often referred to as load bank testing. This interrogates your batteries at normal and peak loads, showing which cells hold their charge and which are nearing the end of their service life. However, load bank testing does take the batteries out of service. While typically this tends to be for just a few hours, it can stretch to several days in the worst-case scenario. If you’re looking for a middle way between these two approaches, there’s partial discharge testing. The batteries are discharged by 80%, so you still have 20% capacity to call upon if the mains supply is interrupted.

Of course, another way of maximising UPS battery lifespan is ensuring the batteries operate within their optimum temperature range of 20-25°C, either with sufficient air conditioning in the IT room or by housing them in their own standalone battery room. You should install UPS batteries in a well-ventilated area free of dust or moisture, ideally away from direct sunlight. It’s advised to leave at least a 10mm gap between battery blocks to ensure adequate ventilation. Doing so takes into account that the battery casing will expand slightly as and when it gets warmer. This gap also allows heat to dissipate, which reduces the risk of overheating and the subsequent likelihood of thermal runaway. Finally, it’s worth noting that even unused UPS batteries will self-discharge small amounts of energy. So if you plan on storing batteries for a prolonged period before using them, keep them at a maximum temperature of 10°C and top up their charge every few months.

Proper preparation is key

With so many variables that can influence the rate of battery deterioration, it’s clear data centre operators need to have robust monitoring, maintenance, and testing regimes in place.
It also highlights the importance of following industry-accepted best practice, such as proactively replacing 10-year design life batteries in year seven or eight. Failure to do so is an unacceptable gamble, particularly in a mission-critical environment like a data centre, where even a short period of unplanned downtime comes at a huge cost.
Meeting the climate change challenge VYCON’s John Jeter explores how data centres can reduce their carbon footprint with the use of flywheel technology.
As we reflect on the many goals of the recent UN Climate Change Conference (COP26), two key principles were of special note:
• Encourage more sustainable behaviour.
• Promote the use of responsible sources and responsible use of resources throughout the supply chain.

These values are especially appropriate for the data centre industry. Never in our history have we experienced such rapid growth in digital infrastructure. The global Covid-19 pandemic has certainly accelerated technologies to meet customer demand for all things digital. Large companies are building and expanding their data centre footprints to meet the world’s insatiable digital appetite. To illustrate this point, in April of 2021, Microsoft announced it would build 50 to 100 new data centres annually across the globe. And IDC estimates that by 2025, there will be 175 zettabytes of data created each year, resulting in data centres consuming up to 5% of the world’s electricity.
This growth also means that 24/7 always-on operation has never been as crucial as it is now. Protecting data centre operations against inevitable power disturbances and outages is a critical piece of the data centre infrastructure. The US Department of Energy (DoE) has estimated that, as of 2018, outages cost the US economy $150 billion a year.

UPSs and energy storage

While there has been substantial adoption of alternative energy sources such as solar, wind and geothermal to power large data centres, more can be done to reduce an operation’s carbon footprint. Manufacturers of three-phase uninterruptible power supply systems (UPSs) for protecting data centres have adopted newer technology to improve efficiency and scalability. However, the energy storage component of the UPS is still dependent on strings of batteries – some of which, like lead-acid batteries, are extremely toxic and hostile to the planet. Moreover, lead-acid batteries, which are still the preferred energy storage choice due to cost, are unreliable, require expensive cooling, frequent maintenance and replacement, and need a large amount of real estate. And don’t forget environmental mitigation and spill containment. Relatively new to the scene is the integration of lithium-ion (Li-ion) batteries with UPSs. Li-ion batteries have distinct advantages over lead-acid types, like the ability to operate in wider temperature ranges with thousands of charge-discharge cycles over their lifespan. However, according to the National Fire Protection Association (NFPA) 855 standard in the US, Li-ion batteries must include an approved battery management system with thermal runaway management. In addition, installations must maintain three feet of clearance all around the battery cabinets to ensure that fire will not spread from cabinet to cabinet, unless they are UL 9540A tested and the Authority Having Jurisdiction (AHJ) waives cabinet spacing.
Clean energy flywheels to the rescue

A more sustainable approach to energy storage involves incorporating 40kW to megawatt-sized flywheels into power protection configurations instead of heavy, toxic batteries with a large carbon footprint. Configuring a 160kVA UPS with a flywheel clean energy storage system instead of lead-acid batteries can save six metric tons of CO2 over a 20-year life with four battery replacements. Besides the great environmental benefits, there are also substantial cost savings in adopting flywheel modules with UPSs. Since the flywheel does not use a chemical reaction to produce power, it can be deployed in operating temperature environments of up to 40°C. The ability to operate in a wide temperature range saves valuable computer room floor space, since the UPS doesn’t have to be in a precisely temperature-controlled room, thus reducing the cost of HVAC to cool the system. And the compact footprint of the flywheel is another benefit, as one flywheel module only takes up 30in depth by 30in width of floor space.

Flywheel technology

The flywheel operates as a mechanical battery: it holds kinetic energy in the form of a rotating mass, then converts this energy to electric power within the flywheel system. The system comprises a high-speed motor generator, active magnetic bearings that levitate and sustain the rotor during operation, and an on-board control system that provides vital information on system performance. The monitoring system lets users know the exact state of the system in real time – unlike batteries, where there are always questions about the state of health, and where even testing them under load degrades their useful life. Compatible with all major global brands of three-phase UPSs, the flywheel interfaces with the DC bus of the UPS, just like a bank of batteries, receiving charging current from the UPS and providing DC current to the UPS inverter during discharge. When there is a power outage, the flywheel will provide 14 to 45 seconds of backup time to transfer to the on-site generator. This allows plenty of time for the generator to start up, as an average backup generator requires less than 10 seconds to come online.
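The ‘mechanical battery’ idea can be illustrated with the basic kinetic-energy relation E = ½Iω². The inertia, speeds, load and efficiency below are made-up illustrative numbers, not VYCON specifications:

```python
import math

# Illustrative sketch: usable energy between full speed and the minimum
# speed at which the flywheel can still support the DC bus. All figures
# are assumptions for demonstration only.

def usable_energy_j(inertia_kg_m2: float, rpm_full: float, rpm_min: float) -> float:
    """Kinetic energy released as the rotor slows: E = 0.5 * I * (w_full^2 - w_min^2)."""
    w_full = rpm_full * 2 * math.pi / 60   # rad/s
    w_min = rpm_min * 2 * math.pi / 60
    return 0.5 * inertia_kg_m2 * (w_full ** 2 - w_min ** 2)

def ride_through_s(energy_j: float, load_kw: float, efficiency: float = 0.95) -> float:
    """Seconds of backup the stored energy provides at a constant load."""
    return energy_j * efficiency / (load_kw * 1000)

e = usable_energy_j(inertia_kg_m2=1.0, rpm_full=36000, rpm_min=18000)
print(f"Ride-through at 160 kW: {ride_through_s(e, 160):.1f} s")
```

With these assumed numbers the ride-through lands around half a minute – comfortably inside the 14 to 45 second window quoted above, and well beyond the sub-10-second generator start it needs to bridge.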
For installations where the flywheel is used in conjunction with batteries (either with or without a genset), the flywheel systems will engage first during transients and short power disruptions, preserving the batteries for use in longer-term outages and minimising discharge cycles to prolong overall battery life.

Bottom line

Flywheel energy storage replaces the weak links associated with battery-based backup with a reliable, energy-efficient, instantaneous energy source. Because of maintenance and replacement costs, cooling and space requirements, the traditional operating cost of batteries over a 15-year period is three to four times more than that of a flywheel energy storage solution. Users are realising $100,000 to $200,000 in savings over using a five-minute lead-acid battery configuration. The flywheel solution is the most reliable and cost-effective bridge to the engine genset during a utility line failure. As the demands of processing data rapidly increase and become more complex, data centre managers, engineers, and consultants continually assess power solutions that can improve efficiencies while enhancing energy reliability. While system availability is always the first requirement, being environmentally friendly and lowering carbon footprint are now must-haves. By augmenting battery strings or replacing them with flywheels, managers can take one more step in reducing their carbon footprint and lowering the cost of ownership.
Supporting the growth of the UK’s leading independent IT provider PPSPower details how it provided cloud, IT and telecommunications service supplier Daisy Group with reliable and cost-effective back-up power solutions.
PPSPower (PPS) is a respected national provider of back-up generator and UPS (uninterruptible power supply) installation, maintenance and repair solutions. Companies need to be able to put their complete trust in a back-up power provider, as a drop in power can be catastrophic in any industry, not least the IT and communications sector. Daisy Group is a national provider of cloud, IT and telecommunications services, including internet hosting, broadband internet connections and VoIP. They first approached PPS in 2021 with the aim of finding a back-up power provider that could be trusted in all situations. PPSPower took a partnership approach to working that opened a two-way dialogue with Daisy Corporate Services (DCS), with complete visibility of all aspects of the project. This led to a successful conclusion, with all aspects of work meeting or exceeding DCS expectations. As a result, PPS has been awarded a large national contract covering multiple back-up power requirements at sites including (but not limited to) Birstall (West Yorkshire), Romford (East London), Reading (Berkshire) and Wapping (Middlesex).
PPS and Daisy Group – working together in 2022

Most of the sites within DCS have multiple generators and require a team of either one or
two engineers from PPS, working over several days to complete a variety of work:

Daisy Birstall site

Part one of this project will see two engineers remove an old radiator and fit a new unit. Before this can be done, some of the canopy containing the generator must be dismantled.
Daisy Reading site

Two engineers will attend the site over 10 days to install two 4000A ACB (air circuit breaker) load bank connection panels. Both ACBs will be delivered to site, off-loaded and positioned adjacent to the generator output breakers. Prior to installation, engineers from PPS will manufacture a two-piece aluminium alternator cover to facilitate the cables. Following installation, a full functionality test of the system will be performed. The load bank ACBs will be electrically interlocked to the phase failure system for each dedicated generator, so that in the event of a power outage during a load bank test, the load bank ACB will disconnect the load bank and transfer availability of the generator back to the building. Derek Parkin, from DCS, commented on the partnership with PPS, “As our service is built around 24/7 connectivity, it was imperative that we found a provider of back-up power to ensure we always have a reliable supply for all scenarios.
PPS will also remove existing generator controllers, which have been discontinued, and replace them with up-to-date units. Work will include the replacement of necessary components so that the system operates as per the current installation. During commissioning, the controllers can be programmed differently to provide a better visual indication on the GUI of any system changes, which would improve the overall design.

Daisy Romford site

PPS will be removing existing controllers and replacing them with up-to-date units. Due to the number of outputs fitted to the current system, a further output expansion unit is to be provided. As with the works at the Birstall site, during commissioning, the controllers can be programmed differently to provide a better visual indication on the GUI of any system changes, which would improve the overall design. Work is due to be completed in March.
“PPS goes far beyond just delivering to specification. Their customer service is second to none and they are always available to answer our calls and provide the expert advice we need. I am sure we will work together long into the future.” Stephen Peal, Managing Director of PPS, echoed Derek’s sentiment, “Our aim is always to create a partnership with our clients and become an extension of their business. That is what has happened with Daisy Group, and I’m delighted with the trust they place in us. “We strive to make recommendations that we feel will benefit the customer, whether that is through cost-savings or improved output. Ultimately, our work makes our clients’ businesses operate more efficiently and cost-effectively.”
The 5G paradox David Keegan, CEO of DataQube, explains how the IoT and edge computing are driving demand for 5G-ready data centres in the advent of all things ‘smart’ and the impact this is having on the classic data centre business model.
The Internet of Things and the ‘smart’ phenomenon are redefining the data centre landscape. Machines are churning out data in such high volumes it’s almost inconceivable, and that data needs to be dealt with at the source if the embedded tech and interconnected devices and applications reliant on the gathered information are to be workable in the real world. To give this some perspective, a single driverless car (according to Intel) could potentially generate up to 5TB every hour. Factor in the cameras and sensors needed to make such cars road safe, not to mention the associated roadside infrastructure and comms networks needed for operability and safety purposes, and this figure could easily be an underestimate. This intensified data generation is then combined with 5G, poised to be the catalyst for all things smart, not to mention innovations in AI and machine learning. This fifth-generation network, because of its ultra-high-speed, ultra-low-latency capabilities, is driving change across the industry, with data centre infrastructures in their existing format needing a total rethink in terms of their connectivity capabilities and physical locations if they are to keep pace with this data explosion.

The race is on to move to the edge

The drive for data centres to re-evaluate their data handling capabilities, which has been accelerating for some time, has been given a turbocharge by Covid-19. Such is the prediction for change that Gartner estimates 75% of enterprise-generated data will be created and processed outside centralised facilities by 2025. The global market for edge data centres is also expected to more than triple, from $4 billion in 2017 to $13.5 billion by 2024, thanks to the potential of these smaller, locally located facilities.
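Intel’s 5TB-per-hour figure for a single driverless car, quoted earlier, gives a feel for the sustained bandwidth involved. This is a back-of-envelope calculation, assuming decimal terabytes:

```python
# Back-of-envelope: sustained data rate of one driverless car producing
# 5 TB/hour (decimal units assumed: 1 TB = 1e12 bytes, 8 bits per byte).

tb_per_hour = 5
bits_per_hour = tb_per_hour * 1e12 * 8   # bits generated each hour
gbps = bits_per_hour / 3600 / 1e9        # sustained gigabits per second

print(f"{gbps:.1f} Gbps sustained")      # ~11.1 Gbps from a single vehicle
```

On that arithmetic, one car alone generates data at a rate above even 5G’s headline 10 Gbps peak, which is exactly why processing at the source, rather than backhauling everything, becomes unavoidable.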
In tandem, investment in 5G infrastructure is also on the up, with the GSMA predicting that operators will allocate more than 80% of their CAPEX towards building 5G networks within the same timeframe. 5G and edge computing are clearly transforming the way we harness and use data, but considering they fulfil very different requirements in the overall ‘data processing ecosystem’, the rapid growth of both poses the question – what would happen if they were to act as one? Firstly, this ‘marriage’ would deliver an augmented end-user experience and enhanced interconnectivity between devices. Secondly, and far more significantly, combining 5G and edge computing delivers unprecedented accessibility to large pools of data by drastically reducing latency and streamlining service delivery at the edge. Both these capabilities are instrumental to the widescale deployment of the IoT and automation. The theory is great; the reality is somewhat different, because edge data centre facilities up to the job are in short supply and the rollout of 5G is not happening as quickly as everyone might have hoped. 5G may well promise to be the holy grail for all things IoT, but edge computing and the IoT are already in widespread use. They have been for some time, and are working perfectly well on existing 4G networks, so why all the hype?

5G has been built for machines

To fully understand the true impact 5G is set to have on digitisation, you need to understand the key difference between this next-generation network and its predecessors. Until now, all mobile networks have been designed to meet the needs of people. 5G has been designed with machines in mind. The latency rate for existing 4G networks is 200 milliseconds, not far off the 250 milliseconds it takes for humans to react to visual stimuli.
The 5G latency rate is significantly lower, at just 1 millisecond. 5G also
promises to reach delivery speeds of up to 10 Gbps. These differentiators may be of little consequence to commercial telecoms services, but this high-speed, low-latency performance will allow machines to achieve near-seamless communication.

The technology/usability paradox

While 5G delivers extremely high data rates, the downside is that this new network is shaking up the telco and data centre industries in their entirety. 5G frequency bands have much shorter propagation ranges than their 4G counterparts. This in turn requires significantly more telco equipment above ground to overcome line-of-sight propagation challenges, and fibre cabling below ground to enable seamless connectivity to the cloud. Before aspirations of an interconnected world can be achieved, existing telco sites need to incorporate cloud computing into their existing infrastructures to facilitate interconnectivity between IoT devices, other edge data centres and centralised facilities, as well as to manage the associated backhaul. And the CAPEX needed to make this happen is proving to be a major stumbling block for the mobile network operators. Their 3G and 4G spectrum investments did not reap the rewards they expected, and as such, justifying the upfront sums needed to the different stakeholders is challenging. The funds needed are so significant that the GSMA does not predict 5G to be universal until at least 2028.

Existing edge data centre setups are neither viable nor affordable

These connectivity barriers are a major quandary for regular data centres. Not only must they find a viable means of decentralising their data processing without compromising performance, security, or data fidelity, they also have the interconnectivity aspect to contend with.
The only options currently available are containerised data centres, purpose-built data centres or micro data centres, all of which involve deployment times of 18 months plus and require huge upfront investment. More significantly, their install sites are severely limited, because these types of facilities are totally unsuitable for tall buildings, underground locations, the side of motorways, railway sidings, etc. An alternative approach to data processing and edge computing is needed, and unless the major players can find a cost-effective and viable means of delivering HPC at the edge within acceptable timeframes, government aspirations for the ‘smart’ revolution can be little more than a pipe dream.
A multi-cloud approach Gary Bennion, Managing Director at CloudM, outlines the future of the cloud-based digital workplace, and the benefits of a multi-vendor approach.
Throughout history, times of change and disruption have brought with them innovation. Take the pre-war industrial advancements that fuelled the First Connective Revolution: telegraph lines, railways, electricity, and telephones. Collectively, they made the world a little smaller. The Second Connective Revolution came in the 50s, with mainframe computing and the internet. International flights were also more affordable and comfortable. You could now speak to, or even visit, most of the world. Now, I believe we’re on the cusp of a Third Connective Revolution. Some of it will be health-focused, to curb the spread or return of Covid-19, and to generally adopt a more sanitised lifestyle – but there are also environmental factors that will be considered as businesses become more climate conscious and strive to reduce their carbon footprint. Something all of us need to be thinking and acting on. The final change, and the one I want to focus on, however, is around people and technology. With more and more businesses embracing digital collaboration, companies can finally acknowledge that if the talent is right for the job, the location doesn’t matter; it looks like hybrid working is here to stay. To bring true hybrid working to reality, adopting a cloud-based digital workplace is going to be essential to enable employees to connect with the business, access files, and streamline processes, regardless of where they’re located. Enabling employees to work in this way increases productivity, by eliminating the time-drain of commuting and allowing for more time to actually get on with things – and makes sure you’ve got the right person for the job. Multi-cloud, where businesses are run across a network of two or more cloud vendors, is also coming to the forefront as companies adjust to this new way of working.
Cloud challenges for multinationals The challenges for multinational organisations arise with the geopolitics around employee data sovereignty, and, in turn, cloud usage. This is just one of the occasions when multi-cloud platforms can be particularly useful.
Take the differences in data laws between the East and West, for example. On one side, you have most of Europe and the US, which will happily allow citizens to store their data with Google, Microsoft, or anywhere else they decide. However, some countries, like China, Russia, or Nigeria, have specific requirements that state that the data must be stored on servers within the country itself, creating a huge headache for corporations to overcome as they expand into new territories or take on new employees. Similarly, some territories just prefer, or trust, one cloud over the other, so the Japanese head office might be solely on Google, whereas the German branches are on Microsoft. This might seem inconvenient, as in all honesty, there are benefits to being on the same platform suite as your colleagues, including improved communication and collaboration, but there are also many benefits to a multi-cloud solution. Productivity suites, such as Google Workspace and Microsoft 365, have recognised that companies and even individuals use certain features within their products to suit their needs, resulting in well-integrated platforms. Fewer restrictions improve collaboration and productivity. Therefore, more and more businesses are seeing the benefits of the multi-cloud approach.
Mergers and acquisitions The past couple of years have been hard for some businesses, with some sectors hit much harder than others. While this might suggest that businesses would be more frugal with their spending, it also means there are opportunities to acquire or merge with other companies in their sector, strengthening their position in the long run. When mergers and acquisitions take place, there can be a huge push to get everyone using the same processes and standards. One of these steps can be moving all users to the same platform. Migrating to one platform is often the long-term goal following a merger, as it will ultimately increase efficiency for companies and their IT departments. A successful migration will enable employees to benefit from increased collaboration while safely storing data in one place; internal communication, file sharing, shared documents and even changing email signatures all become much simpler for employees. Data migration has a 99.9% success rate, so businesses can trust the
fact that their sensitive data will be migrated safely and securely. Further benefits include keeping control of licence costs, automatic onboarding and offboarding, and overall better control over a growing workforce. However, in the short term, a multi-cloud solution is not necessarily a bad thing. Allowing employees to remain on their current platform during times of significant change can help to streamline the merging process, as people are not required to get to grips with a new platform while also potentially onboarding new business practices. It’s wise for companies to carefully consider how they will eventually manage the migration process, and to work with a reputable provider, rather than rushing into anything. Interoperability between Google and Microsoft You may think that choosing between Google or Microsoft is an either/or choice because they just aren’t compatible. We have all had issues converting a Word Doc to a Google Doc, or another format. But that simply is not true anymore. Both Google and Microsoft have been working hard on interoperability between the two suites. For example, you can open a Microsoft Word or Excel file in its Google counterpart and work on it seamlessly. Microsoft files can also be saved in Google Drive. There are limitations of course, but the fact that both platforms have started working on better interoperability can only be good news for users.
The future of the digital workplace The tech sector has been leading the way in remote working for some time, but even we are going to have to up our game. Just like the resources of the world were brought to bear on the vaccine, so will they be harnessed in making this new globalisation a reality. Issues with long-distance workers will be ironed out, new processes will be developed, software will spring up, and leaders will emerge.
Just as world wars fuelled the need for change and revolution, so has the Covid-19 pandemic. Innovation in the tech sector has accelerated as we’ve been forced to adjust to a new way of living and working. Really, when it comes to remote working, adopting a multi-vendor solution is just the tip of the iceberg, albeit a great first step when it comes to creating a truly global working environment.
What’s in store for cloud computing? Amir Hashmi, Founder and CEO of zsah, discusses some current trends in the cloud computing space and the ones we will see continue.
After a difficult few years for most businesses, cloud computing is still a priority. In fact, it is one of the definitive transformational technologies of our times, with the ability to improve business continuity, efficiency, and scalability. This was clearly demonstrated during the pandemic, as cloud technology allowed the world’s office workforce to switch to remote and hybrid working with relative ease. So, what is next for the technology – and what trends can we expect to see in 2022? Multi-cloud strategies Despite many of the big tech providers aiming to offer a single-solution, all-in-one service, we expect to see a move towards a diversified cloud management strategy. Many businesses don’t want to rely on one ‘cloud’, as this means depending on one service, one company, and potentially, one data centre – which multiplies risk to business continuity.
The Flexera 2020 State of the Cloud Report shows that 93% of companies have a multi-cloud model, while 87% have a hybrid cloud approach. We expect to see these numbers continue to grow. AI and the cloud According to Gartner’s Top Strategic Technology Trends for 2021, businesses need a strong AI strategy to ensure business continuity and future potential. Gartner’s position is that most AI projects would fail to move past the prototype or proof-of-concept stage without AI engineering. This shows the intrinsically mutual and symbiotic relationship between cloud tech and AI.
AI often enables the cloud to be used effectively and efficiently, and vice versa. For example, the cloud allows low-budget or lesser-skilled people to access the infinitely complex world of artificial intelligence and machine learning. AI increases the efficiency of sorting data gathered from the cloud. Better AI engineering will increase the efficiency of the cloud, and more cloud will mean greater access to AI. AI doesn’t just complement cloud software; it also improves the stability and efficiency of the physical data centres that operate cloud services. These fragile environments need complex and intelligent management systems to keep everything running smoothly. Cloud-based virtual desktops Hybrid working has shown how necessary ‘desktop-as-a-service’ and virtual desktops are – allowing people to access safe and capable machines and programs from anywhere. The benefits of hybrid working, although fiercely discussed, are clear and well-covered, and cloud technology is vital to improving hybrid for everyone. Distributed cloud According to Gartner, public cloud companies are transitioning to location-independent distributed cloud services. In this setup, the public cloud provider maintains, operates, and develops the services but physically provides them at the point of need. It eliminates latency issues and satisfies privacy regulations such as the GDPR that require data storage in a specific geographic location. There are several iterations of distributed cloud, including: on premises, Internet of Things (IoT) edge cloud, metro-area community cloud, 5G mobile edge cloud, and global network edge cloud. These will all be used more frequently in the coming years and decades, and they undoubtedly increase the efficiency of cloud technology.
Cloud computing meets edge computing According to Frost and Sullivan projections, about 90% of industrial firms will use edge computing, data analysis and solution development
at the data generation site by the end of 2021. Edge computing makes businesses more efficient by reducing latency, cost and security risks, and 5G will be a massive boost to the potential of cloud and edge technologies. Many major technology providers – such as HP, Nvidia, Microsoft, and IBM – have made significant investments in combining edge, 5G, and AI, and public cloud providers have started shifting workloads to intelligent-edge platforms. Skill shortages Although right now it seems less of a pressing issue than a shortage of natural gas or delivery drivers, cloud computing, as with all fourth-generation technologies, is reliant on a highly skilled and undersized labour market. According to one survey, 86% of IT leaders expected cloud projects to slow down in 2020 due to a shortage of cloud talent. Gartner predicts that this trend will extend into 2022, with insufficient cloud skills delaying cloud migrations by as much as two years, if not more. The net result is that more businesses will fall short of their cloud adoption objectives. Therefore, access to cloud-native expertise will be a critical determinant of cloud success. Gartner suggests that companies partner with managed service providers with a proven track record in cloud enablement and management. Naturally, we agree wholeheartedly. The rise of cloud gaming Although these services and technologies are in their infancy, and much relies on further advances in cloud, edge, and 5G, we expect to see both demand and supply increase massively in this sector. Cloud gaming can be compared to Netflix and other streaming services, but for games. The benefits are similar to the move away from DVDs: fewer discs, lower cost, less reliance on local storage, and greater access. It also has the added benefit of helping combat piracy. That may be why Mordor Intelligence estimates that the cloud gaming market will grow to $2.7 billion by 2026, at a compound annual growth rate (CAGR) of 15.3% from 2021 to 2026.
More regulatory control We would be missing a trick in writing about cloud computing trends if we failed to mention the inevitable: regulatory control. The Trump Presidency put a tremendous strain on the relationship between Big Tech and the US Government, and there are similar discords in China. Regulating tech may be one of the few things most politicians can agree on – and cloud technology won’t escape the trend. Data governance and compliance will become critical areas of concern for CIOs and CISOs, and to respond appropriately, cloud hosting companies in the UK and elsewhere will need to hire skilled data governance and compliance experts to ensure their firms remain on the right side of the law.
Wave of the future 2022 is the year of multi-cloud challengers, says David Friend, CEO and Co-founder at Wasabi.
According to Flexera’s 2021 State of the Cloud Report, 92% of enterprises have adopted a multi-cloud strategy. This should come as little surprise, given that both multi- and hybrid-cloud allow organisations to benefit from best-of-breed in respective cloud services. For example, a business might host its front-end web services with one public cloud provider, its email exchange with another vendor, and its document backup services with a third. Multi-cloud also helps teams avoid the risk of lock-in to one particular vendor, and lets them dynamically run whatever workloads, or store whatever data, on the best cloud at a given moment. In principle, this means organisations benefit from best-of-breed services while also keeping operational expenditure minimal – a win-win in anyone’s book. So, in short, multi-cloud improves an organisation’s performance, cost-effectiveness, and resilience to lock-in. But where to next? I think in 2022 and beyond, a big trend we’ll see is that enterprises will use a greater variety of cloud providers in their multi-cloud arrangements – here’s why.
The question of cost The ‘hyperscaler’ public cloud providers, such as Amazon, Google, and Microsoft, currently dominate the offerings available to organisations that want to go multi-cloud. However, the size of the hyperscalers has meant that working with them often means complexity – and often to the detriment of your company’s bottom line. Take storage as an example: most hyperscalers typically offer various storage ‘tiers’, each with their own set of unique pricing and performance characteristics. These tiers are supposed to help companies choose between ‘hot’ storage (frequently accessed data), ‘cool’ storage (infrequently accessed data), and archive storage (rarely accessed data). As you might guess, these tiers also mean different prices – hot storage usually implies your data will be stored on SSDs or high-performance disks, whereas cool and archive tiers are often backed by conventional or shingled (SMR) hard disks. So the price per terabyte of hot data is normally higher than that per terabyte of cool data by default. This model isn’t great for most companies. It’s very rare that a business will know exactly what its hot/cool/archive storage ratio is going to look like for the financial year ahead. Trying to fix a ratio and forcing the IT team to stick with it can make budgeting a nightmare and put an artificial constraint on the organisation, reducing its ability to respond flexibly to whatever issues arise over the course of regular business operations.
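To make the tiering problem concrete, here is a minimal sketch of the two pricing models. All rates and volumes are hypothetical illustrations, not any provider’s actual price list:

```python
# Illustrative cost model: tiered hyperscaler-style pricing (storage
# billed per tier, plus per-GB egress fees) versus a flat per-GB
# challenger model. All prices are made up for illustration only.

def tiered_monthly_cost(hot_gb, cool_gb, archive_gb, egress_gb,
                        hot_rate=0.023, cool_rate=0.010,
                        archive_rate=0.002, egress_rate=0.09):
    """Storage billed per tier, plus a fee on every GB read back out."""
    storage = (hot_gb * hot_rate + cool_gb * cool_rate
               + archive_gb * archive_rate)
    return storage + egress_gb * egress_rate

def flat_monthly_cost(total_gb, rate=0.006):
    """Single per-GB rate, no egress or API-call charges."""
    return total_gb * rate

# 100 TB split across tiers, with 20 TB egressed during the month
tiered = tiered_monthly_cost(hot_gb=30_000, cool_gb=50_000,
                             archive_gb=20_000, egress_gb=20_000)
flat = flat_monthly_cost(total_gb=100_000)
print(f"tiered: ${tiered:,.2f}/month, flat: ${flat:,.2f}/month")
```

Under these assumed rates the egress charge alone exceeds the tiered storage bill, which is exactly the forecasting trap the tiered model creates: the ratio and the egress volume both have to be predicted a year in advance.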
In addition, the hyperscalers often charge beyond the paper costs of their tiers, in the form of fees for data egress or making API calls. This can mean steep price rises for merely trying to request access to data, even if an organisation’s overarching storage requirements don’t change. This creates an additional forecasting nightmare for budgets, and another layer of arbitrary complexity for IT departments. The role of challengers Thankfully there are newer and smaller providers that can help teams escape this problem. In contrast to the hyperscalers, such challenger providers abandon tiers and data egress fees, and simply charge per gigabyte stored per month. This means that rather than worrying about allocating storage for the coming year, a business can turn to a challenger provider to provide its ‘reserve’ capacity for the coming year. Additionally, the challenger can also serve as a place to store data that may straddle lines between hot and cold storage. Storage is a good example, but the underlying principles behind this model are applicable to many parts of the multi-cloud environment. In contrast to the hyperscalers, challenger providers don’t need to offer complex and hyper-segmented products to accommodate the diversity of needs they serve: instead, challengers focus on doing one thing very well. That is, they serve a niche. And such niche-serving is the way of the future for the multi-cloud. Rather than the oligopoly of hyperscalers, most businesses are going to gravitate towards a more varied constellation of vendors and partners to focus on servicing respective niches. There’s no question that the hyperscalers are here to stay, but 2022 is going to see it become clear that they’ll not be the only players – just some of the bigger ones in an increasingly diverse pond.
Restricted area Neil Killick, EMEA Leader of Strategic Business at Milestone Systems, outlines eight steps to strengthen your data centre’s physical security.
We’ve all heard that data is the new oil. It’s the most valuable commodity of the 21st century, and that makes it a lucrative target – more so in the wake of the Covid-19 pandemic, where data became operationally critical to so many organisations. Our mass shift to remote work, socialising online, eSports, eCommerce and more has massively increased the amount of data for businesses to pore over – and for malicious actors to steal and exploit. A time of great growth Furthermore, the rise in online usage has led the data centre sector to experience explosive growth. This will only continue as advances in cloud computing, the blockchain, artificial intelligence (AI) and 5G all require fast and powerful computing. Yet, with this, the number of data compromises is also on the rise, with 2021 on track to be a record-breaking year, up 17% compared to 2020.
The need for robust physical security That’s why data centre leaders are doubling down on their security. But although many are used to spending significant sums on their cybersecurity, physical security has traditionally been overlooked. The most advanced firewall will be little defence if someone gains physical access to a server room. Next-gen video analytics is helping data centre leaders to protect their assets, equipment and sites, with comprehensive solutions that ensure nobody can get close. Indeed, the physical security market for data centres is enjoying rapid growth. It’s expected to grow at a CAGR of over 7.42% during the period 2020 to 2026. In part, this is because of the growth in data centre construction, but also due to increased demand in more complex ways to protect against intrusion, corporate espionage, internal sabotage and natural disasters.
Eight steps to boost physical security Investing in the right technology is evidently key to protecting a data centre. But with so many different solutions available, how can you ensure the systems and processes you choose are the right ones?
1. Understand your current position When you begin thinking about your physical security strategy, it’s worth understanding the current state of affairs. What are your current security system’s strengths and weaknesses? Make a list of all the equipment that needs protection and decide to what level (a high-risk area full of servers with personal data will need different security levels to a communal staff area, for example). Additionally, identify all current employees who have access to high-risk areas. Make sure there’s a process to regularly check this employee list and remove anyone who shouldn’t have access (due to changes in their role, or if they leave).
2. Ensure you have redundant utilities Your data centre needs redundant utilities (like electricity and water) to avoid common-mode failures and downtime. It’s also worth monitoring and controlling the air quality, temperature and humidity in your centre, particularly in sensitive areas around racks and servers, where cooling systems could be hacked and exploited to disrupt services. To go a step further, consider using Internet of Things (IoT) sensors to proactively monitor equipment performance and warn of possible failures or downtime. Although this may sound like a Hollywood mainstay, the risk of sabotage to cooling systems is a very real possibility. A recent demonstration using a simulated data centre focused on dismantling an HVAC (heating, ventilation and air conditioning) system’s pumps, valves and fans. The security experts behind the simulation warned data centre leaders that attackers could use this exploit during times of extreme weather to spike temperatures in server rooms – ultimately helping them gain access to sensitive data or hold a data centre to ransom.
3. Secure your perimeter The best perimeter controls make sure nobody unauthorised can even get close to your data centre. These include:
• Sensors along fencing and boundary lines to alert when they are tampered with or crossed.
• CCTV cameras to monitor boundaries.
• Radar technology to detect possible intrusion at a longer distance.
• Thermal cameras to detect the heat signatures of possible intruders.
• Automatic number plate recognition (ANPR) to identify vehicles approaching a site.
• A video management system (VMS) to consolidate the different inputs and support analytics to issue alerts for potential intrusion.
4. Access control Using access control provides an additional layer of protection, so that if someone gains access to your site, it will prevent them from entering a building. It can work in tandem with CCTV and video analytics to provide multi-factor authentication and further protection. For example, facial recognition can automatically identify authorised personnel and allow them entry.
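As a rough illustration of the environmental monitoring described in step 2, a threshold check over sensor readings might look like the sketch below. The sensor names and alert bands are illustrative assumptions (the temperature band loosely follows ASHRAE’s recommended envelope), not any monitoring product’s defaults:

```python
# Minimal sketch of threshold-based environmental alerting for a
# server room. Thresholds and sensor names are illustrative only.

ALERT_THRESHOLDS = {
    "temperature_c": (18.0, 27.0),  # roughly ASHRAE recommended band
    "humidity_pct": (20.0, 80.0),
}

def check_reading(sensor, metric, value):
    """Return alert messages for any reading outside its allowed band."""
    low, high = ALERT_THRESHOLDS[metric]
    if value < low:
        return [f"{sensor}: {metric}={value} below minimum {low}"]
    if value > high:
        return [f"{sensor}: {metric}={value} above maximum {high}"]
    return []

alerts = []
for sensor, metric, value in [
    ("rack-a1", "temperature_c", 24.5),   # within band
    ("rack-b3", "temperature_c", 31.2),   # cooling fault or sabotage?
    ("room-2", "humidity_pct", 15.0),     # too dry: static risk
]:
    alerts.extend(check_reading(sensor, metric, value))

print(alerts)
```

In practice the same pattern extends to rate-of-change checks, so a rapid temperature climb is flagged before the absolute limit is crossed.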
You may also wish to consider anti-tailgating and anti-pass-back facilities that will ensure only one authorised individual and vehicle passes into a complex during a specific time. Your security team should also use access lists to log all entry and exit, visitor logs
and contractor management that monitors the movement of all third-party personnel throughout the site. Ideally, security teams should be able to pinpoint the locations of contractors and visitors in real-time, either through wearable trackers or advanced video analytics. 5. Internal protections If someone does manage to circumvent your perimeter and access control systems, the next layer of protection is your internal security systems. These work to track an intruder within a building, protect your most high-risk areas, and record all activity if an investigation is required later on. The technology to consider here includes video surveillance, infrared tripwires, mantraps, and other connected IoT devices that can reduce the risk of intrusion, detect emergencies like fire and flood, and preempt equipment failure. To work effectively, video surveillance needs to show security teams everything happening within a building. Full visibility will help them identify what ‘normal’ looks like in day-to-day behaviour and quickly spot when something is amiss. In the event of a security breach, visual identification of an intruder should be easily possible through video and audio feeds. Depending on your needs, you may also wish to invest in control room technology like a smart wall to help operators understand all activities on-site and facilitate quick decision-making. Again, a VMS will prove invaluable in managing all video, audio and sensor data coming in from your internal protection system. 6. Security and control room staff Once your technology is in place, you need to consider your staffing requirements. As a minimum, you need 24/7 coverage. Video analytics can do a lot of the heavy lifting, but you will still need frontline security support to respond to any incidents, plus operators to communicate key details to them.
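The anti-pass-back facility mentioned in step 4 can be sketched as a simple state machine: a badge already recorded inside a zone cannot be used to enter it again until it has exited. This is a deliberately simplified illustration, not any vendor’s implementation (real systems track per-door and per-zone state, timeouts, and escorted visitors):

```python
# Hedged sketch of an anti-pass-back rule: refuse a second entry for a
# badge that has not exited, so one credential cannot be "passed back"
# over a fence to admit a second person.

class AntiPassBack:
    def __init__(self):
        self.inside = set()  # badge IDs currently inside the zone

    def request_entry(self, badge):
        if badge in self.inside:
            return False  # badge never exited: likely passed back
        self.inside.add(badge)
        return True

    def request_exit(self, badge):
        if badge not in self.inside:
            return False  # exit without a matching entry is suspicious
        self.inside.remove(badge)
        return True

acl = AntiPassBack()
assert acl.request_entry("badge-42") is True
assert acl.request_entry("badge-42") is False  # second entry refused
assert acl.request_exit("badge-42") is True
assert acl.request_entry("badge-42") is True   # re-entry after exit
```

The refused second entry is exactly the event a VMS would surface to operators, alongside the camera feed for the door in question.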
It’s worth investing in an intuitive system that your team can begin using almost instantly, with minimal ongoing training. Whatever your team setup, the goal is to make response times for any suspicious activity or emergency as short as possible. 7. Wider workforce training Your wider workforce also has a critical role to play in protecting your data centre. From preventing tailgating through an entrance to keeping passwords out of sight at workstations, your wider workforce needs to understand all current security measures and their responsibilities towards them. They should also be aware of any particularly common or new threats. 8. Review, test and upgrade All security systems require regular checks and maintenance to continue protecting your site. At least once a year, check all of your devices and systems to ensure they’re working as expected. At least once a quarter, review the latest threats and assess your security against them. Also, keep an eye out for new technology that may further improve your physical security. Consolidating different devices Traditionally, the physical security market for data centres has been highly fragmented, with disparate systems making it difficult for security teams to have a complete overview of all happenings on-site. However, an open VMS can consolidate all the different data streams from CCTV, access control, perimeter protection and more, so operators remain fully updated on who is on-site, where, why and when. Another benefit of an open system is the choice of devices supported. This ensures the system meets the needs of each data centre (and its unique risks) and tenant in a consistent way. It also allows best-in-class solutions to be implemented instead of being locked into a single vendor, and provides future-proofing for emerging needs, threats and new devices. Choosing an open system like Milestone will help future-proof your physical security system, making it easier to deploy new solutions and experiment with emerging technologies, giving you peace of mind that malicious actors will be kept well away from your data, today and tomorrow.
Building edge infrastructures faster Speed is of the essence. Driven by industrial transformation, manufacturing companies are having to build new IT and OT infrastructures for edge applications faster than ever before. This demands rack systems based on a comprehensive modular range of variants that can deliver individual solutions simply, safely and in a future-proof way. The answer to these demands is the new Rittal VX IT rack system.
New technologies such as the Industrial Internet of Things (IIoT), Artificial Intelligence (AI) and 5G all demand reliable IT and OT infrastructure systems for edge applications, directly where production is taking place. For companies to remain competitive, the IT and OT infrastructure must be set up or expanded quickly to provide the computing power required. Rittal offers help here with its new VX IT rack system, a pacesetter for rapidly emerging, future-proof IT and OT infrastructures.
A state-of-the-art enclosure platform is setting the pace A new generation of IT racks – in the shape of
Rittal’s VX IT – has been developed, which Rittal bills as the world’s fastest IT rack to configure and assemble. Conceived as a universal and modular variant kit, the solution can be used as a network and server enclosure in a variety of edge applications. A further benefit for customers is that all VX IT variants designed with the configurator have already been tested and certified, with all their components, in accordance with international standards such as UL 2416, IEC 60950 and IEC 62368. This means there is no need for additional certification of the finished, configured system, ensuring maximum freedom and peace of mind when assembling new IT infrastructures. With this solution, IT managers can save valuable time in planning and procurement, while at the
same time being assured that all the components work in perfect harmony. An online configurator guides the user step-by-step through the selection of components and directly performs a plausibility check: https://www.rittal.com/vx-it/en/it-rack. Digital ordering process With VX IT, companies can implement new infrastructures at unprecedented speed, from a single network rack in the production building to a complete edge data centre in distributed production facilities. For this purpose, Rittal maximises the full digitalisation potential to the benefit of its customers. The entire process from selection, configuration and ordering through to delivery is
digitally supported and transparent. During configuration, the 3D model is assembled piece by piece, including the accessories, and the finished 3D model is available for reuse by the user. The individually designed version of the IT rack is produced to a very high standard in a state-of-the-art manufacturing facility, and optimised logistics mean it is delivered quickly and on time. Maximum compatibility and future-proof VX IT offers compatibility with existing Rittal RiMatrix systems and other IT infrastructures assembled with Rittal components. This makes it possible to replace individual components in existing data centres and selectively expand data centres. For example, companies can expand existing RiMatrix installations with the new VX IT and use VX IT-specific components for cooling, UPS or monitoring. This provides investment security for present Rittal data centres. Rapid tool-free assembly Anyone working in a data centre will want a cleverly designed and easy-to-use solution, and Rittal developed VX IT with exactly this in mind. The IT rack is assembled almost entirely without tools, using a time-saving snap-in system. Labelled height units and pitch patterns along the depth make it easy to align equipment with the 482.6 mm (19 in) mounting levels. All panels such as side panels and roof
plates are quickly and easily attached with snap fasteners and positioning aids. The new vertically divided side panels, available as an optional accessory, give improved access for faster installation and servicing; they are equipped with simple hinges and can be opened like a door while also being easy to remove. Horizontally divided side panels are also available, and these again simplify access to, for example, servers. Load capacity up to 1,800 kg Another key feature of VX IT is its great stability: thanks to an improved frame design,
the rack’s 19in vertical sections are more robust than in previous racks. The load capacity has been verified by in-house testing at Rittal as well as external certification by Underwriters Laboratories (UL). There are two variants available: the ‘VX IT standard’ rack supports a static load of 1,500 kg according to Rittal tests, or 1,200 kg according to UL certification. The ‘VX IT dynamic’ version supports a load of 1,800 kg according to Rittal’s own testing process, or 1,500 kg according to UL certification. Everything a rack needs A wide range of accessories is available for VX IT so that it can be specifically customised. This includes options for the doors and side panels, base and roof, as well as other innovations such as the new LED strip for status display. Other accessories include extensions and cable management tools, as well as solutions for monitoring, power supply and asset management in the IT rack. For interior installation, components such as PDUs, UPS systems, IT cooling systems and monitoring solutions are available, as well as modules for early fire detection and extinguishing.
Industry Insight: Be prepared How can data centres stand up to fuel supply shocks, asks Dom Puch, UK Data Centre Lead at Turner & Townsend.
The gas supply crisis in Europe and the UK has exposed the fragility of fuel supplies, and their significant impact on economies and business. A major reduction in availability has challenged our national resilience and increased our sensitivity to price increases, not least in our predominantly gas-fired electricity generation sector. This is not the last time we’ll face such a situation. Since the crisis first emerged, concern has continued to grow over rising energy costs, driven by geopolitical tensions and increased demand for energy in the short term, and the global need to meet net zero targets by 2050. While many data centre operators will have struck long-term deals with electricity suppliers to insulate them from raw material price hikes, the high electricity demand of these facilities could make them very vulnerable to this price volatility – particularly in light of increasing media and political scrutiny over their vast power demands. In the depths of winter, a cold snap could be all it takes for energy demand to exceed supply, triggering emergency measures such as limiting energy usage in industry or even domestic settings. Data centres could be among the hardest hit if policymakers decide to reprioritise the recipients of limited energy supplies. Data centre owners and operators must act now to improve their operating efficiency, mobilise on-site electricity generation and grow their storage capacity to ensure they are resilient and able to meet ever-increasing demand from consumers.
Improving operating efficiency
The obvious way to build resilience is to reduce the overall need for electricity through enhanced operating efficiencies. The way operators measure and optimise energy usage is key here. As many data centre operators are already demonstrating, performance data should be collected and stored in a standardised way, so that meaningful trends can be identified and issues anticipated. This should be married with clear and agreed timescales for review and analysis. Operators should also be considering effective strategies for load shedding, so that energy usage can be matched to supply and, in times of limited electricity input, usage can be reprioritised. Only with total visibility can operators act quickly and decisively.

In the longer term, technology developments are moving ahead at an unprecedented speed, and the ongoing work from chip developers to explore new processor topologies could pave the way for much more efficient operation of data centres. Pressures on finding sufficient available land and meeting energy demand would be significantly eased were this to be rolled out at scale.

Reinvigorating on-site generation
Most data centres have very little on-site self-generation capacity to supplement their electricity demand. What capacity does exist is more
often found through diesel-fired generators in the UK, and gas turbines in Ireland. These are short-term, carbon-intensive solutions, with diesel storage in particular limited to up to 72 hours of full-capacity usage. To provide the sector with the resilience it needs during fuel supply shocks, and to meet the requests of some leading markets for data centres to bring more of their own power solutions to the table, on-site generation will need to be significantly upscaled.

While acting on this swiftly, data centre owners and operators also need to plan ahead and ensure that any future generation capability is itself sustainable – otherwise, they will find themselves back at square one, unable to source the fuels needed for continuity of electricity supply. Renewable energy sources such as solar and biomass could gradually be brought into the overall supply mix for generators to achieve this; however, any new technologies required for delivery will have commercial implications.

Bolstering storage capacity
Power generation is only one element of this equation – the other is fuel storage. Data centres are going to become increasingly reliant on this in future supply crises or shortages. The long-term solution is likely to lie in better battery storage, which can de-risk fluctuation in the current gas-led network and support the transition to wind and solar power. The commercialisation of this technology at the scale needed for data centres is still some way off, but investment within the UK and Europe in the battery ‘gigafactory’ industry is growing. In the outlook for our Data Centre Cost Index 2021, we anticipated that investment in battery capacity was going to be a major focus for the sector going forward, and with the burgeoning electric vehicle industry there is going to be stiff competition for lithium resources.
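The load-shedding approach described under ‘Improving operating efficiency’ can be sketched in a few lines of code. This is a minimal, illustrative scheme – the load names, priorities and power figures are all invented for the example – in which the most critical loads are served first and whatever no longer fits within the available supply is shed:

```python
# Minimal load-shedding sketch: serve the most critical loads first and
# shed whatever no longer fits within the available supply.
# All load names, priorities and power figures are hypothetical.

def shed_loads(loads, available_kw):
    """loads: list of (name, draw_kw, priority); higher priority = more critical.
    Returns the names of loads kept powered within available_kw."""
    kept, used = [], 0.0
    for name, draw, _priority in sorted(loads, key=lambda l: -l[2]):
        if used + draw <= available_kw:
            kept.append(name)
            used += draw
    return kept

loads = [
    ("critical-IT", 600.0, 3),   # servers: highest priority
    ("cooling", 300.0, 2),       # must follow the IT load closely
    ("office-HVAC", 150.0, 1),   # first candidate for shedding
]
print(shed_loads(loads, 950.0))  # office-HVAC is shed under a 950 kW cap
```

A real implementation would sit on top of live metering data and agreed contractual priorities; the point is simply that reprioritisation only works if loads are already classified and measured in a standardised way.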
Sourcing an alternative
As the dual pressures of fuel scarcity and the need to decarbonise the sector coincide, any efforts by the industry to bolster diesel storage
capacity as a short-term quick fix for back-up power will destabilise carbon reduction plans. With 40% of respondents to our latest Cost Index survey believing that net zero carbon data centres are possible within five years, greener alternatives need to be looked at more carefully.

Recycled biofuels could be a better alternative, and one emerging option is Hydrotreated Vegetable Oil (HVO). The fuel boasts a 90% reduction in CO2 emissions, and it can be used immediately with existing generators, requiring minimal operational investment to make the switch. While HVO is roughly 25% more expensive than red diesel, the end is near for cheap diesel, with restrictions on red diesel’s use in all but a handful of settings, such as agriculture, coming into force from April 2022 – so now is the time to explore HVO and other green alternatives.
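The trade-off quoted above – HVO roughly 25% dearer than red diesel but with around 90% lower CO2 emissions – can be put into rough numbers. In the back-of-envelope sketch below, the baseline fuel price and annual consumption are illustrative assumptions; the ~2.68 kg CO2 per litre figure is a commonly used combustion emissions factor for diesel:

```python
# Back-of-envelope HVO vs diesel comparison using the figures quoted in
# the article: ~25% price premium for HVO, ~90% CO2 reduction. Baseline
# price and annual consumption are hypothetical; 2.68 kg CO2/litre is a
# commonly used combustion emissions factor for diesel.

DIESEL_PRICE_PER_L = 1.00                       # GBP/litre (illustrative)
DIESEL_CO2_KG_PER_L = 2.68                      # kg CO2 per litre burned
HVO_PRICE_PER_L = DIESEL_PRICE_PER_L * 1.25     # ~25% premium
HVO_CO2_KG_PER_L = DIESEL_CO2_KG_PER_L * 0.10   # ~90% reduction

def fuel_cost_and_co2(litres, price_per_l, co2_kg_per_l):
    """Return (total cost, total kg CO2) for a given volume of fuel."""
    return litres * price_per_l, litres * co2_kg_per_l

annual_litres = 50_000  # hypothetical yearly generator test-run consumption
for name, price, co2 in [("diesel", DIESEL_PRICE_PER_L, DIESEL_CO2_KG_PER_L),
                         ("HVO", HVO_PRICE_PER_L, HVO_CO2_KG_PER_L)]:
    cost, emissions = fuel_cost_and_co2(annual_litres, price, co2)
    print(f"{name}: £{cost:,.0f} for {emissions:,.0f} kg CO2")
```

On these assumptions, the price premium buys a reduction from roughly 134 tonnes of CO2 a year to about 13 – and the April 2022 red diesel restrictions will narrow the cost gap further.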
Long-term thinking
No sector can defer its energy consumption until the future arrives. With price volatility, a current reliance on fossil fuels and an urgent requirement to drive towards net zero, the data centre market needs to consider its power options. In the short term, operators should be exploring how they can maximise efficiency and improve resilience to help hedge against rising energy prices and associated supply risks. The sector’s long-term playbook is to reduce power requirements through server technology, and to establish which renewable energy sources can form an effective part of operators’ storage and on-site generation mix. We don’t have all the answers yet, but it’s vital we act now and start finding those answers to ensure a sustainable future for data centres.
If you sell products or work on projects within Power, Lighting, Fire Safety & Security, Energy Efficiency and Data Centres, make sure you enter:
Visit awards.electricalreview.co.uk
Your business and your team could be celebrating in the spotlight at the ER & DCR Excellence Awards Gala Dinner on May 19, 2022 at the breathtaking Christ Church Spitalfields in London!
Following the success of our second annual Excellence Awards in 2019, we invite electrical manufacturers, contractors and project owners to enter the 2022 Awards. The recipients of the 2019 Awards included:
TXplore robotic transformer inspection service – Entry by ABB Power Grids
UPS, temperature control, monitoring and diagnostics at Cineca, Italy – Entry by Vertiv
BS67 Smart Ceiling Rose – Entry by Adaptarose Ltd
University of Northampton – Entry by Simmtronic Lighting Control
The Hot Connection Indicator – Entry by Safe Connect
Combined heat & power at DigiPlex Stockholm data centre – Entry by DigiPlex
Refurbished servers at WINDcores, Germany – Entry by Techbuyer
HyperPod Rack Ready Data Centre System – Entry by Schneider Electric
4D Gatwick cooling upgrade – Entry by 4D Data Centres Ltd
Green Mountain Colocation Supplier – Entry by Green Mountain
SPONSORSHIP OPPORTUNITIES AVAILABLE
Contact us +44 (0) 207 062 2526

The Awards include the following categories:
Power – Product of the Year – Sponsored by Omicron
Power – Project of the Year – Sponsored by Omicron
Lighting – Product of the Year
Lighting – Project of the Year
Fire Safety & Security – Product of the Year
Fire Safety & Security – Project of the Year
Energy Efficiency – Product of the Year
Energy Efficiency – Project of the Year
Innovative – Project of the Year – Sponsored by Aico
Sustainable – Project of the Year
Data Centre Design & Build – Product of the Year – Sponsored by Centiel
Data Centre Design & Build – Project of the Year
Data Centre Cooling – Product of the Year
Data Centre Cooling – Project of the Year
Data Centre Colocation – Supplier of the Year – Sponsored by Vertiv
Technical Leader of the Year
Consultancy/Contractor of the Year – Sponsored by ECA
Outstanding Project of the Year – Sponsored by Riello UPS
Outstanding Product of the Year – Sponsored by Riello UPS
Visit the website to check out this year’s awards and submit your entries by March 6, 2022.
This 35kW-1MW Computer Room Air Handler (CRAH) is an evolution of the award-winning SmartCool precision cooling range, developed to meet demand for ultra-efficient, large-capacity precision cooling systems in colocation and hyperscale data centres. The SmartCool ONE precision cooling system has a cooling capacity of up to 1MW, optimised air and water conditions, and an intelligent controls platform to maximise efficiencies and cooling power. With SmartCool ONE, 1MW means 1MW. The latest high-capacity backward-curved EC fans have been included in an underfloor deck to deliver the powerful air flow required to serve the largest facilities in the world. www.airedale.com/products/precision-ac/smartcool-one/
The Airedale DCS range comprises enhanced chillers, specifically engineered by Airedale’s DCS team in Leeds, UK, to meet the demands of the data centre industry. DCS chillers deliver powerful performance, without being power hungry. Airedale’s Enhanced Free Cooling™ method can deliver up to 39% energy savings over standard methods. Airedale’s DCS engineers have re-engineered the V-Block evaporator coil arrangement to incorporate a huge five-row free cooling coil. This advancement in free cooling technology places the DCS chiller range at the forefront of greener data centre cooling solutions. www.airedale.com/products/dcs-chillers/
IQity
IQity, Airedale’s IoT-enabled technology framework, delivers unparalleled uptime and efficiency benefits by connecting smart building software and hardware in a unique way. IQity works at a product, system and site level to make sense of your critical systems and step in when you need a hand. It is the only framework that manages normal building cooling 24/7, handles an emergency in real time and gives the breadth of data necessary to prevent threats and protect your bottom line. It is not just a software system on its own, but rather a philosophy and vehicle that enables Airedale to apply its products and software in a way which delivers unparalleled efficiency and uptime benefits to high-stakes, critical industries. www.airedale.com/products/iqity/
Airedale Cloud Diagnostics
Cloud Diagnostics is a cloud-based monitoring and diagnostics platform developed for owners of mission-critical HVAC plant. Airedale’s extensive field experience has been leveraged, along with leading-edge data science, to develop powerful diagnostic tools, including a ground-breaking refrigerant leak detection algorithm. The solution allows HVAC products to be connected, monitored and analysed via a secured communication channel to the Airedale Cloud Diagnostics cloud. It provides a live dashboard with alerts, ongoing performance analysis and predictive maintenance, with a ‘Smartboard’ interface that visualises real-time data. The solution uses machine learning to analyse the performance of a unit over time and recognise ‘failure patterns’, warning the user of a potential failure before it happens. www.airedale.com/products/cloud-diagnostics/
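As a general illustration of how such ‘failure pattern’ recognition can work – this is not Airedale’s actual algorithm, just a minimal sketch of the underlying idea – a monitoring tool might flag readings that stray too far from a rolling baseline of recent samples:

```python
# Minimal anomaly-detection sketch: flag readings that deviate from the
# rolling mean of the preceding `window` samples by more than `threshold`
# standard deviations. Illustrative only; real predictive-maintenance
# platforms use far richer models than this.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indices of readings that stray more than `threshold` sigma
    from the rolling baseline formed by the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady compressor temperature with one sudden excursion at the end
data = [40.0, 40.2, 39.9, 40.1, 40.0, 40.3, 39.8, 40.1, 40.0, 40.2, 47.5]
print(flag_anomalies(data))  # the excursion at index 10 is flagged
```

The value of such a scheme is the early warning: the excursion is caught the moment it appears, rather than after a unit has already tripped or failed.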
The Rittal VX IT rack system: building edge infrastructures faster
Rittal’s new VX IT rack system is a pacesetter for rapidly emerging, future-proof IT and OT infrastructures. Conceived as a universal and modular variant kit, the solution can be used as a network and server enclosure in a variety of edge applications. An additional benefit for customers is that all VX IT variants designed with the configurator have already been tested and certified, with all their components, in accordance with international standards such as UL 2416, IEC 60950 and IEC 62368. This means there is no need to additionally certify the finished, configured system, ensuring maximum freedom and peace of mind when assembling new IT infrastructures. With this solution, IT managers can save valuable time in planning and procurement, while at the same time being assured that all the components work in perfect harmony. An online configurator guides the user step-by-step through the selection of components and directly performs a plausibility check: www.rittal.com/vx-it/en/it-rack. www.rittal.com • firstname.lastname@example.org
Future Facilities launches 6SigmaDCX
Future Facilities has launched Release 16 of its Computational Fluid Dynamics (CFD) software and digital twin technology, 6SigmaDCX. The updates will deliver improved performance, accuracy and speed to support a broad range of data centre teams in decision making and achieving sustainable success. The new 6SigmaDCX offers a range of features to enhance data centre design and operations. The key design highlights include: • Enhanced View developments which enable users to showcase photorealistic results in reports and facilitate animation creation • Improvements to the 1D Flow Network to ensure fast, detailed thermal flow analysis of the liquid and air flow routes • New result views and object configurations, such as Isosurfaces, which deliver innovative ways to visualise external models and empower risk analysis at any stage of design • The ability to import BIM models directly into 6SigmaDCX for faster model creation • Performance improvements, namely Future Facilities’ new back tracing method, ensuring solar radiation calculations solve up to 30 times faster. www.futurefacilities.com • email@example.com
Times they are a-changing
Davide Villa, Business Development Director EMEAI at Western Digital, looks back at the history of the data centre and how data storage has shifted with the times.

Despite their modern implications and future-forward technologies, the data centre is much older than it lets on. Data centres date back to the 1940s, when the first dedicated computer rooms became home to large military machines that were set to work on specific data tasks. With the 1960s came the mainframe computer. Remember the episode in Mad Men when the ad office loses its lunchroom to the colossal computer? Well, this happened all over the world, with IBM leading the charge, filling dedicated mainframe rooms in large organisations and government agencies. Indeed, in some cases, these increasingly powerful and expansive machines needed their own free-standing buildings, which were to become the first data centres. Then, in the 1970s, as Unix (and, later, Linux) and IT became more prominent, specific rooms with equipment and networking would popularise the name ‘data centre’.

In the 1980s, PCs were introduced that were typically connected to remote servers to enable them to access large data files. By the time the internet became ubiquitous in the 1990s, internet exchange (IX) buildings had sprung up in key cities to serve the needs of the World Wide Web. These IX buildings were the most important data centres of their time, serving most people’s needs. Since then, the need for data storage has grown in lockstep with storage innovation, which has become a critical factor. Storage devices were manufactured in many form factors to fit the needs of the data centre, and ultimately helped to power its incredible growth over the decades.
The start of storage
To understand the role of storage in the data centre, a brief glance back in time is the place to start. After their inception in 1956, hard disk drives became the preferred non-volatile storage device for computing, which was still the case when AOL created the first modern data centre in 1997 at the start of the dotcom bubble. This kickstarted a boom in data centres, with companies using their remote servers to quickly get their websites online. However, as more data was created and captured, CPU speeds went through the roof, churning through the information ever more quickly. And with this, the industry was galvanised into action to accelerate storage to the speed of compute.

With no blueprint for how to address this industry challenge, storage took on a variety of new forms. Over the years, experiments with semiconductors led to the adoption of SSDs for the enterprise sector. Then came the evolution of SATA connections to PCIe, and the emergence of M.2 slots as well. Today, there are five major form factors used in data
centres, a marked expansion from the one that started it all. Looking back through the decades of data centre evolution, there are some common themes facing storage developments: demands for speed, constraints on storage, and a willingness to try anything to limit bottlenecking. While CPUs remained iterative, storage had to shape its own path. Much like clay in the hands of a sculptor, over the years storage has been moulded into every conceivable shape and size. From the spinning disk to slotted memory and beyond, these innovations have been the cornerstone of flexibility in the data centre.

That flexibility is also the data centre’s primary strength. HDDs and SSDs coexist while serving different purposes and, with access to both, data hubs can find a balance between cost and speed limitations. Today, the major players in the enterprise sector, the behemoths of cloud service providers, are looking to build data centres using custom components, and engineers are working hard to meet these use-case-specific requests. Combined with the unrelenting rate of data creation in the world, this means that storage continues to evolve in step with the digital world.

The pace of change
Change is moving at pace – faster than ever before. To keep up, engineers are currently tinkering with 13 new form factors, more than double the five in use today. How many will make it to mass production? To avoid a bottleneck, it’s essential that storage speeds synchronise with the speed of computation. The new E1.S and E1.L drives are the front runners, as they are suited for hyperscale data centres and high-capacity use cases, respectively. But emergent use cases may make one of the other 11 form factors a better contender for mass production. It’s anyone’s guess. While technology enables new solutions, the key driver is the exponential growth of data. Our increasingly automated, digitised lives leave us entrusting all our files into the ether.
Yet maintaining the data centres that hold all this is anything but simple. Along with building, running and chilling these massive data centres, the physical media on which the data is stored requires constant upkeep. And with extra storage capacity being added all the time, tending to it all is an increasing burden for cloud computing providers.

This demand for cloud applications has surged during the coronavirus pandemic. According to the property company Knight Frank, take-up of data centre capacity almost doubled in cities such as Madrid, Warsaw and Milan compared with 2019. Data hub mergers and acquisitions totalled almost $35bn globally in 2020, more than five times the volume of deals in 2019 and $10bn ahead of the previous annual record set in 2017.

Data storage needs to continue to shapeshift safely, intuitively and cost-effectively to best support the data it serves. Without the versatility of storage components, the data centre today would look radically different, proving the importance, and persistence, of storage. In the end, data must be stored. In this industry, it’s the one element that’s here forever.