DCR Q4 2022


Q4 2022

www.datacentrereview.com

Improved Data Centre Resilience and Efficiency is a cool outcome from the Schneider Electric upgrade at University College Dublin…

Find out how Schneider Electric and Total Power Solutions, Ireland have:

• Transformed IT services round the clock
• Released valuable space for student amenities
• Improved facility reliability and efficiency at UCD.

se.com

AI & Machine Learning: Can AI play a role in cyber security? (page 12)

Event Preview: Don't miss Critical Insight, the virtual event taking place 22-23 November 2022 (page 28)

Industry Insight: How the sector can go green (page 34)



Contents

News
04 • Editor's Comment: Insight into the future
06 • News: Latest news from the sector.

Features
12 • Artificial Intelligence & Machine Learning: Dustin Rigg Hillard, Chief Technology Officer at eSentire, asks – when does AI actually provide value when it comes to cyber security?
14 • Data Centre Design & Operations: Louis McGarry, Sales and Marketing Director at Centiel UK, explores what data centre operators should consider to ensure their critical power protection is ready for the future.
24 • Cooling: Data centre water use is once again under the microscope. Adam Yarrington, Business Unit Leader, Director – Data Centres at Airedale by Modine, asks: are you ready to change course?
30 • Edge Computing: Dom Couldwell, Field CTO at DataStax, explores how to get data consistency across edge and centralised data sets, and the value that can be achieved out of edge data.

Regulars
34 • Industry Insight: Chris Pennington, Director of Energy and Sustainability at Iron Mountain, explores how clean energy can help drive data centres towards a sustainable future.
36 • Products: Innovations worth watching.
38 • Final Say: Billy Durie, Global Sector Head for Data Centres at Aggreko, discusses the potential for flexible energy models to help navigate the global energy crisis.


Editor's Comment

Insight into the future

Somehow, we're staring down the end of 2022. This year has felt a bit like an exercise in desensitisation. Just when we thought it couldn't get any worse, the disaster factor got dialled up a notch.

With the recent antics in government stealing the headlines, we can't be distracted from the looming winter energy crisis facing the UK. It's going to hit both consumers and businesses hard. Families are justifiably worried about the cost of warming their homes this winter, and the National Grid has warned of potential blackouts headed the country's way due to the electricity supply problems we currently find ourselves in. Data centres across the country will be turning their focus towards disaster-proofing facilities ahead of possible outages – which, as we already saw during the summer heatwave, can have dire consequences.

This is just one of the many issues the sector faces as 2022 draws to a close. In an industry with so many plates spinning at once, there's no better time for the inaugural Critical Insight to take place. This online two-day event will give a platform to key industry insiders to explore the challenges coming our way, and how we can confront them head on. Taking place 22-23 November, you've still got time to register at www.critical-insight.co.uk. And of course, I'll be there to keep things ticking along – I'm looking forward to seeing you there.

Please don't hesitate to reach out to me at kayleigh@datacentrereview.com, and don't forget to join us on Twitter (@dcrmagazine) and on LinkedIn (Data Centre Review).

Kayleigh Hutchins, Editor

EDITOR

Kayleigh Hutchins kayleigh@datacentrereview.com

CONTRIBUTING EDITOR

Jordan O’Brien jordano@sjpbusinessmedia.com

DESIGN & PRODUCTION

Alex Gold alexg@sjpbusinessmedia.com

ACCOUNT MANAGER

Kelly Baker +44 (0)207 0622534 kellyb@datacentrereview.com

GROUP ACCOUNT DIRECTOR

Sunny Nehru +44 (0) 207 062 2539 sunnyn@sjpbusinessmedia.com

GROUP COMMERCIAL DIRECTOR

Fidi Neophytou +44 (0) 7741 911302

fidin@sjpbusinessmedia.com

PUBLISHER

Wayne Darroch

PRINTING BY Buxton

Paid subscription enquiries: subscriptions@electricalreview.co.uk
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262

Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher's discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register

Electrical Review is published by

2nd Floor, 123 Cannon Street, London EC4N 5AU
0207 062 2526

Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher, SJP Business Media.

ISSN 0013-4384 – All editorial contents © SJP Business Media

Follow us on Twitter @DCRmagazine

Join us on LinkedIn





News

ATLASEDGE ACQUIRES DATACENTER ONE


The latest highlights from all corners of the tech industry.

AtlasEdge has announced the acquisition of German data centre provider, Datacenter One (DC1). The acquisition includes two data centres in Stuttgart, one in Dusseldorf and another in Leverkusen, with further sites under construction. The newly-acquired facilities will join AtlasEdge's existing locations in Berlin and Hamburg. DC1's senior management team will remain in place following the acquisition, with Wolfgang Kaufmann, CEO at DC1, to join AtlasEdge's management team.

"This is a highly strategic transaction for AtlasEdge," said Giuliano Di Vitantonio, CEO at AtlasEdge. "Germany is an important part of our expansion plans and a market that has seen customer demand rise across multiple metros. Acquiring DC1 transforms AtlasEdge into Germany's leading distributed platform, with ready-to-sell capacity in key locations across the country. We look forward to working with our new colleagues and continuing to grow our footprint across the continent."

UK gov holds talks with data centres to avoid winter outages

The UK government has held discussions with data centre operators about using diesel fuel to keep facilities running if needed during the winter energy crisis, according to reports.

First reported by Bloomberg, government officials are considering allocating diesel for backup generators if the National Grid needed to cut power, as well as whether data centres should be designated as critical national infrastructure (CNI). Data centre operators involved in the talks have reportedly requested that the government allow sites to start up diesel generators 15 minutes before an expected blackout to allow a safer transference of power, as well as potentially using data centre backup systems as part of demand side response to ease pressure on the grid at peak times.

Matthew Evans, Markets Director at techUK, told Bloomberg, "Our members have taken all necessary precautions by filling up their reserves, but we need to see the government take necessary measures to ensure a continuous supply in the unlikely event of prolonged blackouts."


Datum acquires Teledata UK

Datum Datacentres has acquired Manchester-based colocation data centre, Teledata UK.

Launched in 2012, Datum was acquired by UBS Asset Management's Real Estate and Private Markets business in September 2021, with the intention of expanding into key regional markets – of which the acquisition of Teledata is the first. Datum's existing site is located in Farnborough. The Teledata facility is currently fully utilised and provides room for expansion into an adjacent building on the site. UBS and Datum were advised by Arup Corporate Finance and Pinsent Masons. The vendors were advised by Blacksquare Advisory and Addleshaw Goddard.

Dominic Phillips, CEO of Datum, said, "Manchester has always been our primary target market for expansion and the region continues to show strong growth. Teledata has had a presence in the Manchester market for many years and its recent rapid growth consolidates this position. With an additional fully powered and adjacent site secured for immediate development, we look forward to providing further quality colocation capacity to the Manchester market."

Matt Edgley, Commercial Director at Teledata, added, "The acquisition is fantastic news for the Teledata team and for the Manchester region. Having worked closely with the Datum management team previously, I know that we're in the ideal position for Teledata and our clients to benefit from an aligned approach to quality, the environment and service levels as we continue to invest in and deliver the highest quality sites in the region, with the strong backing and commitment of UBS."



TELEHOUSE ANNOUNCES PLANS FOR SECOND PARIS DATA CENTRE

Telehouse International has begun construction on a second data centre at its TH3 Paris Magny campus in France. The 12,000 m² site will provide 18 MW of power and is scheduled to go live in October 2023. The new data centre will have a direct connection to TH2 Paris Voltaire via Telehouse Metro Connect's fibre optic links of up to 400 Gbps.

The construction utilises sustainable technologies, such as free chilling integrated with air-cooled chillers to minimise water and electricity consumption, enabling the new site to achieve a PUE of less than 1.3.

According to Telehouse, the expansion is part of an investment plan of €1 billion over five years, of which €50 million has been allocated to the opening of the new data centre at the TH3 Paris Magny campus.

Castrol to build immersion cooling test facilities

BP subsidiary Castrol plans to build an immersion cooling test and development facility for data centres at its global headquarters in Pangbourne, UK. Castrol will be working in collaboration with Submer to develop new immersion fluids, and will install Submer's SmartPod and MicroPod tank systems, which have been adapted to test new fluids and new server equipment. Earlier this year, the companies announced a partnership to encourage the adoption and development of immersion cooling technology to enable more sustainable data centre operations.

The new site will also be used to develop and test methods to capture and reuse the heat from data centre operations to further increase operational efficiency. The project comes as part of BP's earlier announcement that it will invest up to £50 million to set up a new battery test centre and analytical laboratories at the same site.

"Immersion cooled data centres could bring huge gains in performance and big reductions in energy wasted in cooling. Together, Submer and Castrol aim to deliver sustainable solutions as demand for computer power continues to surge. This investment in proven Submer systems is a key step towards joint development with the goal of enhancing performance and improving data centre sustainability even further through integrated energy solutions," said Rebecca Yates, BP's Technology Vice President, Advanced Mobility and Industrial Products.


IONOS & Fasthosts unveil data centre in Worcester

IONOS and its UK subsidiary Fasthosts have opened a new data centre at Worcester Six Business Park, Worcestershire. The 30,729 sqft modular data centre – situated on a 43,708 sqft site – represents a £21 million investment in the local area. The facility will use HVO diesel for its generators and features solar photovoltaic panels, which will account for up to 10% of the energy use at the site. According to a statement from IONOS, "All necessary carbon used for the construction of the building envelope has been compensated."

Plans for the site were originally put forward in 2020, and work began on the facility in Q1 2021. IONOS and Fasthosts have now begun migrating data from their current data centre, located in Gloucester.

Henning Kettler, Chief Technology Officer at IONOS, said, "We are delighted to officially announce the opening of our new site in Worcester, which demonstrates our ongoing commitment to our customers' needs, the UK market, and ongoing investment into infrastructure and jobs within the industry.

"Not only will the centre host one of the largest cloud platforms in Europe, but we're incredibly proud of the features which have created the most modern, environmentally friendly IONOS data centre to date."

Simon Yeoman, Fasthosts Chief Executive Officer, added, "It's fantastic to be able to bring our customers along with us as we take a big step into the future with the launch of this state-of-the-art data centre.

"In setting up our new Worcester data centre, we are now in the process of migrating our existing Gloucester data centre to the new location."





SPONSORED FEATURE

Improved data centre resilience and efficiency is a cool outcome from Schneider Electric upgrade at UCD

The Future Campus project at University College Dublin (UCD) called for space utilised by facility plant and equipment to be given up for development to support the student population. Total Power Solutions, an Elite Partner to Schneider Electric, worked with UCD's IT services organisation to upgrade its primary data centre cooling system, to provide greater resilience for its HPC operations whilst releasing valuable real estate.




University College Dublin (UCD) is the largest university in Ireland, with a total student population of about 33,000. It is one of Europe's leading research-intensive universities with faculties of medicine, engineering, and all major sciences, as well as a broad range of humanities and other professional departments.

The university's IT infrastructure is essential to its successful operation, for academic, administration and research purposes. The main campus at Belfield, Dublin, is served by two on-premises data centres that support all the IT needs of students, faculty and staff, including high-performance computing (HPC) clusters for computationally intensive research. The main data centre in the Daedalus building hosts all the centralised IT including storage, virtual servers, identity and access management, business systems, networking, and network connectivity, in conjunction with a smaller on-premises data centre.

"Security is a major priority, so we don't want researchers having servers under their own desks. We like to keep all applications inside the data centre, both to safeguard against unauthorised access – as universities are desirable targets for hackers – and for ease of management and efficiency," says Tom Cannon, Enterprise Architecture Manager at UCD.

Challenges: ageing cooling infrastructure presents downtime threat and reputational damage

Resilience is a key priority for UCD's IT services. Also, with its campus located close to Dublin's city centre, real estate is at a premium. There are continuing demands for more student facilities and consequently the need to make more efficient use of space by support services, such as IT. Finally, there is a pervasive need to maintain services as cost-effectively as possible and to minimise environmental impact in keeping with a general commitment to sustainability.

As part of a major strategic development of the university's facilities, called Future Campus, the main Daedalus data centre was required to free up some outdoor space taken up by a mechanical plant and make it available for use by another department. The IT services organisation took this opportunity to revise the data centre cooling architecture to make it more energy and space efficient, as well as more resilient and scalable.

"When the data centre was originally built, we had a large number of HPC clusters and consequently a high rack power density," comments Cannon. "At the time we deployed a chilled-water cooling system as it was the best solution for such a load. However, as the technology of the IT equipment has advanced to provide higher processing capacity per server, the cooling requirement has reduced considerably even though the HPC clusters have greatly increased in computational power."

One challenge with the chilled water system was that it relied upon a single set of pipes to supply the necessary coolant, which therefore represented a single point of failure. Any issues encountered with the pipework, such as leaks, could therefore threaten the entire data centre with downtime. This could create problems at any time in the calendar; however, were it to occur at critical moments, such as during exams or registration, it would have a big impact on the university community. Reputational damage, both internally and externally, would also be significant.

Solution: migration to Schneider Electric Uniflair InRow DX cooling solution resolves reliability, scalability and space constraints

UCD IT services took the opportunity presented by the Future Campus project to replace the existing chilled water-based cooling system with a new solution utilising Schneider Electric's Uniflair InRow Direct Expansion (DX) technology, based on a refrigerant vapour expansion and compression cycle. The condensing elements have been located on the roof of the data centre, conveniently freeing up significant ground space on the site formerly used for a cooling plant.

Following on from an open tender, UCD selected Total Power Solutions, a Schneider Electric Elite Partner, to deliver the cooling update project. Total Power Solutions had previously carried out several power and cooling infrastructure installations and upgrades on the campus and is considered a trusted supplier to the university. Working together with Schneider Electric, Total Power Solutions was responsible for the precise design of an optimum solution to meet the data centre's needs and its integration into the existing infrastructure.

A major consideration was to minimise disruption to the data centre layout, keeping in place the Schneider Electric EcoStruxure Row Data Centre System (formerly called a Hot Aisle Containment Solution, or HACS). The containment solution is a valued component of the physical infrastructure, ensuring efficient thermal management of the IT equipment and maximising the efficiency of the cooling effort by minimising the mixing of the cooled supply air and the hot return – or exhaust – airstream.

The new cooling system provides a highly efficient, close-coupled approach which is particularly suited to high density loads. Each InRow DX unit draws air directly from the hot aisle, taking advantage of higher heat transfer efficiency, and discharges room-temperature air directly in front of the cooling load. Placing the unit in the row yields 100% sensible capacity and significantly reduces the need for humidification.

Cooling efficiency is a critical requirement for operating a low PUE data centre, but the most obvious benefit of the upgraded cooling system is the built-in resilience afforded by the 10 independent DX cooling units. No longer is there a single point of failure; there is sufficient redundancy in the system that if one of the units fails, the others can take up the slack and continue delivering cooling with no impairment of the computing equipment in the data centre. "We calculated that we might just have managed with eight separate cooling units," says Cannon, "but we wanted the additional resilience and fault tolerance that using 10 units gave us."
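As a rough illustration of the redundancy headroom described here, the short sketch below checks how many cooling units a given heat load needs and what margin survives a unit failure. The per-unit capacity and load figures are hypothetical and are not taken from the UCD installation.

```python
import math

def cooling_redundancy(it_load_kw: float, unit_capacity_kw: float, installed_units: int):
    """Check how much cooling headroom remains as units fail.

    it_load_kw       -- heat load to be removed (assumed equal to IT power draw)
    unit_capacity_kw -- sensible cooling capacity of one in-row unit (hypothetical figure)
    installed_units  -- number of independent units installed
    """
    minimum_units = math.ceil(it_load_kw / unit_capacity_kw)
    report = {"minimum_units": minimum_units}
    for failed in range(0, installed_units - minimum_units + 2):
        available_kw = (installed_units - failed) * unit_capacity_kw
        report[f"{failed}_failed"] = {
            "available_kw": available_kw,
            "covers_load": available_kw >= it_load_kw,
        }
    return report

# Hypothetical example loosely echoing the article: a sub-100 kW IT load,
# 12.5 kW per unit, 10 units installed (8 would only just cover the load).
print(cooling_redundancy(it_load_kw=100, unit_capacity_kw=12.5, installed_units=10))
```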




Additional benefits of the new solution include its efficiency – the system is now sized according to the IT load and avoids overcooling of the data centre, both to reduce energy use and improve its PUE. In addition, the new cooling system is scalable according to the potential requirement to add further HPC clusters or accommodate innovations in IT, such as the introduction of increasingly powerful but power-hungry CPUs and GPUs.

"We designed the system to allow for the addition of four more cooling units if we need them in the future," says Cannon. "All of the power and piping needed is already in place, so it will be a simple matter to scale up when that becomes necessary."

Implementation: upgrading a live environment at UCD

It was essential while installing the new system that the data centre kept running as normal and that there was no downtime. The IT department and Total Power Solutions adopted what Tom Cannon calls a 'Lego block' approach; first to consolidate some of the existing servers into fewer racks and then to move the new cooling elements into the freed-up space. The existing chilled-water system continued to function while the new DX-based system was installed, commissioned and tested. Finally, the obsolete cooling equipment was decommissioned and removed.

Despite the fact that the project was implemented at the height of the Covid pandemic, with all the restrictions on movement and the negative implications for global supply chains, the project ran to schedule and the new equipment was successfully installed and implemented without any disruption to IT services at UCD.

Results: a cooling boost for assured IT services and space freed for increased student facilities

The new cooling equipment has resulted in an inherently more resilient data centre with ample redundancy to ensure reliable ongoing delivery of all hosted IT services in the event that one of the cooling units fails. It has also freed up much valuable real estate that the university can deploy for other purposes. As an example, the building housing the data centre is also home to an Applied Languages department. "They can be in the same building because the noise level of the new DX system is so much lower than the chilled-water solution," Cannon says. "That is clearly an important issue for that department, but the DX condensers on the roof are so quiet you can't tell that they are there. It's a much more efficient use of space."

With greater virtualisation of servers, the overall power demand for the data centre has been dropping steadily over the years. "We have gone down from a power rating of 300kW to less than 100kW over the past decade," Cannon adds. The Daedalus data centre now comprises 300 physical servers, but there are a total of 350 virtual servers split over both data centres on campus. To maximise efficiency, the university also uses EcoStruxure IT management software from Schneider Electric, backed up with a remote monitoring service that keeps an eye on all aspects of the data centre's key infrastructure and alerts IT services if any issues are detected.

The increasing virtualisation has seen the Power Usage Effectiveness (PUE) ratio of the data centre drop steadily over the years. PUE is the ratio of total power consumption to the power used by the IT equipment only and is a well understood metric for electrical efficiency. The closer to 1.0 the PUE rating, the better. "Our initial indications are that we have managed to improve PUE from an average of 1.42 to 1.37," says Cannon. "However, we're probably overcooling the data centre load currently, as the new cooling infrastructure settles. Once that's happened, we're confident that we can raise temperature set points in the space and optimise the environment in order to make the system more energy efficient, lower the PUE and get the benefit of lower cost of operations.

"The overall effects of installing the new cooling system are therefore: greater resilience and peace of mind; more efficient use of space for the benefit of the university's main function of teaching; greater efficiency of IT infrastructure and consequently a more sustainable operation into the future."
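For readers unfamiliar with the metric, the arithmetic behind the quoted PUE figures is simple enough to show in a few lines. The sketch below uses the roughly 100 kW IT load and the 1.42-to-1.37 improvement mentioned above; the annual-hours constant is the only other input, and the result is indicative only.

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_kw

def overhead_saving_kwh(it_kw: float, pue_before: float, pue_after: float,
                        hours: float = 8760) -> float:
    """Annual non-IT (overhead) energy saved when PUE improves at a constant IT load."""
    overhead_before = it_kw * (pue_before - 1)
    overhead_after = it_kw * (pue_after - 1)
    return (overhead_before - overhead_after) * hours

# Figures from the case study: PUE improved from 1.42 to 1.37 at an IT load of roughly 100 kW.
print(round(pue(137, 100), 2))                      # 1.37
print(round(overhead_saving_kwh(100, 1.42, 1.37)))  # about 43,800 kWh per year
```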



AI & MACHINE LEARNING

Collaborative intelligence




Dustin Rigg Hillard, Chief Technology Officer at eSentire, asks – when does AI actually provide value when it comes to cyber security?

Artificial intelligence (AI) has been seen as having great potential since 1956. Based on computing algorithms learning from real-world data, AI and machine learning have been developed to help automate tasks that are predictable and repeatable. AI has been deployed to improve activities like customer service and sales, by helping people carry out their roles more effectively and by recommending actions to take based on previous experiences.

AI has a rapidly growing role in improving security for business processes and IT infrastructure. According to research conducted by KPMG in 2021, 93% of financial services business leaders are confident in the ability of AI to help them detect and defeat fraud. According to IBM research in association with APQC, 64% of companies today are using AI in some shape or form for their security capabilities, while 29% are planning their implementation. IBM also found that security users were one of the most common groups using AI in its Global AI Adoption survey for 2022, at 26%. At the same time, problems around data security held back AI adoption for around 20% of companies.

However, all this emphasis on AI for security can be misleading. While AI and machine learning techniques are materially improving fraud detection and threat detection, caution is warranted about all the hype and expectations that come with AI.

Keeping a realistic view in mind

When large volumes of consistent data are available, AI is best positioned for success. Learning from large amounts of malicious and benign files, AI can detect and flag new examples that have the same characteristics. These automated detections exceed the capabilities of previous approaches that relied on human actions or rules-based systems, because they can identify statistical patterns across billions of examples that humans are unable to analyse at scale.

Beyond identifying malicious files, AI models can now replicate human intelligence in detecting sophisticated attacks that utilise obfuscated scripts and existing IT tooling. This has been achieved by learning from large volumes of human investigations into security events and incidents, identifying the specific usage traits leveraged by novel attacks that would otherwise go unnoticed in the noise of normal IT activity.

These AI-based approaches can identify rare anomalies that indicate the actions of a sophisticated attack. However, the emphasis here is 'can'. These models can also generate too many false positives and be confused by normal variations in activity across the organisation's IT infrastructure and applications. This rash of alerts can then limit the ability of the human team to act, because they have insufficient time to investigate all the anomalous behaviours.

Best of both worlds

Using AI effectively within your IT security processes requires balancing the accuracy of predictions with how much human effort can be devoted to investigation of potential threats. When AI has enough data and context to achieve near perfect accuracy, as with malicious file detections, the predictions can be incorporated into automated processes that stop threats without any human intervention. When AI is able to detect unusual and malicious behaviours, but still requires human investigation to determine true threats, the best approach is to ensure the investigative efforts are providing the desired value to your security program.

Implementing behavioural detection is a necessary step to keep up with the rapid innovation of attackers who are constantly working to evade detection. Putting AI-powered solutions in place can help security teams to process large volumes of data and prioritise investigations of potential threats. To achieve this, teams have to develop a level of maturity in their processes around automation and investigation, and how items are handed off between AI-based systems and human analysts. The feedback cycle between automated detections and human analysis is critical, and AI systems become more impactful if they are able to continuously learn.

The reality today is that humans are still at the heart of any complicated cyberattack – humans will set up the attack, and humans will carry out the defensive actions and prevent any breach. The impact of AI in security will depend on how well systems incorporate new context and examples provided by expert human analysts.

Attackers are certainly becoming more creative in their approaches and tactics, finding new vulnerabilities and using automation in their attacks to amplify their capabilities with AI. However, they are only able to carry out their attacks based on what they discover. For defenders, understanding the sheer volume of data in their own environments can provide them with a better picture of what good looks like, helping them spot and stop attackers that deviate from expected behaviour. The true value of artificial intelligence in security will be based on how well it amplifies the ability of security teams to detect and defeat attackers.
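As a toy illustration of the balance described above between detection sensitivity and analyst capacity, the sketch below picks the most sensitive alert threshold whose expected daily alert volume still fits within a team's investigation capacity. The scores, thresholds and capacity figure are invented for illustration and are not from eSentire.

```python
import random

def choose_threshold(scores, daily_capacity, candidate_thresholds):
    """Pick the lowest (most sensitive) threshold whose alert volume fits analyst capacity.

    scores               -- model scores for one day of events (0..1, higher = more suspicious)
    daily_capacity       -- how many alerts the human team can investigate per day
    candidate_thresholds -- thresholds to consider, e.g. [0.5, 0.9, ...]
    """
    for t in sorted(candidate_thresholds):
        alerts = sum(1 for s in scores if s >= t)
        if alerts <= daily_capacity:
            return t, alerts
    # Even the strictest threshold produces too many alerts for the team.
    t = max(candidate_thresholds)
    return t, sum(1 for s in scores if s >= t)

# Hypothetical day of scored events and a team that can handle 25 investigations.
random.seed(0)
day_scores = [random.random() for _ in range(5000)]
threshold, alert_count = choose_threshold(day_scores, daily_capacity=25,
                                          candidate_thresholds=[0.5, 0.9, 0.99, 0.999])
print(threshold, alert_count)
```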



DATA CENTRE DESIGN & OPERATIONS

Line of duty

Louis McGarry, Sales and Marketing Director at Centiel UK, explores what data centre operators should consider to ensure their critical power protection is ready, whatever the future holds.

Hindsight is a wonderful thing. This is why it is important to gather as much information as possible before selecting the products that will protect a critical load, to avoid implications further down the line. Data centres have a duty to ensure that their clients are protected by the highest level of resilience and that equipment is future-proofed.

But it's not that easy. Just like the majority of industries, data centres are under pressure to minimise costs. However, making short-term savings on CapEx can mean OpEx ends up being far more expensive. This can also result in purchasing products that do not offer the flexibility required to support the long-term plans of the data centre. It pays to spend more time considering options from the outset for the best outcome and to reduce the overall Total Cost of Ownership (TCO).

Avoid system oversizing

It's not uncommon for UPS installations to be designed and configured for a much greater load than is actually required. However, a system which is too large wastes energy, is inefficient and costly to run. It may also cost more than necessary to service and maintain due to its size. Therefore, adopting a right-sized approach means that data centres can literally 'pay-as-they-grow', which will pay dividends over the long term.

True modular UPS systems offer some of the highest levels of availability and resilience. The latest generation of true modular technology is >97% efficient, which can dramatically reduce TCO. The main advantage for data centres is the flexibility and adaptability of the product. Installing a true modular UPS with the ability to use a fully rated frame or empty carcass provides the option to add the required number of modules to suit the actual load, plus more only when needed. This right-sized approach avoids the need for oversizing, reducing the initial CapEx investment and minimising running costs. It means investment can still be made in a premium solution while costs are controlled at the same time. Any data centre manager exercising a 'duty of care' for the future should consider true modular UPS technology due to its advantages over standalone systems.

Plan for the future

I've said it before and I will say it again: adopting a 'let's do what we've always done' approach is not best practice. It may work, but it often results in oversized and inefficient UPS systems. Instead, work with your UPS' historical data to review your load profile. Understanding this will help to size your UPS accurately and create the most efficient and cost-effective system for the future. All the information is available.

Once you have established usage, there are three areas of focus. Adequately sizing the UPS to the load is the most important consideration.



The second is to ensure that the UPS is flexible enough to adapt and seamlessly grow with any future load increases. Finally, see if it is possible to utilise the existing infrastructure, avoiding costly changes. For example, the existing infrastructure may be designed for a 500kW UPS system, but the actual load is only 200kW. A good fit would be to install a 500kW modular frame and either 4 x 50kW modules (200kW N) or 5 x 50kW modules (200kW N+1). This ensures headroom for future growth, reutilisation of infrastructure and optimal performance for the load.

Don't forget about preventative maintenance

When buying a new UPS, the serviceability of any product should also be considered. Can preventative maintenance and the replacement of essential components be completed without putting the load at risk? True modular UPS systems designed with safe hot-swappable components may offer a solution. By utilising this technology, data centres have the ability to purchase a system that removes the need to switch to external bypass or reduce resilience during maintenance, protecting the load indefinitely.

As with any purchase of equipment, a UPS comes with a warranty, which guarantees that the data centre is fully covered for 24 months. The warranty covers electrical faults and component failures; however, the specifics may vary from manufacturer to manufacturer. To ensure that the warranty remains valid, data centres must adhere to the manufacturer's maintenance guidelines. A standard requirement for most warranties is a minimum of two preventative maintenance visits per year conducted by the manufacturer's authorised service engineers. These approved engineers can guarantee access to technical support from the people who designed and built the UPS, plus spare parts and firmware updates.

Re-evaluate autonomy

Over the past decade we have seen the use of Lithium-ion (Li-ion) batteries in the UPS industry become more popular. This advancement enables a reduction in footprint, an increase in operating temperatures and a longer operating life. However, Li-ion has yet to fully replace the tried and tested option of VRLA batteries.

When looking at battery options, it isn't just technology that can help data centres reduce the overall total cost of ownership. Evaluating the required autonomy based on individual power protection plans is essential, not only to ensure there is adequate time to enact the plan but also to prevent oversizing of battery systems. Ask: 'how long will it take for the generators to be available?' and, in the worst-case scenario, 'how long will it actually take to perform a graceful shutdown?' This information can help answer the question: 'do we really need to purchase this much lead?' This in turn saves on budget. Less lead is better for the environment too.

There will always be a drive for data centres to try and minimise upfront costs. However, without careful analysis it could cost the data centre down the line. Data centre managers who are committed to minimising TCO and maximising availability will do well to invest the time to understand the facts, figures and data to help them make the most informed decisions from the outset. Inviting manufacturers into the discussion earlier to pool knowledge, resources and ideas, and come up with workable options, will also contribute to savings over the long term.
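The right-sizing arithmetic behind the 500kW frame / 200kW load example above can be sketched in a few lines of Python. This is only an illustration of the module count for N and N+1 configurations, not a vendor sizing tool.

```python
import math

def modules_required(load_kw: float, module_kw: float, redundancy: int = 1) -> dict:
    """Number of UPS modules for an N and an N+redundancy configuration."""
    n = math.ceil(load_kw / module_kw)
    return {
        "N": n,
        f"N+{redundancy}": n + redundancy,
        "installed_capacity_kw": (n + redundancy) * module_kw,
    }

# The worked example from the article: a 200 kW load on 50 kW modules
# inside a 500 kW frame, leaving headroom for future growth.
print(modules_required(load_kw=200, module_kw=50))  # {'N': 4, 'N+1': 5, 'installed_capacity_kw': 250}
```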



SPONSORED FEATURE

Thinking solutions

Reflex takes a ‘Thinking Solutions’ approach to providing robust and reliable results for critical systems, explains Dave Cannon – Technical Manager.

Reflex Winkelmann GmbH is considered one of the leading solutions providers for the smooth operation of water-based systems in the building services sector and is increasingly becoming the preferred choice for pressure maintenance and degassing solutions for systems requiring a high level of resilience. This success has been established over the past 50+ years in the pressure maintenance market, adapting to changing market requirements and customer expectations in the global services sector. Taking this into account, Reflex Winkelmann GmbH strives to provide user friendly, flexible and robust solutions which system design engineers, installers and maintenance companies recognise.

Considered the 'lung' of a sealed system, the correct selection of pressure maintenance equipment is essential for smooth operation and must demonstrate reliability and resilience (N+1) in critical systems such as data centre cooling applications. When larger systems are being considered (2 MW+ of cooling), it is generally agreed that dynamic pressure maintenance solutions are preferred over traditional 'static' expansion vessels, and this is where Reflex has excelled.

Over time, the Reflex dynamic pressure maintenance solution has evolved, resulting in the current range which, applying the principle of flexible design, can be tailored depending on specific system requirements. Using a patented method of control, the Reflex 'Variomat' range of dynamic pressure maintenance units can maintain system pressure to ±0.2 bar regardless of the system status. This is achieved by the 'Control Touch' controller modulating multi-stage pumps and self-cleaning motorised ball valves, resulting in incredibly accurate pressure maintenance.

When pressure maintenance in critical systems (e.g. data centres) is being considered, reliability and resilience are paramount. As previously mentioned, the Variomat range has evolved over the years into an incredibly reliable solution using the best quality materials of construction, including duty/standby or duty/assist multi-stage pumps, and self-cleaning motorised ball valves preferred over solenoid valves, minimising the risk of failure. To give the system further integrity, software within the Control Touch module can be easily configured to link other Reflex Variomat modules via master/slave (N+1), with the capability of linking up to 10 separate units. Under this regime, the expansion vessel content across all units remains balanced, resulting in stable operation without multiple units conflicting with each other.

When addressing the control protocols, a variety of options are requested depending on the specific requirements of the installation and, as these are modular units, almost all protocols can be accommodated. These range from simple volt-free contacts (I/O module) to BACnet and Modbus options, to name a few. This permits remote 'real-time' system monitoring and flags any warnings before systems go into full alarm condition. Looking into the future, additional BMS protocols could be considered without the need for product redesign.

Moving onto the storage of expanded water, the Variomat (and Giga) range of vessels are constructed in accordance with the PED even though they are considered 'pressure-less'. The result is a superior quality vessel design giving the end-user further confidence. The control (aka basic) vessel includes an oil load cell which communicates with the Variomat Control Touch module, with the ability for multiple secondary vessels to be installed in series, allowing pressure maintenance of larger systems whilst adapting to site restrictions.
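The ±0.2 bar control band described above can be pictured with a generic dead-band decision like the sketch below. The actual Variomat control logic is proprietary and considerably more sophisticated; the setpoint and pressures here are hypothetical.

```python
def pressure_action(pressure_bar: float, setpoint_bar: float, band_bar: float = 0.2) -> str:
    """Generic dead-band decision for a dynamic pressure maintenance unit.

    Spill expanding water to the vessel when pressure rises above the band,
    pump it back when pressure falls below, otherwise hold steady.
    """
    if pressure_bar > setpoint_bar + band_bar:
        return "open spill valve"         # system has expanded; relieve to the vessel
    if pressure_bar < setpoint_bar - band_bar:
        return "run pressurisation pump"  # system has contracted; return water
    return "hold"

# Hypothetical 2.75 bar setpoint exercised at three sample pressures.
for p in (2.4, 2.75, 3.1):
    print(p, pressure_action(p, setpoint_bar=2.75))
```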

As the Variomat control module modulates, water that spills into the vessel(s) on system expansion reduces from system pressure to atmospheric pressure, resulting in an extraordinarily long service life of the vessel bladders and reduced service activities, as there are no pre-charge checks apart from any small capacity balancing vessels. In addition to this, natural atmospheric degassing of the spilled water is carried out as a by-product, which will further improve the quality of systems operating >70°C.

Taking a closer look at resilience, the configuration of equipment is critical to eliminate or at least minimise any downtime. Although unlikely, downtime could be caused by several factors, including power supply issues or product failure. To address this, we can look at several initiatives, including:

• Master/slave equipment can be configured on different phases (illustrated in the sketch at the end of this feature). Power supply issues are extremely rare within data centres, and if they do occur, they will generally be localised, and confined to one phase of the three-phase supply. If the master unit is fed by L1, and the slave by L2, in the event of the loss of a single phase, at least one of the units will remain in service. Once power is restored, any expansion vessel imbalance between master and slave modules adjusts the expansion levels automatically back to equilibrium.

• If a unit needs to be taken out of commission, the Reflex Control Touch module will recognise this and, assuming the system is designed with master/slave, will automatically assign responsibility to the unit(s) still in service, resulting in consistent operation until normal service is resumed.

Over recent years, there has been an appetite to improve the efficiency of systems, and vacuum degassing is being preferred on critical systems. The Reflex Servitec range of vacuum degassers, developed over the past 30 years, are now being used extensively to improve system efficiency by up to 10.6% and, although standard units are designed for systems up to 220 m³, bespoke units are regularly produced to accommodate larger systems such as PCW systems in data centres. Although the principal purpose of the Reflex Servitec vacuum degasser is to improve system efficiency by removing all air in the pumped media, additional benefits can be realised, including smoother laminar flow and a reduction of vibration as the hydraulic and thermal transfer properties of the system are optimised. The result enables system balancing to be undertaken with a smoother system.

Applying the same principle to the Servitec range design as that of the Variomat mentioned earlier, Reflex Control Touch or Basic control modules are used, which helps with the consistency of components and understanding for on-site engineers. Premium pumps and self-cleaning motorised ball valves are also used in the design, guaranteeing the resilience of the unit in service and, when required, these can work alongside master/slave pressure maintenance solutions.

When considering the correct selection of system pressure maintenance and degassing for critical systems, it is essential that this is done with not only the current requirements, but with 'day ultimate' in mind. Due to the dynamic nature of the Reflex design, almost all requirements can be accommodated by working with Reflex's specialists in the field. Combining the Reflex Selection Program (RSP) with in-house specialist knowledge in data centre applications, the optimum solution can be found, which often exceeds the customers' expectations.
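The master/slave failover behaviour described in the bulleted initiatives above can be pictured with a minimal sketch like the one below. It is a generic illustration only, not the Reflex Control Touch software; the unit names and phase assignments follow the L1/L2 example in the text.

```python
def units_in_service(units: dict, lost_phases: set, out_of_commission: set = frozenset()) -> list:
    """Which pressurisation units remain available.

    units             -- mapping of unit name to the supply phase feeding it, e.g. {"master": "L1"}
    lost_phases       -- phases currently down, e.g. {"L1"}
    out_of_commission -- units deliberately taken out for maintenance
    """
    return [name for name, phase in units.items()
            if phase not in lost_phases and name not in out_of_commission]

# The arrangement described in the article: master on L1, slave on L2.
fleet = {"master": "L1", "slave": "L2"}
print(units_in_service(fleet, lost_phases={"L1"}))                              # ['slave'] stays in service
print(units_in_service(fleet, lost_phases=set(), out_of_commission={"slave"}))  # ['master'] carries on alone
```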



DATA CENTRE DESIGN & OPERATIONS

Getting specialised




Jakub Wolski, Data Centre Strategy & Business Development Leader at Trend, discusses how integrating specialised tools and skills can help achieve sustainability goals.

According to the International Energy Agency, data centres used a total of 200–250 TWh of electricity in 2020, with this number steadily growing in line with our reliance on digital technologies. In 2021, data centres outpaced the energy consumption of rural homes in Ireland, accounting for 14% of electricity usage compared to the 12% from households – something that attracted the attention of Ireland's Commission for Regulation of Utilities.

The high volume of electricity consumption is due to the energy intensity of data centre operations. Data centres use power in several ways, with the most common being the need to run vast amounts of IT equipment and the need to cool the equipment down so that it can function properly. On average, servers and cooling systems each individually account for 43% of direct electricity use in data centres, followed by storage drives and network devices. With both functions being imperative for data centres to run smoothly, but with each creating a substantial amount of carbon emissions, data centre operators are finding it increasingly difficult to effectively address this issue. With the total number of global data centre transactions increasing by 64% since 2020, it is imperative that the industry reduces its carbon footprint whilst also meeting demand.

Decreasing the carbon footprint

With many large organisations making bold statements by pledging to achieve carbon neutrality by 2050, the data centre industry needs to follow suit to help meet and support these goals. In fact, with the adoption of the Climate Neutral Data Centre Pact, data centre operators are clearly committed to making Europe climate neutral by 2050.

Nonetheless, many approaches to sustainability seem to involve carbon offsetting, particularly by way of purchasing carbon credits. Whilst there are many carbon offsetting initiatives worldwide, this is ultimately a costly short-term solution that doesn't directly reduce emissions. As such, it is important that the physical impacts of data centres are addressed and that steps are taken to cut the electricity consumed during operations. Through addressing electrical infrastructure, water consumption in cooling systems, and reducing use of any diesel or gas-run generators, data centre operators can start to directly reduce the emissions and environmental impacts. By reducing the carbon footprint of data centres, rather than offsetting, the data centre industry will be far more likely to achieve sustainability goals.

A practical solution

One-off efficiency improvements, such as modernising cooling systems and power supplies, may boast a substantial reduction in carbon emissions, but they do not lead to long-term change. To control energy usage and emissions on an ongoing basis, data centres need fully integrated control systems, such as a building energy management system (BEMS) or an electrical power management system (EPMS). These can give data centre operators a single data-driven platform that collects, aggregates and presents mission-critical information in a variety of easy-to-use formats. These systems are most effective if they can provide real-time information to operators, as this enables decisions to be made on how best to optimise the operation of each facility and reduce overall energy consumption in response to changing market and operational conditions.

To control energy usage and emissions on an ongoing basis, data centres need fully integrated control systems, such as a building energy management system (BEMS) or an electrical power management system (EPMS) These systems can provide a single user interface, delivering real-time, clear information, communication and data processing for more reliable building automation and supervision. Gaining insights into a system’s performance capabilities typically makes it easier to identify inefficiencies and reduce energy waste, such as excessive water consumption. Upgrades to existing systems, especially those reaching obsolescence, can additionally help reduce emissions whilst also delivering valuable savings. It is also important to work with partners who can reduce downtime, spotting issues early before they become system-level problems. New technology and regulations mean data centres are subject to constant changes, so it is vital to work with partners who understand the latest changes and how emerging technology can benefit the industry. By adopting this approach and identifying ways to prioritise decisions that help improve sustainability, operators can cut costs whilst achieving goals much more quickly. The goal of data centres achieving carbon neutrality is no easy task. Nonetheless, if data centre operators want to improve their carbon emissions, it is perhaps time they consider integrated systems and experienced partners. Given the increasing demand the data centre industry faces, reducing their carbon footprint at the source can help the industry’s total greenhouse gas emissions to decrease.



DATA CENTRE DESIGN & OPERATIONS

The cost of failure

With the risks posed by power outages a growing threat to data centre uptime, Paul Brickman from Crestchic Loadbanks explores why testing backup power systems is more important than ever before.


The latest data from the Uptime Institute suggests that the cost of data centre outages is on the up, with power failure the number one cause. Global energy shortages, climate change and weather patterns, energy transitions, and global economic conditions have combined to increase the chances of power shortages worldwide in the coming months.

What's driving the risk of power outages?

There's no getting away from news of power costs rising exponentially, with homeowners and businesses living under the threat of supply disruptions and record prices. Increased prices for coal and natural gas are impacting power generation, with some generators shutting down as a result. The conflict in Ukraine is not only impacting prices, it is also physically disrupting energy transmission and imports.

While geopolitics has a huge influence on the current energy landscape, power supplies are also being impacted by the weather and the energy transition. High temperatures throughout the summer contributed to increased demand, while supplies were reduced due to depleted hydro and wind power caused by the heat. Harnessing natural sources of power, combined with peaks and troughs of usage, means that balancing the grid to produce a stable and constant supply can be more challenging, increasing the likelihood of temporary blackouts.



Managing risk in an energy-critical environment

Uptime Institute data suggests that $1 million+ failures are increasingly common, with the biggest causes of power-related outages being uninterruptible power supply failures, followed by transfer switch (generator/grid) and generator failures. While no facility is entirely risk-free, taking a preventative approach is significantly less costly than risking an outage. In a mission-critical environment like a data centre, where customers demand 99.99% uptime and above, having a UPS and backup generator in place is non-negotiable. However, with costs rising, and energy resilience being hit from all sides, ensuring that backup systems work as they should has never been more critical.

Load bank testing for improved energy resilience

Arguably the unsung heroes of the power world, load banks are used to test the reliability of backup power systems to ensure that they work effectively should an outage occur. They do this by creating an electrical load that imitates the load that a generator would supply if and when called upon. Wherever there is a generator, there is also a need for a load bank. Using a load bank to commission or regularly test the backup power system not only tests the prime movers and the batteries (UPS), but ensures all other components such as the alternator and the transfer switches are tested too. A load bank test proves that the UPS/generators will start, operate and run efficiently in the case of a power outage, and that the generators can be safely turned off with no interruptions when mains power is restored.

Ideally, all generators should be tested at least annually under real-world emergency conditions using a resistive-reactive 0.8pf load bank. In applications where there are multiple gensets, it is best practice to run them in a synchronised state, ideally for eight hours. In this instance, opting for a large multi-megawatt, medium voltage load bank package will facilitate both testing and synchronisation of multi-genset systems on a common bus with a lagging power factor.

The right tools for the job

With cost-benefit/risk analysis a relatively easy argument in terms of CapEx, many operators opt to buy load banks for ongoing and regular use. Their incredibly long shelf life means that they can be permanently installed and provide years of operation. Recent product development means that they can be supplied as a transportable option, making it easy to test multi-genset applications. Load bank rental is also an option, allowing operators to increase their testing capacity or access different types of load banks for different applications. Either way, load banks should come with a warranty, after-market care, and full training and support.

A range of applications

In a data centre environment, load banks have more than one use. While testing is a key concern in the current environment, load banks can also be used to apply additional load to lightly loaded engines, stabilising the system and helping to avoid wet stacking and generator damage. DC load banks can also be used to test UPS systems for close battery analysis and discharge performance, enabling operators to identify and rectify potential weaknesses.

Load banks have earned their name as a preventative maintenance tool. However, they can also be used during the commissioning phase. Load banks should be used to fully test systems once installed, ensuring that the impact of lifting, moving, and installing hasn't thrown factory tests off target. Using resistive-only load banks to heat load test the air conditioning systems at the commissioning stage is also an important step to validate that they can cope with the heat output of the servers.

Put simply, implementing a load bank testing regime as part of your energy resilience strategy could mitigate the huge and costly risks identified in this year's Uptime Institute research. With costs running into millions, and customer uptime expectations also on the rise, the business case can almost write itself.
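The sizing arithmetic behind the resistive-reactive 0.8pf test mentioned above reduces to two lines of trigonometry: the resistive element supplies kW = kVA × pf, and the reactive element supplies the remaining kVAr. The sketch below shows it for a hypothetical 2,000 kVA genset; the 0.8 power factor comes from the article, everything else is illustrative.

```python
import math

def load_bank_requirement(genset_kva: float, power_factor: float = 0.8) -> dict:
    """Resistive (kW) and reactive (kVAr) load needed to exercise a genset at its full rated kVA.

    A resistive-only test at rated kW only reaches kVA x pf of the nameplate rating;
    the reactive element is what loads the alternator to its full apparent power.
    """
    kw = genset_kva * power_factor
    kvar = genset_kva * math.sin(math.acos(power_factor))
    return {"resistive_kw": round(kw, 1), "reactive_kvar": round(kvar, 1)}

# Hypothetical 2,000 kVA standby genset tested at the 0.8 lagging power factor
# recommended in the article.
print(load_bank_requirement(2000))  # {'resistive_kw': 1600.0, 'reactive_kvar': 1200.0}
```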



DATA CENTRE DESIGN & OPERATIONS

Keeping the lights on

Jason Yates, Riello UPS Technical Services Manager, outlines how data centres could help secure energy supplies during what promises to be a challenging few months ahead by deploying their standby power systems.


s the clocks go back and winter approaches, there are concerns that Britain is about to be hit by an upcoming energy crisis that could even lead to rolling blackouts. While National Grid ESO’s latest Winter Outlook forecast suggests there’ll be enough power to meet demand over the coming months, the margins will be tight. And if the Europe-wide shortage of gas intensifies, the electricity system operator’s worst-case scenario would see households and other energy consumers facing three-hour blackouts similar to those last seen during the notorious Three Day Week of 1974. To mitigate against such scenes, National Grid is deploying mitigation measures such as securing 2 GW of additional power from coal-fired plants, enough energy to supply 600,000 homes. From 1 November, it also introduced a Demand Flexibility Service which will incentivise both domestic and commercial energy users to reduce-peak time consumption, a scheme it believes will reduce demand by up to another 2 GW of electricity. Households will receive around £10 a day to use power-hungry appliances such as dishwashers or tumble dryers overnight, while companies will also be paid to either reduce demand or switch to battery power during peak times. By their very nature, data centres are energy-intensive operations. And the industry is only too aware of the need for sustainability. So is there anything operators can do to help the grid cope with these upcoming demand and supply pressures?




One possibility lies in the uninterruptible power supplies (UPS) and batteries data centres use as their ultimate insurance against damaging downtime. Thanks to rapid advances in communications software and protocols, as well as improved battery technologies such as cycle-proof or Lithium-ion cells, many modern data centre UPSs are compatible with today’s smart energy grids. This offers the opportunity to transform the UPS from a reactive piece of equipment into a dynamic ‘virtual power plant’. Smart grid-ready UPS communicate with local power networks. Depending on the real-time conditions, they either push stored battery power back into the grid or draw electricity from it as and when required to help balance supply with demand and maintain a stable frequency.

Peak shaving potential
The first area where a data centre's UPS can potentially help the electricity network is peak shaving. This concept basically uses the UPS batteries to effectively limit how much power a site draws from the mains supply. So in practice, if the load on the UPS output goes above a set level, the UPS takes a proportion of the load from the mains and the rest comes from the battery set.

Peak shaving comes in various formats. The first option – static – is straightforward in that the UPS has a fixed setting and simply peak shaves to that limit. Type two is user-controlled peak shaving, where the data centre operator can reduce the input mains power as and when required by sending commands to the UPS. You can do this either with volt-free contacts or communications protocols such as Modbus. If your site has a weak power source or a reliance on generator sets, the third option is called impact load buffering, which in effect sees the energy stored in the batteries slow down the incoming mains supply.

Finally, we have dynamic peak shaving, the most common application. As the name suggests, it works hand-in-hand with the real-time conditions on-site. Take a data centre with a contractual limit of 1 MW mains electricity supply. Your typical load ranges between 500-900 kW, while your critical load is another 300 kW. At peak time, you might have a maximum load of 1.2 MW, which is obviously in breach of your contractual obligations. So when this happens, the UPS automatically pushes the energy stored in its batteries to reduce the power required from the mains. During periods when loads are lower, the UPS recharges the batteries for future use.
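A minimal sketch of the dynamic peak shaving logic described above, using the example figures quoted (a 1 MW contractual limit and loads peaking at 1.2 MW); the battery size, recharge rate and control loop are illustrative assumptions rather than any vendor's implementation:

```python
# Illustrative dynamic peak shaving: if site load exceeds the contractual mains limit,
# the shortfall is served from the UPS batteries; when load drops, spare headroom
# recharges them. All figures and the simple energy model are assumptions.

CONTRACT_LIMIT_KW = 1000   # contractual mains supply limit (1 MW, per the example)
BATTERY_KWH = 500          # usable battery energy (assumed)
CHARGE_LIMIT_KW = 200      # maximum recharge rate (assumed)

def step(load_kw: float, soc_kwh: float, hours: float = 0.25):
    """Return (mains_kw, battery_kw, new_soc_kwh) for one interval."""
    if load_kw > CONTRACT_LIMIT_KW and soc_kwh > 0:
        battery_kw = min(load_kw - CONTRACT_LIMIT_KW, soc_kwh / hours)
        mains_kw = load_kw - battery_kw
    else:
        headroom = max(0.0, CONTRACT_LIMIT_KW - load_kw)
        # battery_kw is negative while charging
        battery_kw = -min(CHARGE_LIMIT_KW, headroom, (BATTERY_KWH - soc_kwh) / hours)
        mains_kw = load_kw - battery_kw
    return mains_kw, battery_kw, soc_kwh - battery_kw * hours

soc = BATTERY_KWH
for load in [700, 900, 1200, 1150, 800, 600]:      # kW, 15-minute intervals
    mains, batt, soc = step(load, soc)
    print(f"load {load:4} kW -> mains {mains:6.1f} kW, battery {batt:+6.1f} kW, SoC {soc:5.1f} kWh")
```

The point of the sketch is simply that the mains draw never exceeds the contractual limit while stored energy is available, and spare headroom during quieter periods is used to recharge.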

Stabilising grid frequency
Another area where a smart grid-ready UPS can play a demand side response role is helping to maintain a stable grid frequency. This is something Riello UPS has experience with thanks to our Master+ solution, which we developed in partnership with RWE Supply & Trading specifically for data centres and other large-scale energy users. It brings together a high-efficiency, smart grid UPS using cycle-proof lead-acid batteries, although it can also work with Lithium-ion batteries too. Because of a very compact battery arrangement, the solution provides up to four times the usable capacity whilst requiring just a 20% increase in footprint. Crucially, this offers ample battery capacity for both emergency backup and for commercialisation. The solution also incorporates a highly secure, integrated monitoring and control system that allows for two-way communication with the grid. Such monitoring also aids with predictive maintenance. The final element is a route-to-market contract so that the data centre can participate in the energy market without taking on any of the risks.

In practice, the battery is split into two distinct parts. The first is a backup segment comprising roughly 30% of the total capacity. This is controlled by the UPS and can only be used to support the critical load if there's a mains failure, so you're not compromising on availability. The other 70% makes up the 'commercial' segment, which RWE can deploy to grid balancing schemes such as Firm Frequency Response, which helps National Grid maintain a safe grid frequency within one hertz of 50 Hz.

If the frequency goes above 50 Hz, the UPS takes power from the grid into its batteries to pull the frequency down. And vice versa, when the frequency drops, the UPS pushes stored power from the batteries back into the grid. During system operation, the typical state of charge ranges between 60-70%. So in the event of a mains failure, not only do you have the power in the backup segment, but there's also whatever's left in the commercial part to top up your autonomy.

In return for RWE gaining the usage rights for the commercial element of the batteries, data centre operators reap the rewards in terms of a significantly discounted premium battery system, extended backup time, 24/7 monitoring, lower maintenance costs, reduced grid tariffs (potentially worth up to £6,000 per MW per year), and additional revenue-generating opportunities.

Securing supplies for the greater good
Just a couple of months ago, Microsoft revealed that its entire data centre portfolio across Ireland would use Lithium-ion batteries to push stored power back into the grid by the end of 2022. This will reduce the reliance on gas and coal-fired plants to maintain vital spinning reserves and help to significantly cut the Irish energy sector's carbon emissions.

With the ongoing advances in UPS, battery, and smart grid technologies, other data centre operators can follow in Microsoft's footsteps and harness their standby power systems for the greater good. Not only are there hard-headed financial reasons to do so (i.e. lower energy costs and reduced tariffs), but in such uncertain times, there's the wider benefit of helping secure the nation's energy supplies this winter and beyond.
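To round off this section, here is a highly simplified sketch of the frequency-triggered charge and discharge behaviour and the 30/70 battery split described above; the dead band, power limit and capacities are invented for illustration and do not represent Master+, Firm Frequency Response rules or RWE's dispatch logic:

```python
# Illustrative frequency response: discharge the 'commercial' segment when grid
# frequency sags below 50 Hz, recharge when it rises above, and never dip into
# the reserved backup segment. Figures and dead band are assumptions for the sketch.

TOTAL_KWH = 1000
BACKUP_KWH = 0.30 * TOTAL_KWH      # reserved for mains-failure ride-through
NOMINAL_HZ = 50.0
DEAD_BAND_HZ = 0.05                # no action for small deviations (assumed)
MAX_POWER_KW = 250                 # assumed inverter limit

def respond(freq_hz: float, soc_kwh: float) -> float:
    """Return battery power in kW: positive = export to grid, negative = charge."""
    deviation = freq_hz - NOMINAL_HZ
    if abs(deviation) <= DEAD_BAND_HZ:
        return 0.0
    if deviation < 0:                             # frequency low: push power out
        available = max(0.0, soc_kwh - BACKUP_KWH)
        return min(MAX_POWER_KW, available)
    headroom = TOTAL_KWH - soc_kwh                # frequency high: absorb power
    return -min(MAX_POWER_KW, headroom)

for f, soc in [(49.90, 650), (49.90, 310), (50.00, 650), (50.12, 900)]:
    print(f"{f:.2f} Hz, SoC {soc} kWh -> {respond(f, soc):+.0f} kW")
```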



COOLING

A new approach
Data centre water use is once again under the microscope. Adam Yarrington, Business Unit Leader, Director – Data Centres at Airedale by Modine, asks – are you ready to change course?


This summer's extreme weather conditions, with high temperatures and prolonged dry spells leading to widespread droughts across the UK and Europe, have brought into sharp focus the issues we face as a country, and a planet, as a result of climate change. Climate change, industrialisation and digitisation are converging megatrends that demand some form of balance if we are to continue with the development of our species without destroying our environments.

Fortunately, we are starting to see decisive action from many industries that are taking steps to reduce their carbon footprint. This collective movement is really gathering pace, nowhere more so than within the data centre industry, where operators, suppliers and designers work tirelessly to improve the efficiency and the sustainability of their sites.

It is no secret that the rapid growth in the data centre industry has brought with it some environmental challenges, particularly water scarcity and extreme temperatures. With almost half of Europe under official drought warning conditions this summer (2022), data centre cooling systems are feeling the heat, both literally and figuratively. System providers are pondering three huge challenges:




• How do we cope with increasing peak ambient conditions?
• How do we reduce both energy and water use?
• How do we maintain 24/7/365 resilience in an industry demanding we do more with less?

In this article we look at the journey data centre cooling has been on and where it should go next, in order to keep the internet cool without warming the planet.

Air-side optimisation
Around 10 years ago, the optimisation of air temperatures was introduced as the latest way for data centres to increase efficiencies. At the time, many data centres ran at 20°C to 22°C. However, as server technology advanced, data centres were able to run at higher temperatures, which reduced the cooling requirement and provided more opportunity to utilise free-cooling.

Adiabatic cooling systems
In recent years, air-side optimisation has been built on with the introduction of adiabatic cooling systems. This technique incorporates both evaporation and air cooling into a single system. The evaporation of water, usually in the form of a mist or spray, is used to pre-cool the ambient air to within a few degrees of the wet bulb, allowing cooler and more efficient operation.

Whilst the use of spray or mist means water use is significantly lower than with more traditional evaporative systems, a conservative water usage estimate for a modern data centre employing an adiabatic cooling system would still be 500,000 litres per 1 MW per annum. As data centres grow larger, this becomes a real concern, particularly in regions where water shortages have been identified as a threat. This water has to be stored and treated, which increases capital costs and, as with any mechanical equipment exposed to continuous water contact, the cooling plant has been seen to suffer from increased degradation, putting strain on OPEX costs too.

Water-side optimisation
Having recognised the need for cooling systems that provide something close to the efficiencies that can be achieved with adiabatic cooling, but with a more sensitive approach to water conservation, Airedale developed an approach to data centre cooling that takes the philosophy behind air-side optimisation and evolves it further. Airedale calls this water-side optimisation.

The philosophy of water-side optimisation is based on taking an optimised air environment and looking at what other variables can be adjusted in order to deliver more free-cooling. Assuming that the air within the data centre white space stays at the same temperature, the next step was to reduce the approach temperature whilst opening the difference between water supply and water return. Implementing innovations within the plant equipment means the supply and return air remain as before, but supply and return water temperatures are higher, thus the approach temperature is reduced. We design in a fixed temperature difference of 12°C on the air side, with the fluid side being opened out to 10°C and the approach temperature closing from 6°C to 4°C.

The system features that deliver these variables are:
• Higher water temperatures, meaning less mechanical cooling
• Pressure sensors at aisle level, with fans controlled to a fixed pressure output

• Deep-row chilled water coils and simplified air paths in CRAHs
• Ducted hot air return
• Free cooling chiller with large free cooling coil
• Holistic controls system (not a BMS) delivering constant dynamic supervision of the whole system.

Free-cooling chillers are matched to large surface area chilled water coils in either indoor CRAC units or fan walls. The air path is simplified using hot aisle containment, creating a pressure differential that draws cool air through the servers and out of the white space via ducts and back to the air conditioning plant via a common plenum. The air is introduced directly to the space via side wall diffusion, minimising air side pressure drops.

The benefits of this are:
• Less mechanical cooling, meaning more efficient chiller operation
• Lower fan speeds, meaning more efficient indoor unit operation
• Lower pump power, meaning more efficient water transfer
• Large coil surface leads to increased cooling for less footprint (more cooling capacity per metre).

This is all managed with an intelligent controls platform that monitors fluctuating demand within the white space and dynamically operates the system at its most efficient operating point.

Based on average temperatures for London, an extra 2°C creates many more hours of free cooling. 14% more free cooling (59% in total) can be achieved with water-side optimisation, with all but 1% of the rest of the year being covered by concurrent cooling (a combination of free cooling and mechanical), giving huge benefits in terms of chiller efficiency. This system could provide free cooling for over 50% of the year in all of Europe's major data centre hubs (London, Frankfurt, Amsterdam, Paris, Dublin).

Climate change
As data centre chillers are now being designed for much higher ambient temperatures, companies have to future-proof their designs for Europe's major growth regions, whilst advancing system designs and free cooling capabilities. One of the issues with high peak temperatures is the power draw on the chiller when it initiates mechanical cooling, so cooling solution providers must find ways to mitigate this. For example, we are researching free cooling coil innovations that will reduce air side pressure drop and assist our peak operating ambient condition, delivering greater airflow across the condenser coils.

Cooling solution providers must be committed to providing sustainable cooling solutions for a data centre industry that is prioritising its environmental responsibilities, and it is crucial that water usage is not overlooked in the race for energy efficiency.
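As a back-of-the-envelope illustration of why a 2°C change in water temperature matters, the sketch below counts free-cooling hours against two chilled water setpoints; the synthetic 'London-like' temperature profile, the setpoint values and the 4°C approach are assumptions for the example only, not Airedale data or control logic:

```python
# Illustrative estimate of free-cooling hours: assume full free cooling is available
# whenever ambient is at least an assumed approach margin below the chilled water
# supply setpoint. The synthetic temperature profile and figures are examples only.
import math

def hourly_ambient_temps():
    """Very crude synthetic year: seasonal + daily sine waves around 11°C."""
    temps = []
    for hour in range(8760):
        seasonal = 8 * math.sin(2 * math.pi * (hour / 8760) - math.pi / 2)
        daily = 4 * math.sin(2 * math.pi * (hour % 24) / 24 - math.pi / 2)
        temps.append(11 + seasonal + daily)
    return temps

def free_cooling_hours(supply_setpoint_c: float, approach_c: float = 4.0) -> int:
    return sum(1 for t in hourly_ambient_temps() if t <= supply_setpoint_c - approach_c)

for setpoint in (18.0, 20.0):   # e.g. raising the water supply setpoint by 2°C
    hours = free_cooling_hours(setpoint)
    print(f"setpoint {setpoint:.0f}°C -> {hours} free-cooling hours "
          f"({100 * hours / 8760:.0f}% of the year)")
```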



COOLING

Cool runnings
Innovations in data centre cooling are paving the way to a more sustainable future, says Emilia Coverdale, Marketing Project Manager at Asperitas.


The global data centre industry is one of the fastest growing technology-driven industries. With emerging digital technologies, such as artificial intelligence (AI), data analytics, virtual reality, IoT and advanced cloud services, one could say the industry is riding a wave of high-performance computing (HPC). However, despite being essential and growing in number, data centres are often seen as inflexible, energy-sapping, environmentally unfriendly, and misaligned to the future needs of most businesses.

Data centres are necessary for digital economies, yet they continue to deal with significant challenges regarding high energy and water consumption, floorspace, access to (renewable) energy, cooling costs, complexity and cost of build. With the rising costs and scarcity of energy, these challenges are becoming even more urgent.

A global challenge
With sustainability being a key goal for the data centre industry, cooling has been brought under sharp scrutiny. On a global scale, 3% of all electricity used in the world goes to data centres, using energy for IT and cooling. Each data centre consistently churns out warm air which is too low in temperature to efficiently distribute and transport. Changing the common cooling medium of air for a much more efficient medium is changing everything for data centres.


Innovations in cooling address data centres' global challenges, preparing them for a sustainable future of high-density, high-performance hardware and facility efficiency anywhere. One of these innovations, liquid cooling, is gaining more and more interest – led by HPC and servers optimised with a large number of co-processors for AI applications.

With reliability and energy efficiency being key considerations when choosing thermal management systems, liquid cooling – and more specifically immersion – ensures easy integration and proven sustainable performance. When the average global PUE is roughly 1.6, and immersion cooling can reduce that to below 1.1 with ease, there is no better efficiency approach than immersion cooling in today's market. Immersion cooling technology actively helps data centre operators fulfil their sustainability goals, as using this approach makes it possible to reduce their energy footprint by up to 50%.

High expectations for high performance
The market for high-performance cooling systems has grown exponentially, and this is exemplified in the latest report from Uptime Institute. Cooling technology is imperative to HPC operations, but choosing the right technology depends on a number of different factors, such as server density, cost, facility power consumption and data centre infrastructure.

Previously, data centre operators may have seen immersion as being a tricky choice due to concerns about maintenance, but with offerings that include expert commissioning and servicing, as well as innovative add-on solutions, such as service trolleys, maintaining an immersion system is simple and efficient. Immersion cooling can deliver the highest performance and lowest PUE, and with successful deployments, such as the HPC cluster in Amsterdam, immersion is shown to meet cooling demands on both a system and server level.
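As a rough sense-check of what those PUE figures mean in energy terms – assuming, purely for illustration, a constant 1 MW IT load, with the PUE values taken from the paragraph above:

```python
# Facility energy = IT energy x PUE. Comparing the quoted 'average' PUE of ~1.6
# with a sub-1.1 immersion-cooled facility for an assumed constant 1 MW IT load.

IT_LOAD_KW = 1000           # assumed constant IT load
HOURS_PER_YEAR = 8760

for label, pue in [("air-cooled (PUE 1.6)", 1.6), ("immersion (PUE 1.1)", 1.1)]:
    facility_mwh = IT_LOAD_KW * pue * HOURS_PER_YEAR / 1000
    overhead_mwh = IT_LOAD_KW * (pue - 1) * HOURS_PER_YEAR / 1000
    print(f"{label}: {facility_mwh:,.0f} MWh/year total, of which {overhead_mwh:,.0f} MWh is overhead")
```

In this simple illustration the cooling and facility overhead falls by more than 80%, and total facility energy by roughly a third, for the same IT work done.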



Ultimately, immersion is seen as the most promising technology from a performance perspective, and with space being vital to HPC set-ups, adopting this technology helps data centre operators reduce the physical footprint of their cooling infrastructure by up to 80%. Moreover, the ability of the liquid to capture all the IT energy, combined with warm water cooling, enables solutions ready for heat reuse.

Next steps
Liquid cooling is changing the way we design, build, and operate data centres, and in the coming years the data centre industry is set to see more of a shift from air cooling to liquid cooling. Other innovations, such as cold plates, for example, are great for static environments, supercomputers, and so on, but single-phase immersion cooling is the logical choice when looking for environmentally-friendly and scalable solutions.

With that said, the reality is that this method of cooling is mostly being applied in existing data centres, and retrofitting immersion cooling in a facility which already has the investment and infrastructure in place will mean the impact of immersion cooling will be slower. On the other hand, greenfield data centres that have been designed and optimised for immersion cooling can see their CapEx and OpEx overheads cut by as much as half. With cost being identified as one of the main barriers to the sustainability agenda, data centre operators cannot ignore the long-term benefits.

Realistically, a hybrid form of cooling for data centres is the likely scenario moving forward.

Colocation providers, the largest part of the market, may not be at the stage where they're willing to invest big on immersion cooling, but these facilities have their sustainability roadmaps to consider, as well as next-generation hardware – CPUs and GPUs – which are set to increase the need for cooling even further.

Ultimately, as the importance of data centres to the global economy and society grows, concern about energy use and environmental impacts will grow too. In response, a mostly voluntary compendium of data centre sustainability standards and requirements has been created. Dedicated data centres will be doing their research and figuring out how best to implement more sustainable cooling solutions into their facilities.

One of the big drivers for data centre cooling progress is the emphasis on new standards and regulations, particularly in Europe. The Sustainable Digital Infrastructure Alliance (SDIA) continues to do great work in laying down a roadmap for the environmental side of the industry, and immersion cooling can help hit many of the markers on that roadmap. Within the next few years, as more facilities embrace this technology, the industry is set to see more evidence of this.


CRITICAL INSIGHT

VIRTUAL EVENT 22-23 NOVEMBER 2022

Are you ready for Critical Insight 2022?
The time is nearly upon us – this month will see the launch of the inaugural Critical Insight, a two-day online event that will explore the crucial issues facing the digital infrastructure industry.

Amidst an unprecedented backdrop of rapid growth in demand for compute, we are faced with increasing scrutiny on consumption of power and the need for carbon neutrality – while struggling with an ongoing skills shortage. Remote and hybrid working have introduced new challenges, as the industry grapples with the post-pandemic impact on areas such as cybersecurity and connectivity. How will we navigate the evolving landscape ahead of us?

These are the concerns that will be dissected at Critical Insight 2022. Set to take place on 22 and 23 November 2022, the two-day virtual event will provide a platform for key industry leaders to discuss their expert insight and opinion – while also giving attendees the opportunity to put their questions to the experts.

Brought to you in association with Data Centre Review, the inaugural event will provide an opportunity for attendees and a diverse line-up of experts to get to grips with new regulations, new innovations and be a part of the conversation that will help to shape the future of the industry.

You can view the agenda and register for the event by visiting critical-insight.co.uk

If you'd like to find out more about sponsorship opportunities, please get in touch with Sunny or Kelly for more information:
Sunny Nehru: sunnyn@sjpbusinessmedia.com / +44 (0) 207 062 2539
Kelly Baker: kellyb@electricalreview.co.uk / +44 (0)207 0622534




Sessions planned for the first Critical Insight will include everything from data centre design and UPS to the cloud and cybersecurity. You can get a glimpse of an abbreviated agenda below; to view it in full, visit critical-insight.co.uk

Day 1
09:00 – Welcome
Kayleigh Hutchins, Editor, Data Centre Review
09:05 – Keynote: The Data Centre in 2022 & Beyond
Ali Moinuddin, Chief Corporate Development Officer & Managing Director, Europe, Uptime Institute
09:35 – TBC
Topic to be confirmed by sponsors.
10:05 – Panel: The Sustainable Data Centre
Moderator: Matt Pullen, Executive Vice President and Managing Director, Europe, CyrusOne; Astrid Wynne, Sustainability Lead (Techbuyer) & Chair (DCA Sustainability SIG); John Booth, Reviewer (EU Code of Conduct) & Chair (DCA Energy Efficiency); Stephane Cardot, Director Presales EMEA, Touchdown PR
10:50 – Gathering Pace on the Road to Net Zero
Jon Healy, Operations Director, Keysource
11:20 – 15-minute break
11:35 – The Tech Skills Shortage
Adelle Desouza, DCA Advisory Board & HireHigher Ltd
12:05 – The Evolution of DCIM
Venessa Moffatt, Business Development Consultant Digital Technologies and Advisory Board, DCA
12:35 – Grid-Interactive UPS Systems
Brian Clavin, Head of Battery Energy Storage and Sustainable Solutions, Total Data Centre Solutions
13:05 – Total Cost of Ownership of Data Centre UPS Systems
Jason Yates, Technical Services Manager, Riello UPS Ltd
13:35 – Panel: Can the Data Centre be Smarter?
Ian Shearer, Managing Director APAC & EMEA, Park Place Technologies
14:15 – End of Day One

Day 2
09:00 – Welcome Back
Kayleigh Hutchins, Editor, Data Centre Review
09:05 – Keynote: Why Cloud is Moving to the Edge
Chris Thorpe, Founder & Chief Executive Officer, Leading Edge Data Centres
09:35 – TBC
Topic to be confirmed by sponsors.
10:05 – Panel: Designing Data Centres
Moderator: Dai Davis, Technology Lawyer, Institution of Engineering and Technology; Nabeel Mahmood, CEO, Mahmood; Mark Acton, Independent Data Centre Consultant / DCA Advisory Board Member, Future-tech
10:50 – 5G's Impact on the Data Centre
Mike Hoy, Technology Director, Pulsant
11:20 – 15-minute break
11:35 – The Future of Data Centre Cooling
Maikel Bouricius, CCO, Asperitas
12:05 – Mitigating New Cybersecurity Threats
Paul Gribbon, Cybersecurity Senior Manager, Reliance ACSN
12:35 – Powering the Sector
With the climate crisis at the forefront of our minds, is there a more sustainable way to power the data centre?
13:05 – Panel: Heads in the Cloud – Changing Trends
Moderator: David Terrar, Director & Chair, Touchdown PR; Tony Grayson, General Manager, Quantum & Compass Datacenters; Gary Bennion, Chief Technology & Customer Officer, Cloud M; Sam Woodcock, Senior Director of Cloud Strategy and Enablement, 11:11 Systems
14:15 – End of Critical Insight


EDGE COMPUTING

Getting distributed systems right
Dom Couldwell, Field CTO at DataStax, explores how to get data consistency across edge and centralised data sets, and the value that can be achieved out of edge data.




Edge computing is growing. According to the Linux Foundation State of the Edge 2021 report, up to $800 billion will be spent on new and replacement IT server equipment and edge computing facilities between 2019 and 2028. Gartner expects there to be 18 billion connected things on the internet by 2030, covering enterprise IT and devices in automobiles. Alongside this, Gartner also found the majority of enterprise data will be created outside of on-premises data centres by 2022, based on workloads moving to the cloud and to the edge.

To support all these connected devices – and to make them useful – businesses will create new applications that will work at scale. These applications will be 'edge-native' and built to work in this environment. For instance, supply chain management teams create data using smart sensors and barcode scanners to monitor product availability at multiple factories and warehouses. Tracking locations for products and vehicles within the supply chain in real-time generates an unprecedented amount of data from the network edge.


These implementations create real-time transactional data that needs to be analysed for insights, so that decisions around cost efficiencies and process optimisation can be made. Without this data, companies are effectively blind to their operational performance.

Edge computing and distributed data
These edge-native applications will have a curious relationship with data. On the one hand, the application, transaction and processing will be closer to the customer to reduce latency and process in real-time. However, this data will also need to be centralised for analysis and to look for patterns at scale. To manage this consistently, your approach will have to be fully distributed.

Most traditional databases will have a primary server that controls how data is processed, with secondary servers that then store and manage that data over time. However, this model does not fit well with how edge-native applications will function – they will need to manage data across edge locations and central data centre or cloud environments, without having the bottleneck of a primary server.

Fully distributed databases run across multiple locations, with each node treated as an equal. Groups of nodes in specific locations can interact with each other – for example, to provide resiliency for a service if a data centre goes down, or a connection is lost – but all the nodes will have copies of the data for resiliency. A good example of this is the open source database Apache Cassandra – this database provides geographical fault-tolerance as well as fast performance. For edge-native applications, Cassandra can help organisations store and manage data in the same way across both edge and centre, while keeping data closer to customers to reduce latency.

This data can then also be used centrally for analytics at the same time, informing machine learning models or for running the business.

Alongside managing this data at rest, it also has to be moved to where it is needed. This will rely on application event streaming, where new data is recognised and then directed to the right destination. In the supply chain example, this could be data from a sensor that passes a specific threshold. This data can be streamed to provide an alert locally to the team for them to take action, but also to any central analytics service for processing.

Supporting edge and data
To make this work more efficiently for companies, we must support developers around how they want to work. This means supporting the tools that they as developers want to use, such as APIs like GraphQL, REST and gRPC. These APIs will connect the applications to the underlying data that is fuelling the end-user experience. Rather than having to understand how the underlying database works in order to get started, developers should be able to use the APIs that they are already familiar with. APIs abstract away the implementation details of the database, which accelerates feature velocity and makes future changes to the underlying data models easier to implement.

Supporting multiple cloud services such as AWS, Google Cloud and Microsoft Azure can help with getting closer to where customers and devices are located. At the same time, telecoms providers may have locations that are even closer to where devices are, so any implementation should be able to deploy and run across a mix of different cloud and on-premises data centres. This hybrid model helps to support deployment out to the network edge.

Alongside APIs and cloud, you should also look at cloud native technologies, such as Kubernetes. Timelines for organisations to get to the cloud vary, but getting cloud-ready now and modernising existing applications using the best of cloud native technology means you can minimise that transition time. For example, Kubernetes is proving to be popular with telecoms operators as it can host containers and orchestrate them over time. This makes it possible for telecoms companies to host their customers' application containers and run them effectively. This also makes it easier to run in hybrid environments across on-premises, edge and central cloud services.

As more companies start to embrace edge computing for their applications, they will have to think about their edge data too. Using a fully distributed database that can run in geographically dispersed locations will be essential.
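To make the fully distributed approach more concrete, here is a minimal sketch using the open-source Python driver for Apache Cassandra; the node address, data centre names, keyspace and table are invented for illustration, and in a real cluster the data centre names would have to match the topology actually configured:

```python
# A minimal sketch (not reference code from the article): a keyspace replicated
# across an 'edge' and a 'central' data centre, with writes made at LOCAL_QUORUM
# so they only wait on edge-local replicas. Addresses and names are placeholders.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

cluster = Cluster(contact_points=["10.0.0.1"])   # an edge-local node (placeholder)
session = cluster.connect()

# Replicate every row to both sites; each site keeps its own copies for resiliency.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS supply_chain
    WITH replication = {'class': 'NetworkTopologyStrategy',
                        'edge_dc1': 3, 'central_dc': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS supply_chain.sensor_readings (
        sensor_id text, reading_time timestamp, value double,
        PRIMARY KEY (sensor_id, reading_time))
""")

# LOCAL_QUORUM only waits for replicas in the local (edge) data centre; replication
# to the central site happens in the background.
insert = SimpleStatement(
    "INSERT INTO supply_chain.sensor_readings (sensor_id, reading_time, value) "
    "VALUES (%s, toTimestamp(now()), %s)",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM)
session.execute(insert, ("dock-7-temp", 3.2))
```

The design choice being illustrated is the edge/centre trade-off described above: writes stay fast and local, while the same rows become available centrally for analytics without a primary-server bottleneck.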



EDGE COMPUTING

Hybrid approach
Simon Michie, CTO at Pulsant, explores how SMEs can make hybrid architecture fit for the edge.


For small or medium-sized enterprises, edge computing is a very significant development. In tandem with the roll-out of high bandwidth 5G connectivity, edge computing will bring advanced SaaS and artificial intelligence (AI) applications within reach of almost every business. It is not 5G alone that enables this – edge computing depends on a network of highly connected edge data centres at strategic locations.

By processing data in an edge infrastructure platform close to where businesses generate it, edge means any organisation can access the full range of advanced new SaaS applications from vendors anywhere in the world. This will open a new world of Internet of Things (IoT), machine learning, automation, analytics, content delivery and streaming applications, processing data close to where it is generated for high speed and advanced performance. The use cases cover everything from specialised manufacturing to logistics, fund management and accountancy.




Resolve current cloud-management challenges first
To be fully ready for these advances, however, organisations will need to ensure that edge computing does not complicate what are already becoming unwieldy and increasingly costly hybrid architectures. According to the Flexera 2022 State of the Cloud Report, cloud spending by SMEs increased significantly in 2021, with 53% now spending more than $1.2m – up from 38% in the previous year's report. Some 80% of organisations now have a hybrid cloud architecture, and 89% use multiple clouds.

This extra cloud investment could be allocated more efficiently by placing workloads where they work best (and at the lowest cost), which would lead to major gains in organisational agility. The reality, however, is too often an efficiency-sapping complexity of management and ever-higher costs, especially for SMEs where IT teams are small. As organisations spread their data and workloads across different clouds and their own data centres, management difficulties mount. In the Thales 2021 Data Threat Report, only 24% of organisations responding said they fully knew where their data is stored, and according to Flexera, organisations estimate they waste 32% of cloud expenditure. Not entirely surprising when so many businesses expand cloud workloads in response to events or sudden requirements rather than as part of a strategy.

Hybrid architectures should give organisations the agility to thrive in a data-driven world, mixing on-premises data centres, colocation, and cloud. Enterprises of all sizes should have the power to right-size workloads, deploying policy-based and coordinated service provisioning and management. These are capabilities that organisations need to be ready for edge computing. They need to get their hybrid architecture in order – but how to do it when time and expertise are short?

Refurbishing hybrid architecture for the edge
Companies should start by revisiting what they want from the cloud, reassessing whether their cloud and on-premise environments meet their unique business objectives. They need to understand their current patterns of resource consumption across their entire architecture. For instance, lack of visibility means IT departments are often unable to calculate the difference between on-premises and cloud costs for the same size of virtual machine (VM) – a comparison sketched at the end of this article. And in many businesses, there is often poor understanding of where to locate workloads for optimal cost and performance. Workloads with stable performance requirements can, for example, be more cost-effective in a private cloud on a longer-term contract. Architecture also needs to be flexible to cope with peaks and troughs, with the capacity to scale automatically within parameters.

Next-generation cloud management tools
What businesses need, SMEs especially, is a far more effective set of management tools for hybrid architectures. Preferably, they should adopt a next-generation cloud platform that optimises cost and performance regardless of environment, providing control and a transparent view of data, and unifying management across all clouds and on-premise data centres. These more sophisticated solutions address security and all the management difficulties of hybrid architectures across multiple cloud environments. Organisations regain visibility and control and therefore choose providers and services for the best value. They can deploy, allocate and migrate resources, using a plan they have developed in collaboration with the platform provider.

Just as importantly, these next-generation tools are designed with edge computing in mind, unlike the cloud management platforms provided by the big names in public cloud infrastructure. The next generation of cloud platforms enable IT departments to obtain all the innovation of edge computing while maintaining, controlling, and optimising access to the most sensitive data and workloads wherever they are – including secure locations.

Doing the groundwork
Organisations should prepare with a workload assessment, using best-in-class software to perform a stock-take of an organisation's entire estate, identifying usage of every server. This will examine different cloud environments and recommend where workloads should go in line with the business objectives of each deployment. Inputs into this process should include the organisation's expected AI and machine learning needs and requirements for data orchestration, along with specific security and compliance requirements. Configuration is then accomplished in line with a business' specific needs, making the different hardware and software elements fully interoperable.

Once they have a handle on all their environments and requirements, organisations should also ensure they have maximum flexibility and resilience via cloud on-ramps, such as Megaport, which provides high availability of cloud services and the ability to add or change cloud connections. Fast fibre connections to the big public cloud providers' hubs are also important, since organisations will still need to use the resources of the hyperscalers. It makes sense to store certain types of data with AWS, Google, or Azure, and of course, to continue using proprietary applications.

A genuine edge infrastructure platform
Businesses must then ensure the edge infrastructure platform they adopt really does have the network of strategically-sited regional data centres and fast connectivity with the major cloud providers' hubs. It must offer low latency and full route diversity to provide maximum resilience. A genuine edge platform will have an ecosystem of partners and specialist providers from which any newcomer to edge computing will benefit, accelerating implementation and amplifying the benefits.

If businesses approach edge computing with the right partners, they can draft an edge strategy that leaves little to chance, enabling them to understand which use cases will deliver the greatest gains and what is practical to start with. They will achieve the right balance between use of their edge data centre and the hyperscalers. A business should have the freedom to decide where data should be located and how it wants to access it, whether for compliance, latency, or cost reasons.
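To close, here is the kind of like-for-like comparison the article notes many IT teams cannot currently make; every figure below (server price, host density, power price, cloud rate, overhead factor) is an invented assumption rather than Pulsant pricing or a recommended methodology:

```python
# Illustrative monthly cost of one VM on-premises vs in a public cloud.
# All inputs are assumptions for the sketch; substitute your own numbers.

def on_prem_vm_monthly(server_capex: float, amortisation_years: float, vms_per_host: int,
                       host_power_kw: float, electricity_per_kwh: float,
                       overhead_factor: float = 1.5) -> float:
    """Amortised hardware plus electricity, divided across the VMs on the host.
    overhead_factor crudely lumps in facility, licences and support."""
    hardware = server_capex / (amortisation_years * 12)
    power = host_power_kw * 730 * electricity_per_kwh   # roughly 730 hours per month
    return (hardware + power) * overhead_factor / vms_per_host

def cloud_vm_monthly(hourly_rate: float) -> float:
    return hourly_rate * 730

on_prem = on_prem_vm_monthly(server_capex=8000, amortisation_years=4, vms_per_host=20,
                             host_power_kw=0.5, electricity_per_kwh=0.30)
cloud = cloud_vm_monthly(hourly_rate=0.12)
print(f"on-prem VM: ~£{on_prem:.2f}/month")
print(f"cloud VM:   ~£{cloud:.2f}/month")
```

With these particular assumptions a steady, densely packed private host works out cheaper per VM, which is consistent with the article's point that stable workloads can sit best in a private environment while variable ones suit the public cloud.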



INDUSTRY INSIGHT

Industry Insight: Time to get clean
Chris Pennington, Director of Energy and Sustainability at Iron Mountain, explores how clean energy can help drive data centres towards a sustainable future.

Securing a sustainable future is more important today than it has ever been. Specifically, it is a race against time to achieve net zero carbon emissions, because whilst we are making material progress towards this goal, climate scientists are clear – we still have a long way to go. Fortunately, the pace of corporate action in this area is accelerating. As we continue to fuel the tides of change, companies need to focus on solutions that not only help them achieve their carbon reduction goals, but assist their customers in achieving their own climate goals too. These positive impact solutions become a force multiplier as we collectively fight with rigour for a net zero future.

Data centre operators who buy large amounts of electricity can make huge strides in carbon reduction by embracing renewable energy across their operations. As they invest in renewable energy procurement for their facilities, they can pass on its benefits to their customers, and thus the development of a clean energy ecosystem begins. Encouragingly, customers of corporate colocation data centres are increasingly seeking more sustainable energy supplies, which, thanks to recent progress, they will be able to access more and more. Operators in our industry, Iron Mountain amongst them, have proven that renewables are a reliable and cost-effective energy source by activating innovative procurement solutions, and that is making clean energy more accessible to all.

We're all in this race together. Ultimately, organisations should work with each other to share lessons learned through groups like the Clean Energy Buyers Association (CEBA), which is committed to supporting global energy decarbonisation. Simply put, data centres must recognise that we have a significant opportunity to both serve our customers and work to reverse the damage to our planet. Environmental awareness should be woven into everything we do as we seek to address the challenges to sustainability in our industry and what we can do to overcome them. Our shared goal should be to demonstrate to our customers that we can help them to achieve their own sustainability goals as well.

A green approach to power supply
While just a small part of the IT sector overall, data centres contribute substantially to its power usage and carbon emissions. The need for such operations in this day and age is unavoidable, but there are things that can be done to minimise the carbon footprint. Supporting clients in their pursuit of zero carbon is core to this. Data centre companies purchase impactful renewable energy at scale and can provide their customers the power they need as they need it, from a single rack to an entire data hall.

Developing further renewable supply channels is also key. We now know that it is possible to create renewable energy procurement solutions at a competitive cost, and if more data centre companies work towards this, more data centre customers all around the world will be able to enjoy long-term clean energy contracts with stable costs.




It is also important that the data centre itself is considered as part of the environmental solution, so that operations are sustainably minded end-to-end. Green building certifications, including BREEAM for example, are the first step on this journey. Fundamentally, renewable energy is not only environmentally and socially responsible, but also cost-effective and reliable. Those with concerns over reliability can rest assured that the colocation data centre space is deeply focused on uptime reliability for clients, including onsite generation.

Alternative resources for a low carbon future
There have been questions in recent years about the potential for new alternative energy resources being leveraged for backup power, and there are plenty of opportunities to be explored. Batteries, for example, are a likely contender in the nearer term, as the ability to add in additional incremental storage can be realised as battery costs decrease. Therefore, designing a new facility to incorporate megawatt storage space may become increasingly efficient.

Other solutions, such as hydrogen, are a more difficult proposition today, but should be considered on a long-term basis. Hydrogen certainly has a role in the low carbon future, but currently, there are big barriers to overcome before we can effectively harness it as an energy resource. The cost of hydrogen is high, and the distribution will be challenging, so governmental buy-in is necessary before this option can begin to be fully explored.

Looking ahead
From both an environmental and financial standpoint, there are many opportunities to keep improving once you have implemented a clean energy system.

Tracking emissions with both market-based and location-based methodologies is integral, as it allows businesses to recognise the macro-impact of their renewable energy procurement and track how their local grids are also getting greener. The more we can all drive renewable energy on to local grids, the more performance will improve – for all of us, across the board.

Financially, you need to gain more awareness of when carbon-free power needs to be placed on the grid to become fully decarbonised. Renewable power is the lowest cost power out there, whereas unmatched (carbon-based) hours are the most expensive. Thus, the more you invest in renewables on your local grids, better align your consumption with when clean energy is available, and develop storage to bring these two pieces together, the lower your cost of power will be and the more resilient your grids will become.
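A simplified sketch of the two accounting views mentioned above; the consumption volumes and emission factors are invented for illustration, and the method is heavily compressed compared with formal Scope 2 guidance:

```python
# Illustrative location-based vs market-based emissions for one year of consumption.
# Factors (kgCO2e/kWh) and volumes are assumptions, not real grid or contract data.

consumption_mwh = 50_000                 # annual electricity use (assumed)
grid_average_factor = 0.23               # local grid average intensity (assumed)
renewable_contracted_mwh = 40_000        # PPA / certificate-covered volume (assumed)
residual_mix_factor = 0.35               # factor applied to unmatched supply (assumed)

location_based_t = consumption_mwh * 1000 * grid_average_factor / 1000
unmatched_mwh = max(0, consumption_mwh - renewable_contracted_mwh)
market_based_t = unmatched_mwh * 1000 * residual_mix_factor / 1000

print(f"location-based: {location_based_t:,.0f} tCO2e")
print(f"market-based:   {market_based_t:,.0f} tCO2e "
      f"({unmatched_mwh:,} MWh unmatched by renewable contracts)")
```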

Ultimately, although driven by the desire to simply protect our planet, embracing clean energy is very much geared for the greater good of our industry into the future. Committing to sustainability is no longer a 'nice' thing to do, but an essential one for longevity in an environmentally-conscious world. Together, we must focus on achieving carbon-free energy provision for every hour of every day, as per the UN Compact on Energy 2021, and strive for a greener grid overall – the benefits will not only be a healthier planet, but a healthier bottom line.



PRODUCTS

Teksan’s uninterruptible power for data centres


Teksan offers innovative products and guarantees the energy security of data centres in many countries worldwide with its project-specific solutions, expert team, and experience. Teksan carries out many projects with internationally recognised earthquake-certified products that meet Tier III and Tier IV criteria and that are recognised by the Uptime Institute for data centres.

Teksan was chosen for the data centre of one of the world's leading IT companies in Poland and for the data centres of critical public institutions in several leading European countries. In an important project for a public institution in Turkey, two 312 kVA Ford Ecotorq engines from Data Center Group were used with seismic isolators and limiters, a low noise level container, a fire detection and extinguishing system, and an exhaust filter system that can work continuously, plus a special cold weather heating system, as per the agreement.

Teksan's project-specific solutions and seismically certified products that meet Uptime Tier III and Tier IV criteria continue to be used in major projects in many countries around the world.

Teksan • info@teksanuk.com • www.teksan.com/en

Smart Camera Box for video surveillance


The Smart Camera Box from Phoenix Contact connects PoE devices with Ethernet networks. As an example, IP cameras can therefore be connected to a video server. At the same time, the all-in-one device replaces modular control cabinet solutions. This saves a great deal of time during planning and installation. Extensive control cabinet planning, complex ordering processes, as well as installing and wiring devices in a control cabinet are no longer necessary.

Whether for splicing FO lines, for protecting the electronics against overvoltages, or for managing the Power over Ethernet ports – all functions necessary for the video system are included. With its compact design and the integrated mounting adapter for wall and mast mounting, the Smart Camera Box can be mounted quickly and securely by just one person.

Phoenix Contact Ltd • info@phoenixcontact.co.uk • www.phoenixcontact.co.uk

Automated data centre inventory tracking with custom UHF RFID and NFC labels


A worldwide ICT company needed to automate how their servers were tracked and managed. With thousands of high value ICT assets in play, the ability to report without error on real-time asset whereabouts proved essential for both commercial success and compliance. In addition, the company was looking for ways to enhance cable maintenance intervention speed and accuracy. Brady Corporation suggested the solution: automated, real-time asset tracking with passive, custom on-metal UHF RFID and NFC labels.

Relevant asset locations, time-stamps, and other data are available in real-time at the click of a button. Staff no longer have to manually count assets and can assess a site's entire ICT inventory in a couple of hours, instead of weeks. The data also enables the company to prevent errors in asset movement through automatic alerts generated via the supporting software. This increases overall efficiency and decreases labour cost. Additionally, compliance with various regulations worldwide is easier when the whereabouts of the entire ICT inventory are available almost immediately in a central location.

Brady Corporation • csuk@bradycorp.com • www.brady.co.uk





FINAL SAY

Navigating the energy crisis
Grid strain in a number of major European nations is proving an ongoing threat to uptime for colocation data centres. This challenge has been compounded by the global energy crisis, with soaring bills further impacting the stability of supply. Billy Durie, Global Sector Head for Data Centres at Aggreko, discusses the potential for flexible energy models to help navigate this challenging period.


As European society becomes increasingly digitalised, data centres are becoming more important than ever to the way the population operates, both personally and professionally. According to Domo's Data Never Sleeps 6.0, it is estimated that the average person creates 1.7 MB of data per second. Naturally, such large quantities of data must be processed, transferred and stored, with this task falling to the world's data centres.

While this uptick in data usage presents an opportunity for the data centre market to grow, it is not without consequence, with the FLAP-D market in particular bearing the brunt of this. For instance, in the Republic of Ireland, electricity consumed by data centres has jumped 144% in five years, leading grid operator EirGrid to introduce strict rules around establishing connections for new facilities. Moreover, this has aligned with one of the greatest challenges to date in the global energy crisis, which has resulted in increased energy bills and greater frequency of power outages for data centres.

Together, these challenges are threatening to undermine the projected growth of the European data centre market. As such, operators must now look towards alternative means of energy procurement in order to guarantee the security of their facilities.

The state of play
In an effort to assess how data centre operators are currently managing these challenges, Aggreko commissioned a new report, The Power Struggle – Data Centres. As part of this research, 253 industry professionals across the UK and Republic of Ireland were surveyed in April 2022, from Junior Manager up to C-suite Executive roles.


The headline findings chime with the issues outlined above, with 50% of those surveyed citing a significant increase in energy bills in the past three years, while 65% have experienced a power outage in the past 18 months. Moreover, industry confidence in government support was lukewarm in both the UK and the Republic of Ireland, indicating that operators do not believe that these challenges will be easily rectified through legislative action. Given that grid strain and the energy crisis are symptoms of more deep-rooted problems, there is a clear need to re-evaluate approaches to power procurement in the data centre sector to avoid encountering these issues again.

Energy as a Service
A possible alternative to a traditional grid connection is Energy as a Service (EaaS) or Power Purchase Agreements (PPAs), wherein end-users access power through a subscription contract. The advantages to this approach are clear, allowing facilities to reduce their reliance on an increasingly unstable grid connection without the need for outright purchase of new equipment or directly managing its use. This concept has grown in popularity with data centre operators in the past few years, with 51% and 49% of respondents in the UK and Ireland respectively now considering generating their own energy on-site.

However, despite the potential of EaaS to help mitigate the energy crisis, it is not without its limitations. For example, many of those surveyed claimed that their current EaaS contracts were subject to fixed-term pricing on a one- or two-year basis. When taking into account the rapidly fluctuating price of energy, it is easy to see how this could be counterintuitive. This concern is even more troubling given that some EaaS providers issue penalties for excessively high usage, meaning that operators may be forced into paying unavoidable fees.

The future of flexible energy
That said, EaaS can still prove a viable solution with some slight alterations to traditional models. Hired Energy as a Service (HEaaS), for instance, is a variation of this system that allows operators to access the benefits of EaaS models without the drawbacks. Here, operators are able to scale their output up or down as desired to help work around demand, without running the risk of incurring penalties.

A truly flexible system such as this, provided by an adaptable supplier, is a key technology in enabling data centres to navigate the combined effects of grid strain and the energy crisis effectively. However, it is important to recognise that these challenges are likely to reoccur, and that only through more large-scale adoption of flexible energy models will data centres comprehensively secure their energy supply and rise to the growing demands of the industry.



