UPS/Standby Power: Demanding resilience (and sustainability)
Could ‘used’ actually mean better?
Stop testing – you might find something wrong
News
04 • Editor’s Comment – The relentless winds of change.
06 • News – The latest stories from the sector.
Features
14 • UPS & Standby Power – Marc Garner of Schneider Electric UK&I highlights the importance of creating critical power systems that can support the growing demand for both sustainability and resilience.
26 • Cooling – With power densities among the hyperscalers continuing to snowball, how do operators go about meeting the cooling challenges of these ever-growing and varied power densities? Phil Smith of Vantage tells us more.
34 • Cybersecurity – The cybersecurity challenge is evolving, but there are a number of simple tips a CISO can use to meet it, as explained by Rob Allen of Kingston Technology.
Regulars
31 • Industry Insight – Old IT equipment need no longer end up on the scrap heap. Here, ITRenew president Ali Fenn redefines the meaning of ‘used’.
36 • Products – Innovations worth watching.
38 • Final Say – Is a ‘Zero Testing’ environment the holy grail of software engineering or a viable reality? Anbu Muppidathi of Qualitest Group elaborates.
Editor’s Comment

Considering human beings don’t like change, we’ve had to deal with a hell of a lot of it over the past year and a half. But it has only gone to show how resilient and capable of adapting we actually are. So, if like me you’re berating yourself for not having used this ‘time’ more wisely, just remember: no one could have foreseen how long this was going to last, and if you’ve managed to make it through what can only be described as a monumental time in our history, then give yourself some credit. Life has not been normal.

But speaking of change, this will in fact be my very last issue of DCR, as I hang up my hat as ER and DCR editor to pursue pastures new. The data centre/tech side of things has always been where my main interest has lain, so when I was offered the opportunity to focus solely on this, I just had to take it. I’ll still very much be milling around the data centre industry, so if you’d like to know where I’ve gone, you can find me on LinkedIn, where I will be updating my whereabouts in due course. I also believe I am the only Claire Fletcher associated with data centres, so my common name shouldn’t thwart me.

Our beloved Kayleigh Hutchins will be taking over as editor of Data Centre Review as of 1 September, and our contributing editor Jordan O’Brien will be taking the reins on Electrical Review. I wish them both the best and would like to thank the team for all their hard work over the last three and a half years. A massive thanks also extends to our readers past and present, as well as all our amazing clients – we couldn’t do this without you guys.

But from me, that’s all folks!

Claire Fletcher, Editor
Claire Fletcher firstname.lastname@example.org
Jordan O’Brien email@example.com
DESIGN & PRODUCTION
Alex Gold firstname.lastname@example.org
GROUP ACCOUNT DIRECTOR
Sunny Nehru +44 (0) 207 062 2539 email@example.com
Kelly Baker +44 (0) 207 062 2534 firstname.lastname@example.org
Wayne Darroch
Printed by Buxton
Paid subscription enquiries: email@example.com
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262
Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher’s discretion. If you wish to apply for regular free copies, please visit: www.electricalreview.co.uk/register
Electrical Review is published by SJP Business Media
2nd Floor, 123 Cannon Street, London EC4N 5AU
0207 062 2526
Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher, SJP Business Media.
ISSN 0013-4384 – All editorial contents © SJP Business Media
Average net circulation Jan-Dec 2018 6,501
Follow us on Twitter @DCRmagazine
Join us on LinkedIn
4 www.datacentrereview.com Q3 2021
5G: ARE WE A COUNTRY OF SECRET CYNICS?
The latest highlights from all corners of the tech industry.
Anyone questioning the safety or merit of 5G on any kind of public platform is seemingly vehemently chastised as one of the ‘tin foil hat brigade’. But behind the scenes, it would appear a large portion of us have been doing exactly the same thing. New online data has shown that the UK is actually the second most ‘sceptical’ country in the world regarding 5G, with nearly 94,000 searches a month concerning the possible negative implications of the technology.
But, but, we’re not ready for 6G, are we?
Could demand for on-premises software surpass the cloud? According to a new report from Dimensional Research, sponsored by Replicated, current customer demand for on-premises software is equal to that of public cloud, and more than 90% of companies surveyed said their on-premises sales continue to rise. In addition, of the more than 400 software vendor engineers, CTOs, CIOs and other decision-makers surveyed, more than 70% of those who don’t currently offer an on-premises option are already planning to in the near future. Bottom line: customer demand for on-prem software delivery isn’t slowing down anytime soon. While the majority of companies cited privacy compliance as the top reason for using on-premises software delivery, others noted that the ability to integrate with established solutions, reliability and customisation options topped their list of reasons for adding the option to their line-up.
If you’d only just managed to get your head around 5G, IDTechEx is way ahead of the curve, exploring which material is essential to making 6G technology a reality. And it may or may not surprise you to learn that the essential material for this cutting-edge technology is graphene, the world’s thinnest material. 6G will initially launch at a few hundred GHz, where several diode and transistor technologies are available in the laboratory.
COULD OUR DEMAND FOR DATA CENTRES HAVE A POSITIVE EFFECT ON POWER?
Papa John’s and Google Cloud come together to accelerate digital transformation
We rely on data centres to live our day-to-day lives, but in doing so, these facilities kick out a lot of power, which isn’t all that great for our environment. With the increase in IoT technologies, 5G and other enhanced computing comes the need for hardware to store all this data. Enter data centres. So essential are they to our modern lifestyles that data centres are being constructed at a rate that will see their number almost double globally in the next 10 years. But with growing demand, short lifespans and huge energy needs, data centres are also a threat to the built environment’s positive environmental progress. That’s why WZMH Architects have come up with a new concept in their Innovation Lab that would harness otherwise ‘wasted’ energy from data centres to create energy-sharing opportunities for other buildings. By co-locating residences and commercial buildings near data centres, WZMH envisions a ‘DC Microgrid-based Community’, which would see these buildings benefit from the sharing of sustainable and reliable power – while also providing options to reduce data centres’ carbon footprint.
It would seem Papa John’s couldn’t resist a ‘pizza’ that digital action, as the company strengthens its relationship with Google Cloud in order to drive digital transformation. Google Cloud and Papa John’s International, one of the world’s largest pizza chains, have announced a new, multi-year commitment to accelerate Papa John’s digital leadership and data cloud strategy. The company will expand its work with Google to migrate its data centres to Google Cloud, which will help meet a surge in demand for online ordering and delivery, and provide a foundation for ongoing innovations and improvements to the customer experience. Happy days.
Is the APAC data centre market about to be thwarted by its own success? The APAC (Asia-Pacific) region is set to be the biggest global data centre market by 2021, but doubts are surfacing as to whether local power grid supply will be able to cope with the growing demand. The speed and concentration of data centre investments in the APAC region pose a huge task for supporting infrastructure, according to a new report by global temporary power specialists Aggreko. Another recent report, from Digital Realty and Eco-Business, suggests that the APAC region is set to be the biggest market for data centres by 2021, with the market for colocation facilities expected to reach US$28 billion by 2024. With hundreds of megawatts of IT load coming online, there are concerns over whether local power grid supply will be able to keep up with demand. Energy supply across APAC data centre hotspots is facing a critical juncture, with grid companies and power networks moving at slower speeds than the technology infrastructure investments. And while focus may turn towards upgrading existing networks to withstand the demands of this growing industry, the cost and time required may simply be prohibitive.
If you sell products or work on projects within Power, Lighting, Fire Safety & Security, Energy Efficiency and Data Centres, make sure you enter:
Your business and your team could be celebrating in the spotlight at the ER & DCR Excellence Awards Gala Dinner on 19 May 2022 at the breathtaking Christ Church, Spitalfields in London! Visit awards.electricalreview.co.uk
DCR, live from your living room (or sofa, kitchen table, pub, wherever really) Yes, that is right, Data Centre Review is coming to you live from the comfort of your own home (or wherever you have internet access) on September 7 & 9. We imagine by now we’re all getting ever so slightly sick of online events, but as the world (starts) to get back to normal, we wanted a last virtual hurrah wherein we needn’t worry about inadvertently starting a third wave of Covid-19, plus unnecessary travel is bad for the planet, right?
Over lockdown, we’ve all been to various virtual events of varying levels of engagement, I’m sure. DCR Live is different, in that we shan’t bore you for hours – or we’re going to try our best not to, anyway – as there is something for everyone. For a start, the event will take place over two days, 9.30 – 12.50, with a day’s break in the middle. Prior to Covid, I (editor Claire Fletcher) could barely handle two consecutive nights out; plus, we felt asking people to listen to us for two days on the trot was just a little bit conceited. The best part of DCR Live is that it will be hosted by yours truly. I obviously jest – that’s probably the worst part, I do apologise. But we have taken the time to put together an agenda which we feel is as relevant as it is informative, with a little bit of taboo mixed in for good measure. Plus, each of our expert speakers has a 25-minute slot to showcase their stuff, which, much like Goldilocks, we felt was ‘just right’. There is also a 20-minute break in the middle, just in case you run out of coffee, god forbid (or gin, we’re not here to judge).
DCR LIVE 2021
Day 1: Tuesday 7 September

On Day 1, once everyone is sitting comfortably, we will begin with a timely keynote from TechUK’s Emma Fryer, who will explore how our industry has been affected in the time of Covid. We then move on to analysis, where we’ll be hearing from 451 Research about the key data centre trends and opportunities for growth and investment beyond the traditional FLAP markets. And how could we ignore 5G? For this to work, do edge and 5G need to grow together (aww), and how will this potentially game-changing technology change the data centre space as we know it? Cyber threats are also something we are increasingly lagging behind on, and continue to underestimate. As the threats we face get increasingly sophisticated, how are we supposed to react? Are the current tools in our arsenals equivalent to taking on an AK-47 with a stick? In some cases, they might as well be. Which leads us nicely onto a discussion regarding the importance of remote monitoring, and where the benefits and limitations of this technology lie. And we will be rounding off Day 1 with a panel discussion addressing the skills, leadership and talent required to drive our industry to where it needs to be. After all, the future isn’t built on potential alone.

Day 2: Thursday 9 September

I still can’t believe it will be September when all this is happening. Perhaps the rain here in Newcastle will finally have stopped and summer will have been, or perhaps I will have started a new life in an ark. After the year we’ve had, anything is possible. But, if I haven’t sailed away to start a new life on the open sea, I will be here on Day 2 to introduce our keynote speaker, Michael Adams, CEO of DataCentreSpeak Consulting. I met Michael at a (real-life) event once, and he honestly brightened my entire day. His eccentricity, out-there (yet scarily accurate) predictions and industry experience mean I could honestly listen to what he has to say all day.
But we have 25 minutes of his wisdom, in which he will be discussing Brexit and what that means for our industry. Michael will be followed by Marc Garner of Schneider Electric, who will be considering the next wave of digital transformation and how the data centre of the future will need to up the ante if it is to accommodate it. With everything from data centres becoming more sustainable, to meet business needs responsibly; more efficient, to optimise cost, speed, space and capital; more adaptable, designed for new technologies; and more resilient, to reduce vulnerability to unplanned downtime – how different will things really look? Speaking of how different things will look, construction is changing, and it’s changing in the form of smart buildings and smart cities. Yet behind all the fancy tech, smart cities need smart data centres to function. Stijn Grove, managing director at the Dutch Data Centre Association, joins us to explore what actually makes a city ‘smart’ and highlights why not all ‘smart cities’ are created equal. And if we’re going to talk about smart cities, we absolutely couldn’t leave out sustainability, the word currently on the lips of DC operators the world over. Carrying on from our previous speaker, Ian Bitterlin, consulting engineer at Critical Facilities Consulting, will be asking:
does smart automatically equal sustainable? And with consumer demand for green credentials on the rise, how can facilities substantiate and prove their sustainability claims? Our penultimate speaker, Steve Bowes-Phipps, senior digital infrastructure consultant at PTS Consulting and board advisor at Data Centre Alliance, will be exploring the edge: whether this technology is the key to facilitating a reduced carbon footprint and reaching that all-important holy grail of sustainability, as well as helping bring to life new technologies such as autonomous vehicles, e-health, e-sports and the aforementioned ‘smart city’.
Last, but certainly by no means least, we have our closing keynote, to be delivered by Tarquin Folliss, vice chairman at Reliance acsn. This is probably one of our more taboo subjects, as we discuss ‘Life after Huawei’. Now that Trump has left the (centre) stage, what is the status of business with the Chinese telecoms giant? How will the 5G roll-out progress with all Huawei components stripped away? Are Western countries, the UK included, now going to be too reliant on a small number of less capable tech vendors? I for one am very interested to hear what Tarquin has to say on the subject, as so many remain tight-lipped.

Questions?

That’s about it covered, I think. Should you have any questions for any of our speakers, you can send them through during the event and I shall be filtering through them and getting as many answered live as time allows. If your question can’t be addressed live, some of our speakers will be hanging around after their sessions to answer anything outstanding via the convenient little chat box that will be at the side of your screen throughout the event. If anyone has to rush off, email addresses will be provided, so rest assured, you will have your answers!

Fingers crossed, events like this will soon be back to happening in the flesh, where you needn’t supply your own alcohol and don’t have the cat staring at you for the duration of your ‘networking’ time. Anyway, we do hope you can join us on one or both of the days. You can register for the event via www.datacentrereviewlive.com, and if you can’t join us live, the content will be accessible via our YouTube channel after the fact, so all is not lost! We hope you enjoy the line-up.
Demanding resilience (and sustainability)
These days, if you can’t offer your data centre customers the levels of resilience and sustainability they require, then you might as well get out of the race. Here, Marc Garner, VP, secure power division at Schneider Electric UK&I, highlights the importance of creating critical power systems that can support these ever-growing demands.
Today, energy efficiency is considered one of the key pillars of data centre sustainability. However, a challenge that runs in parallel is the need for mission-critical reliability. Often, when a battery backup system becomes more resilient – such as an Uninterruptible Power Supply (UPS) deployed in an N+1 configuration – efficiency is the first aspect sacrificed. Owners and operators therefore need to consider not just the types of infrastructure they are deploying, but also the design of their data centres, their circular attributes and their ability to integrate with both renewables and the grid. Having a well-rounded, highly efficient, holistic approach is not only good for reducing operational expenditure (OpEx), total cost of ownership (TCO) and carbon emissions, it’s fundamentally good for the environment.
Why sustainability matters

In recent years, data centre operators have come under increasing pressure to make their facilities more efficient, environmentally friendly and sustainable. A growing global awareness of the effects of climate change, combined with end-user demands for sustainability, has seen a number of transformative initiatives take place within the sector, including the emergence of the Climate Neutral Data Centre Pact, setting ambitious
targets to help operators become carbon neutral by 2030. In response, trade associations such as the European Data Centre Association (EUDCA) and Cloud Infrastructure Providers in Europe (CISPE) have helped to create a Self-Regulatory Initiative that sets standards for sustainability and a drive to meet EU targets. Both of these bodies have members who operate both inside and outside the EU, so their regulatory initiatives will apply to data centre operations across the continent as a whole. Among the measures agreed is a commitment to ensuring that all new data centres in Europe will meet an annual Power Usage Effectiveness (PUE) ratio of 1.3 or 1.4, depending on the climate region in which they are located. Best practices mandated by the initiative include commitments around energy efficiency targets, carbon-free energy generation, water conservation and the circular economy. By some estimates, energy is responsible for over 80% of the world’s CO2 emissions, and data centres are estimated to represent between 1% and 2% of global electricity consumption. Add to that the tremendous growth of data centre capacity – commercial property giant CBRE anticipates that Europe will see a surge of over 400MW of new data centre space built in 2021, approximately 20% more than in recent years – and efficiency and sustainability become more critical than ever. Customers are also looking to align with organisations embracing sustainable business practices. A recent survey by 451 Research found that 97% of colocation customers are demanding contractual commitments to sustainability, and of the 800+ global operators surveyed, more than half believe that efficiency and sustainability will be important competitive differentiators within three years. A colocation provider who ignores or diminishes the importance of efficiency and sustainability can rest assured that their competitors will not.
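For readers unfamiliar with the metric, PUE is simply total facility energy divided by the energy that actually reaches the IT equipment – a value of 1.0 would mean every watt goes to IT. A minimal sketch of the calculation (the figures below are invented for illustration):

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_kwh

# Hypothetical annual figures: 13 GWh consumed by the whole facility,
# of which 10 GWh is used by the IT equipment itself.
print(round(pue(13_000_000, 10_000_000), 2))  # 1.3 -- on the pact's target
```

The closer the ratio gets to 1.0, the less energy is being spent on cooling, power conversion and other overheads relative to useful IT work.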
Yet, while 55% of surveyed operators were already taking some action in this regard, there is still more work to be done.

Efficiency and resilience

The evolution of today’s digital economy has meant that application uptime and uninterruptible power are, in essence, business-critical. Power protection systems that safeguard against service disruption are paramount, but the need for sustainable backup solutions is also abundantly clear. Data centre UPS systems must incorporate features that provide assurance against downtime without placing unnecessary additional burdens on overall power consumption. From a conceptual point of view, modular UPSs that can be right-sized or scaled to match their load ensure that the risk to IT infrastructure is mitigated by just the ‘right’ amount of battery backup. Another key aspect is the operating mode, which can boost the efficiency of a UPS while barely compromising on the level of redundancy offered. Modes such as this can enable users to enjoy the highest level of energy savings without sacrificing load protection. Schneider Electric’s patented ECOnversion mode, for example, offers UPS efficiencies of 99%, while also boasting pioneering safety features, such as its ‘Live Swap’ function. This allows power modules to be added or replaced while the UPS is online and fully operational – ensuring unscheduled downtime is kept to a minimum during the replacement process. UPS systems with longer battery lives, especially if they can withstand a much greater number of charge and recharge cycles, also offer many
advantages in terms of sustainability. Those powered by lithium-ion (Li-ion) can offer users longer battery life, a lower TCO over the lifecycle and reduced carbon emissions. Further, the greater number of charge and recharge cycles offered by Li-ion chemistries opens up the possibility of collaborative measures, such as peak shaving and microgrids. These allow stored energy to be utilised efficiently to reduce the demand on mains power. Peak shaving applications can also ensure that higher tariffs, designed to encourage operators to remain within agreed power-consumption levels, are avoided by switching temporarily from mains to battery supply as limits are approached. Such capabilities, therefore, offer the user a means of both integrating with renewables and the grid, while delivering sustainable power protection.
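The peak-shaving behaviour described above boils down to a simple threshold rule: when site demand approaches the agreed ceiling, cover the excess from the battery, up to its rating. A minimal sketch (all figures are hypothetical, not drawn from any particular UPS):

```python
def battery_discharge_kw(demand_kw: float, ceiling_kw: float,
                         battery_max_kw: float) -> float:
    """kW to draw from the battery, capped at its rating, so that
    grid draw is held at the agreed ceiling where possible."""
    excess = demand_kw - ceiling_kw
    return max(0.0, min(excess, battery_max_kw))

# Agreed grid limit of 800 kW; battery can supply up to 150 kW.
for demand in (700, 850, 1000):
    shave = battery_discharge_kw(demand, 800, 150)
    print(f"{demand} kW demand -> {shave:.0f} kW from battery, "
          f"{demand - shave:.0f} kW from grid")
```

At 1,000 kW the battery is fully committed and the grid still sees 850 kW – excursions like that above the ceiling are exactly what battery-sizing studies aim to avoid.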
Sustainable, circular considerations

A final aspect to consider is the circularity attributes of an uninterruptible power supply. Schneider Electric is committed to providing data centre operators with the technology to combine efficiency and sustainable operations with the need for maximum resilience. To achieve this, products labelled as Green Premium can ensure vendors are crystal clear about the sustainability impact of their hardware systems, further helping end-users to gain a greater understanding of their embodied carbon footprint. Such aspects include transparent environmental information about products, minimal use of hazardous substances and compliance with regulations such as the Restriction of Hazardous Substances (RoHS) directive and the European Union (EU) Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation. Further, environmental disclosures such as a Product Environmental Profile (PEP) or circularity profiles provide end-users with guidance on responsible end-of-life treatment of products, along with circular value propositions. Such measures enable owners and operators to take a step further in their sustainability considerations and build upon their energy efficiency work. Today, balancing the need for sustainable power protection against demands for resiliency is paramount. Yet by carefully considering the type of UPS technologies deployed, the design of the system, and by broadening the sustainability conversation to include renewables and the circular economy, data centre operators now have the means to ensure operational continuity while minimising impact on the environment.
Getting you on the right track In this article the experts at Starline highlight the importance (and benefits) of evaluating flexibility in your track busway system and why not all systems are created equal.
For data centre owners and Consulting and Specification Engineers (CSEs), busway systems are rapidly becoming the solution of choice for effective power distribution. Track busways can be suspended from the ceiling, placed on vertical supports, or even mounted on the server cabinets themselves to provide a direct power source to servers and racks. Busway systems offer numerous advantages over traditional Remote Power Panels (RPPs). Suspended or mounted track busways eliminate the need for an RPP, allowing you to make better use of data centre space. Busways also eliminate the need to run power cables and whips under plenums in a raised floor, allowing cooling air from below to flow unobstructed to the servers. If in-row cooling units are used along with busways, the need for the raised floor itself is eliminated. A busway system gives you full visibility over your power distribution system, making maintenance and troubleshooting easier. Unlike with RPPs, electricians don’t have to shut down the entire system or do hazardous work on a live and exposed panel in order to change out a
single circuit breaker. Using plug-in units (also known as tap-off boxes), which are inserted into the busway’s open channel and connected to the internal busbars, you can easily and safely swap out old circuits and replace them with new ones in a matter of minutes. But not all busway solutions are the same; some busway systems work better, last longer and require less maintenance than others. When evaluating a busway system, it is important to consider the system’s flexibility. An effective busway system gives you flexible design and power distribution options, allowing you to build out, scale up and adapt your IT loads according to your changing power and facility needs.

Importance of flexibility

Power requirements in data centres are in a constant state of flux, as IT infrastructure is continuously deployed, scaled up or reorganised according to the company’s needs. Also, many large data centres are now upgrading their power systems at the rack level, from single-phase to higher-voltage (such as 400V or even higher) three-phase power. Facility owners need power distribution systems that can meet the
ever-changing, ever-increasing power demands of their IT footprints. You need a busway system that is easy to install and provides the flexibility of a customisable design to meet the layout of the IT infrastructure within your facility. You also need a busway system that is adaptable to different power levels, allowing you to scale up as the power needs of your IT deployments change. And you need a busway solution that allows for easy maintenance and replacement of parts.

Track busways

If a busway system is the ‘elevated highway’ that allows electrical power to travel from the PDU to servers and racks, then individual track busway sections are the ‘straights and curves’ that make up that ‘highway’. There are typically four types of busway section:
• Straight Busways (a): The main busway sections that deliver power to the IT infrastructure.
• Elbow Sections (b): Used to make a horizontal, 90-degree turn in a busway run, by joining two straight sections.
• Tee Sections (c): Used to create a 90-degree
branch leg, by connecting three different straight sections.
• Power Feed Units (d): A unit that supplies incoming power from the PDU to the busway. A power feed unit may also include power monitoring equipment; a serial, Ethernet or Wi-Fi connection for reporting data; and an Infra-Red (IR) window for thermal scanning.
In a data centre, existing infrastructure is often space-constrained. A track busway system should offer flexibility of design, allowing you to create system layouts that make full use of your IT deployments. The busway system should utilise not just straight busway sections of different lengths, but also elbow and tee sections – not every busway solution has these. Also, the system should offer flexible options for where to place power feed units, including ‘end feed’ units, which are installed on the end of the busway run, and ‘above feed’ units, which are installed along the topside of the busway. In some cases, utilising tees and elbows can reduce the number of end feed connection points that you require. Additionally, you should look for a solution that offers busways with a wide range of amperage options, with plug-in units being interchangeable across the range. A good range for continuous track busways is from 40 to 1,200 amps. This will enable you to scale up your power delivery options easily as your power needs change over time.

Joints

A joint provides a connection between adjacent busway sections, or between a straight busway and an elbow or tee section. Different busway providers use different types of joints, but you should look for vendors that utilise the most reliable kind – a compression-fit joint. A compression-style joint kit consists of (a) a bus connector – that is, copper blade busbars secured to an insulating mounting plate – and (b) a pair of housing couplers.
The joint should be easy to install but should have elements, such as plastic blockers in the housing couplers, that prevent it from being installed incorrectly. The busbar blades on the bus connector provide the electrical connection between busbars in the two adjoining sections. The two housing couplers are then used to connect the aluminium housing of the two sections at the top and bottom of the joint. With this kind of joint, the mechanical connection is entirely separate from the electrical
connection. Even if the screws that secure the couplers become loose, the electrical connection between busbars will remain intact. Joints are an essential element in the flexibility of busway design, allowing you to link together track busway sections to form a busway run. What is important is to have a strong joint that works in tandem with the other elements to form a durable and dependable busway system.

Plug-in units

The plug-in units (also known as tap boxes) distribute the branch circuit power load from the busways to the servers, racks or other equipment. Plug-in units can easily be added to or removed from the busway as needed. A busway solution should offer a wide range of plug-in units to handle different power demands and topologies. For example, if you decide to upgrade your power distribution at the rack level from single-phase to three-phase power, you should be able to easily upgrade your busway system by buying new plug-in units to handle the increased power requirements. Additionally, plug-in units should be compatible
for use with busways of different power levels, for example 250, 400, 1,200 amps. Power monitoring Today, many data centre owners do power monitoring at the PDU and rack PDU level, but this is not enough. For a more complete package of data, and to ensure the reliability and safety of your entire power distribution system, you should look for a busway solution that allows you to do power monitoring at various points along your busway runs. A power monitoring system should have the ability to monitor up to six single-phase branch circuits or two three-phase branch circuits from the same meter. The plug-in units and end feed units should enable you to monitor and report power use at the rack PDU level, as well as over the entire busway run. Additionally, a power monitoring system should offer flexible data reporting options, through wireless Ethernet (802.11), wired Ethernet, and/or serial communications. It should be able to simultaneously use all reporting protocols. It should also offer an embedded web page for access to system configuration or data, or easy integration with your Building Management System (BMS) or Data Centre Infrastructure Management (DCIM) system. Conclusion For data centre owners, CSEs and others who are seeking power delivery solutions, the ultimate goal in selecting a busway system should be peace of mind. You want the certainty and confidence that your power distribution system will always be able to deliver the power you need to your servers, racks or equipment. A busway system is not just a power solution. It provides a competitive advantage, allowing your data centre to stay operational, and delivers flexibility to adapt to the layout and changing power needs of your facility. www.starlinepower.com/busway firstname.lastname@example.org +44 1183 043180
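As a back-of-the-envelope illustration of the amperage scaling discussed in this piece, the sketch below estimates the continuous current a three-phase busway run must carry for a given rack load and checks it against the 40 to 1,200 amp range cited above. The voltage, power factor and headroom figures are illustrative assumptions, not vendor guidance.

```python
import math

def required_busway_amps(total_load_kw: float, line_voltage: float = 400.0,
                         power_factor: float = 0.95, headroom: float = 1.25) -> float:
    """Estimate the continuous current a three-phase busway run must carry.

    I = P / (sqrt(3) * V_LL * pf), with a sizing headroom multiplier on top.
    """
    amps = (total_load_kw * 1000.0) / (math.sqrt(3) * line_voltage * power_factor)
    return amps * headroom

# Example: a row of 12 racks at 8 kW each on a 400 V three-phase busway.
amps = required_busway_amps(12 * 8)
print(round(amps))          # → 182 (continuous amps incl. 25% headroom)
print(40 <= amps <= 1200)   # → True: inside the 40–1,200 A range cited above
```

A run sized this way leaves room to add plug-in units as racks are populated, rather than re-cabling from the PDU.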
Q3 2021 www.datacentrereview.com 17
Prevention is better than cure With summer maintenance season in full swing, Chris Cutler of Riello UPS explores what a preventive maintenance visit should cover and explains how they help data centre operators get the most out of their uninterruptible power supplies.
If the pandemic has taught us anything, it’s that data centres are now as indispensable as utilities like gas, electricity and telecommunications. When the world turned to technology throughout lockdown, it was left to the packed racks in data centre server rooms to carry the load, in more ways than one. Uninterruptible power supplies have played a crucial role too, the unsung hero working quietly behind the scenes to minimise the risk of mission-critical infrastructure going offline. But a UPS system is a sophisticated piece of machinery in its own right, while wear and tear is unavoidable. So, as we enter the busy summer maintenance period, it’s probably a good time to think about giving your UPS a little bit of TLC to make sure it remains in tip-top condition. For data centre operators, an ongoing maintenance contract is an invaluable safety net. If the worst happens and there’s a fault with the UPS, a plan sets out the guaranteed emergency response times for engineer callouts. It also includes the provision for at least one planned preventive maintenance visit (PMV) a year. A PMV provides the perfect opportunity to iron out any potential problems before they have a chance to develop into something more serious. In addition, a PMV offers the chance to perform system tweaks that optimise performance and efficiency. Competence is key Before we explore the ins and outs of a PMV, it’s important you establish that the person you’re entrusting with the job is fully trained and competent. A UPS is a complex piece of kit and it’s not something any electrician or engineer will know the finer working details of. It’s all too easy for an engineer not completely familiar with the UPS to
unwittingly throw an incorrect switch or carry out procedures in the wrong order and you’re suddenly facing a period of unplanned downtime. Never forget that human error is one of the most common – and often avoidable – causes of equipment failure! That’s why at Riello UPS, we introduced our own Certified UPS Engineer Programme covering both our in-house technicians and personnel from authorised UPS service partners. Successful engineers must complete rigorous training on commissioning, maintaining and servicing the UPS, and they all have their own unique ID to prove their competence. So always remember to check whether the engineer is trained for the specific manufacturer and product line. It’s not uncommon for sub-contractors to be substituted in at the last minute if the original engineer suddenly becomes unavailable.
What should a PMV include? A PMV will likely start with the service engineer conducting a thorough visual inspection of the UPS to assess whether there are any early warning signs of wear and tear on any of the components. Similar checks should be carried out on the batteries too for any sign of damage, corrosion, swelling or leaking. The engineer will also physically check all the electrical connections, paying particular attention to circuit breakers, contactors, fuses, cabling, transformers, PCBs, fans, capacitors and communications slots. Many UPS maintenance providers now use state-of-the-art thermal imaging cameras for this task. An increase in heat is a tell-tale sign of a potential failure within both the overall UPS and individual electrical components – it’s a decent guide for when there might be a loose connection. Of course, the advanced thermal imaging equipment is far more adept at detecting these possible hotspots than the hand or eye of any service engineer, no matter how experienced. With the batteries, it’s important to check the terminal connections and ensure they are at the correct torque setting. Your service engineer will then proceed with several mechanical tests on the functionality of the UPS. They’ll download the historical operating and alarm logs, then carry out several tests to see whether the UPS runs properly across a variety of operating modes. Data centre PMVs often incorporate more advanced functional testing, for example, using load banks to apply dummy loads. Such a simulation enables the engineer to test the UPS and batteries at various load levels without ever interrupting your critical load, which can continuously run using the bypass supply. In addition to checking the UPS system and batteries, a comprehensive PMV should also assess the installation environment and whether there’s anything in the surroundings that could potentially cause damage. This would include dust, excessive heat or humidity, and poor ventilation.
Ideally, such issues would have been identified – and hopefully eliminated – during the initial UPS installation and configuration. But server rooms aren’t set in stone, particularly in a fast-moving setting such as a data centre, so there’s always the chance that the circumstances have changed since the last maintenance visit. The PMV also provides the engineer with the perfect opportunity to install any firmware updates. Making sure the UPS is running on the correct and latest software version might sound simple, but it can make a big impact on the unit’s performance and significantly improve energy efficiency, so it’s an important step not to overlook. Once the engineer has completed all the tests, inspections and software updates, they’ll fill out a detailed field service report. This document is packed with useful information for any data centre operator or facilities manager. It contains all the detailed readings from the engineer’s inspections along with a full report of any potential faults and recommended remedial actions, including whether any consumables or components are approaching their end of service life and may need replacing.
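As an illustration of how the thermal-imaging readings from a PMV might be triaged, the short sketch below flags connections and components running well above ambient — the tell-tale sign of a loose connection mentioned earlier. The component names, temperatures and alert threshold are all hypothetical.

```python
# Hypothetical readings from a thermal-imaging pass; names and threshold are illustrative.
AMBIENT_C = 24.0
HOTSPOT_DELTA_C = 15.0  # excess over ambient that warrants closer inspection

readings = {
    "input breaker": 31.5,
    "DC bus capacitor bank": 44.0,
    "battery terminal B3": 52.5,   # loose or corroded connections often run hot
    "fan tray": 29.0,
}

# Flag anything running more than the threshold above ambient.
flagged = {name: temp for name, temp in readings.items()
           if temp - AMBIENT_C > HOTSPOT_DELTA_C}
for name, temp in sorted(flagged.items()):
    print(f"inspect {name}: {temp - AMBIENT_C:.1f} °C above ambient")
```

In practice the thermal camera's own software does this comparison, but the principle — compare against ambient, not an absolute figure — is the same.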
Proactive approach pays dividends The key to good maintenance is to take preventive actions rather than reactive ones. So even though the majority of batteries used in data centre UPS installations come with a five-year or 10-year design life, it’s accepted industry best practice to proactively replace them in year three to four (for five-year design) or year seven to eight (for 10-year design) as it reduces your risk of a serious failure. A similar approach is advisable with two other key UPS consumables: capacitors and fans. Capacitors work together to store energy and improve power quality, and if even only one or two are reaching their end of service life, it places unnecessary stress on your UPS. Fans, meanwhile, keep the UPS’ inverter, rectifier and other parts cool enough to operate safely; failing fans expose these parts to higher operating temperatures, meaning they’ll deteriorate much quicker too. Your post-PMV report may recommend that it’s time to replace ageing fans and capacitors. This is known as a UPS Overhaul and is a cost-effective way of breathing new life into your system. New capacitors and fans boost the overall performance and efficiency of your UPS, whilst also reducing the risk of a major system failure. Adopting such a proactive approach extends the lifespan of your UPS, maximising your budget and lowering your total cost of ownership. In addition, it shows shrewd future-planning as it significantly reduces the likelihood you’ll experience a costly period of downtime or need to replace an entire UPS.
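The proactive battery replacement rule of thumb above can be expressed as a small helper. The 5- and 10-year windows come straight from the article; the fallback for other design lives is our own assumption, not industry guidance.

```python
def replacement_window(design_life_years: int) -> tuple[int, int]:
    """Proactive battery replacement window per the rule of thumb above:
    years 3-4 for a 5-year design life, years 7-8 for a 10-year design life."""
    if design_life_years == 5:
        return (3, 4)
    if design_life_years == 10:
        return (7, 8)
    # Fallback assumption for other design lives: roughly 65-80% of design life.
    return (round(design_life_years * 0.65), round(design_life_years * 0.8))

print(replacement_window(5))   # → (3, 4)
print(replacement_window(10))  # → (7, 8)
```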
Give me strength Here, the experts at Milestone Systems, the global leader in open platform video management software, explore the importance of building resilience into your data centre’s physical security.
Data is often called the new oil. It’s a vital asset that ensures our businesses, homes and cities all run efficiently and are constantly improving. Every aspect of our lives is becoming data-driven, particularly post-Covid. A consequence of this growth, however, is that data – and the data centres that store, protect and process that data – have become prime criminal targets.
Best-in-class security Because data centres are so critical to every organisation’s operations, they must be protected with best-in-class solutions to ensure data security and business continuity. When thinking of data centre security, many will first consider its cybersecurity. However, physical security is just as vital. A firewall won’t be effective if an intruder has already gained access to a server room.
A rapid rise The growth of the data centre sector has been significant and it’s accelerated due to the pandemic. In 2020, a record US $34.9 billion was invested globally in the sector and 2021 is, so far, following the same trajectory with $13.5 billion worth of deals currently in the pipeline. Having more people working remotely, socialising online and turning to solutions like video conferencing, e-commerce and gaming, has paid dividends for the data centre industry.
The cost of poor protection There’s a good reason to protect your data centre in every possible way. The cost of recovering from a data breach currently stands at $3.86 million. This only considers the financial implications of a breach. There will also be ongoing damage to an organisation’s reputation, brand and customer loyalty that could take years – if not decades – to repair. Keeping pace with evolving threats Physical security risks come in many forms, from criminals, spies and other malicious
actors trying to enter a facility, to inside jobs, or even natural disasters like floods, fire and earthquakes. Security leaders must protect against every possible threat for their data centre to remain robust and impassable. Furthermore, to protect against evolving threats and needs, security solutions must be able to adapt to the times and take advantage of the latest protections. Audit existing infrastructure The first step in improving your physical security is to undertake a comprehensive security audit. This will identify your centre’s current strengths and vulnerabilities and it will help you focus your efforts on the protections that will have the greatest impact. It should tell you what data and equipment is on-site, access points that could be vulnerable, the people working on-site who need authorisation (and what areas they can access), and who regularly visits (and your procedure for this). You may also want to consider the proximity and risk of
any natural disasters like earthquakes or flooding zones and how you can mitigate these risks. Invest in internal protections The next step is to ensure your internal protections are up-to-date against the latest threats. As risks can evolve every day, having a regular review of your internal systems and stress-testing them against the latest threats is essential. Some of the technologies to consider are:
• CCTV systems
• A VMS (such as Milestone XProtect) that can consolidate all video, audio, sensor, and other data streams
• Video analytics for thermal imaging, facial recognition, license plate and vehicle recognition, people counting, behaviour monitoring and so forth
• A control centre with a smart wall to monitor everything occurring on-site and bring up detailed views of any suspicious activity
• Access control
• Infrared tripwires and mantraps
• IoT (Internet of Things) sensors that can detect intrusion or emergencies like fire and flood, or even preempt equipment failure.
Your video system needs to provide comprehensive visibility of everything occurring on-site and everyone who is on the premises. If a breach does occur, visual identification (and tracking) of an intruder should be rapid thanks to your video and audio feeds. Behaviour monitoring tools can flag any suspicious behaviour that should be investigated further, while facial recognition data can be cross-checked against employee credentials to ensure the right people are accessing each area. Protect your perimeter Once your internal protections are fortified, it’s time to assess your perimeter and access control. Strong protections here will ensure unauthorised individuals cannot even get close to your facilities. Anti-tailgating and anti-passback solutions will ensure only one person or vehicle enters at one time. Likewise, the list of who has access to high-risk areas should be updated regularly and the credentials of anyone who has changed roles or left must be revoked immediately.
Meanwhile, your VMS should sync with access lists and your access control technologies to ensure all entries and exits are logged and authorised. Visitor and contractor management also needs to be implemented, with your security teams able to follow the locations of all third parties in real-time through the CCTV system and VMS. Wearable trackers can also be integrated with open systems like XProtect, to provide greater detail on who is on-site and where they are.
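The entry authorisation check described above — a VMS synced with access lists, so that revoked or out-of-place credentials are flagged — can be sketched in a few lines. The credential IDs and area names below are invented for illustration; a real deployment would pull both from the access control system.

```python
# Illustrative sketch: cross-check logged entries against a current access list.
access_list = {
    "emp-1041": {"server hall A"},
    "emp-2210": {"server hall A", "plant room"},
}  # credentials of leavers and role-changers are simply absent (revoked)

entry_log = [
    ("emp-1041", "server hall A"),
    ("emp-2210", "plant room"),
    ("emp-0007", "server hall A"),   # revoked or unknown credential
    ("emp-1041", "plant room"),      # valid credential, unauthorised area
]

# Anything not covered by the current access list raises an alert.
alerts = [(who, where) for who, where in entry_log
          if where not in access_list.get(who, set())]
print(alerts)  # → [('emp-0007', 'server hall A'), ('emp-1041', 'plant room')]
```

The same cross-check pattern extends naturally to visitor badges and contractor credentials.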
Consider your utilities Another consideration is redundancy for utilities like electricity and water, to avoid any downtime. IoT devices can monitor your systems to proactively warn of possible failures. It’s worth monitoring and controlling the air quality, temperature and humidity within different sections of your centres to ensure they cannot be exploited. Again, an open VMS like XProtect can integrate with all of these devices to provide a single source of truth for all happenings on-site.
Other benefits of an open system Having an open system will also give your security greater flexibility and scalability. As your data centre expands or you add other sites to your portfolio, the open system can adapt to the extra resources. As explained, having a single point for all physical security insights will decrease your security team’s training needs, so new devices and sites can be up and running in a minimal amount of time. With the flexibility to add new devices, as your security needs evolve you can simply integrate another solution within the XProtect VMS. You also have greater freedom to choose from a wide range of vendors, instead of being limited to just the devices that a closed system supports. This can help with multi-tenant centres where certain devices are preferred by a tenant, or if your organisation only works with specific device types and vendors. It also enables you to only pick best-in-class technologies that work together seamlessly to further reduce vulnerabilities in your system.
Preparing your people Your employees are at the frontline of your security. Everyone needs to understand their role in protecting a premises, whether that’s preventing tailgating as they enter a site, or remaining updated on the latest threats and security processes. Your security and control centre teams will need training in how to use the different security systems and processes. Investing in an intuitive VMS that consolidates the data from CCTV, perimeter protection, access control, IoT devices and more, will help reduce training time. Looking to the future Over the next five years, the rise of data centres is predicted to continue, if not increase. Synergy Research Group chief analyst John Dinsdale predicts that, “the number of operational hyperscale data centres in Europe will grow at eight to 10% per year and we expect UK growth to be in the same range.” As the sector’s profile continues to rise, so too will the threats. Security systems will require regular reviews and updates to keep abreast of the latest risks, as well as to take advantage of next-gen solutions. Investing in an open system like Milestone will help with futureproofing your physical security system, making it easier to deploy new solutions and experiment with emerging technologies.
Much to consider Evidently, a lot of thought and decision-making will go into protecting your data centre from a physical breach; it is something that will evolve and can be constantly improved over time. By starting the journey now, you are making certain there are no cracks in your security. No weakness will be exploited by malicious actors and your cybersecurity will not be undermined by a breach within your walls. To learn more about the physical security of data centres, download Milestone’s latest eBook here: https://bit.ly/2Szh1Go
Clinton Noble, sector manager for data centre power solutions at Finning UK & Ireland, outlines the key considerations when designing generator packages for data centre standby power.
In November 2020, a telecommunications company unveiled 20 new data centres across the UK as part of a £2 billion investment. As the number of new sites continues to grow, data centre operators must prepare by having the required backup power systems in place to consistently deliver a service. Standby power in data centres is mission critical — continuous power prevents outages from damaging mainframes and other IT infrastructure. Having a backup supply also protects customer data that would otherwise be lost, causing financial and reputational damage. There are several elements in a data centre’s operational power system, including the cooling system, the uninterruptible power supply (UPS) and high voltage (HV) and low voltage (LV) switchgear. Diesel generators are a popular choice for these backup power systems because of their reliability, fast response and long runtimes, and because they require minimal maintenance. It’s good practice to design a genset package and standby power system in accordance with the Uptime Institute. As the internationally-recognised data centre authority, it determines what each site tier requires. For instance, a Tier IV site with maximum resilience must have a continuous cooling capability, unlike a Tier III site. There are also other design factors that data centre operators should consider.
Size Data centres consume large amounts of power and together make up around one percent of the world’s total energy consumption, according to the UK Energy Research Centre. When designing a backup generator, the first, and often most important, factor is the power rating. The standby genset must have sufficient power to keep all servers and equipment online during outages. When sizing a generator, the site’s required loads are key. Operators will need to think about start-up currents, terminal voltage, voltage and frequency variations and other factors. It’s also helpful to consider UPS efficiency, lighting and cooling loads and how these will shape the estimated requirements. There are tools available that can determine the right generator size. One example is SpecSizer, which can build a load profile and select the optimum genset. Most data centres, including new sites, have limited floorspace. Operators can overcome this challenge by selecting a generator with a high power density. This is a measure of the number of kilowatts
produced in relation to size — the higher the power density, the greater the output generated in the plant room. High power density generators produce more power from smaller units and require less ancillary equipment than larger machines, reducing installation and servicing costs. Ambient temperature The climate and ambient temperature inside the data centre are also key. The UK is gradually getting warmer and this can impact genset performance. According to the Met Office’s 2019 State of the UK Climate report, the UK’s average temperature that year was 1.1 degrees Celsius above the levels recorded between 1961 and 1990. Site conditions, including altitude, relative humidity and variable ambient temperature are all things that designers should consider early on. If the ambient temperature in a data centre is not as predicted, the genset or the ancillary parts may need redesigning to accommodate the difference. If a data centre becomes too hot or cold, service issues can occur. Because of these temperature constraints, operators need to ensure their generator has a suitably rated ambient capability. They will also need a way of removing waste heat that’s produced during power generation. Design options may include minimising the number of enclosures so that cooling air is not restricted and can flow more easily around the generator.
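The power density figure defined above — kilowatts produced in relation to physical size — can be put into numbers when comparing candidate units for a space-constrained plant room. The gensets and footprint figures below are hypothetical, purely to illustrate the comparison.

```python
def power_density_kw_per_m2(rating_kw: float, footprint_m2: float) -> float:
    """Power density as defined above: output relative to physical size."""
    return rating_kw / footprint_m2

# Hypothetical candidate gensets for a space-constrained plant room:
# (standby rating in kW, footprint in m²).
candidates = {
    "genset A": (2000, 25.0),
    "genset B": (2000, 18.5),
}
best = max(candidates, key=lambda n: power_density_kw_per_m2(*candidates[n]))
print(best)  # → genset B: same output from a smaller footprint
```

All else being equal, the higher-density unit frees floorspace for racks — the scarce resource the article identifies.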
Maintaining uptime Once the new genset and backup power system are installed, carrying out regular preventive maintenance is vital. If a generator fails to start, it’s often due to fuel or battery issues. If the battery’s water levels are low, this can cause damage. Fluids like oil and coolant can also become contaminated, which can lead to engine malfunction and suboptimal performance. For instance, if there are too many metallic particles in the lube oil, this can cause excessive wear. Contamination rates vary depending on duty cycle, load factor, age and fuel type. Data centre operators can minimise unplanned downtime by having qualified and experienced service teams onsite. These specialists will regularly monitor the battery’s condition by checking the water level and voltage and cleaning and tightening the terminals. They can also check engine fluid levels, take samples of the engine’s fluids and check for contamination, with detailed reports and advice issued to the operator. Carrying out preventive action means data centre operators can spot any issues before they cause engine downtime. This makes it easier to protect sites from outages that would normally cost time and money, as well as prolong the life of their capital investment. With one telecommunication company alone planning to open 20 data centres, reliable backup power systems will be vital in protecting new sites like these from outages. Data centre operators should think about the design of their gensets and carry out regular preventive maintenance to ensure uptime.
Managing power continuity How do you correctly size a UPS system? The million dollar question! Well maybe not quite the million dollar question, but one well worth asking. An incorrectly sized UPS will impact both CAPEX and OPEX, and you may be wondering how much this will cost you in the future. Louis McGarry, sales and marketing director at CENTIEL UK Ltd, delves deeper.
To correctly size a UPS, you need to understand the actual load, and there are many things to consider, but most important is the load profile over time. Questions you may want to ask yourself include: how has the load behaved historically? What is required today? What may be required in the future? Understanding this profile allows a more accurate roadmap to be developed for the path ahead. It also means the UPS can be sized correctly on day one, but with a realistic understanding of what day two might actually look like. Using a true modular UPS system can help manage your roadmap and provide the flexibility required to adapt with any planned or unplanned changes to the load profile. This technology provides the option to add UPS modules to suit the actual load (right-size) and further modules if and when needed. A truly modular UPS should not only have the ability to match the load profile physically, it also needs the functionality to optimise the efficiency of the overall system automatically. CENTIEL’s True Modular UPS system is designed with maximum efficiency management (MEM) as standard. MEM automatically puts the modules that are not required to support the load into ‘hibernation’, and ensures that all modules within the system have the same lifecycles and share the same running time. They age together and for longer. This is especially beneficial for installations where the load profile fluctuates on a regular basis. For example, one recent client only required two of the three UPS modules six months of the year in order to support the load and maintain redundancy. MEM functionality is helpful for the medical, business and financial sectors where the demand can change on a daily basis, influenced by peak and off-peak business hours. Even colos and data centres, where load demand is impacted by changes to client demand or contract changes, are able to utilise this technology to optimise their system efficiency on a continuous basis. This works well for systems with fluctuating loads, but what about where the load simply increases? How do clients know when to add more modules without unnecessarily oversizing? Making an early assessment about potential ‘growth spurts’ can help you make the right decision from day one when selecting UPS module sizes (e.g. 10 kW or 50 kW). This can keep the cost of adding incremental modules down, and once again prevent the need to oversize. Remember, when adding UPS modules you only need to purchase what you need for that stage of growth. Remote monitoring of the UPS with tools such as Simple Network Management Protocol (SNMP) is also important, as it is possible to see what is happening in real-time from any location, and it will alert you to any unexpected issues. Monitoring will also provide information on how the UPS is loaded and if it could be better balanced before the need to upgrade,
optimising the capacity of the system. Close monitoring and regular service visits can also identify times when the UPS has gone into overload. In this situation, users will get an alarm which remains on until the overload has been resolved. All UPS systems are designed to run for a short period of time at overload capacity. However, if this happens regularly, it could be an indication that it may be time to add a module to maintain capacity or regain redundancy. A flexible system is integral to enabling those ‘growth spurts’, supporting the journey to day two. For seamless integration into live systems, it is also necessary to ensure that the UPS has the ability to add UPS modules safely, with zero downtime. Being able to fully isolate and test new equipment within a live system (safe hot-swap) before it accepts any load mitigates any potential faults before introducing it to the rest of the system. In a system without safe hot-swap, any issue with a module going into a live system could have catastrophic consequences, putting your load at risk. Careful management of power continuity can maximise power protection and minimise the total cost of ownership at the same time. At CENTIEL, our design team has been working at the forefront of technological development for many years. We are the trusted advisors to some of the world’s leading institutions. We have developed our pioneering 4th generation true modular UPS system CumulusPower, which offers industry-leading availability of 99.9999999% (nine nines) through its Distributed Active Redundant Architecture (DARA), combined with a low total cost of ownership (TCO) through its Maximum Efficiency Management (MEM) and low energy losses. For more information about our full range of UPS solutions or to request a free site survey, please visit www.centiel.co.uk.
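The right-sizing logic described in this article — add modules to match the load, keep redundancy, and let MEM-style management hibernate whatever is left over — can be sketched as a simple model. This is our own simplified illustration, not CENTIEL's actual control algorithm, and the module sizes are the example values from the text.

```python
import math

def modules_needed(load_kw: float, module_kw: float, redundant: int = 1) -> int:
    """Modules required to carry the load plus N+<redundant> redundancy."""
    return math.ceil(load_kw / module_kw) + redundant

def hibernated(installed: int, load_kw: float, module_kw: float,
               redundant: int = 1) -> int:
    """Modules a MEM-style scheme could park, keeping load + redundancy covered."""
    return max(0, installed - modules_needed(load_kw, module_kw, redundant))

# A 120 kW load on 50 kW modules with N+1: three to carry the load, four installed.
print(modules_needed(120, 50))  # → 4
# If the load drops to 80 kW, one of the four can hibernate.
print(hibernated(4, 80, 50))    # → 1
```

The same arithmetic answers the "when do I add a module?" question: as soon as `modules_needed` for the forecast load exceeds the installed count, it is time to buy the next increment.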
Cooling a hyperscale With power densities among the hyperscalers continuing to snowball, how do operators go about meeting the cooling challenges of these ever-growing and varied power densities? Phil Smith, construction director UK at Vantage, tells us more.
Power densities in the hyperscale data centre era are rising. Some racks are now pulling 60 kW or more and this trend will only continue with the growing demand for high performance computing (HPC), as well as for supporting new technologies such as artificial intelligence. In parallel with these challenges, there is the ongoing debate over the environmental impact of data centres, presenting considerable environmental, compliance and CSR responsibilities. Putting super-efficient cooling and energy management systems in place is therefore a top priority. However, modern fit-for-purpose facilities are ‘greener’ and increasingly efficient, despite the rise in compute demands. Best practice has necessitated real-time analysis and monitoring for optimising cooling systems and maintaining appropriate operating temperatures for IT assets, without fear of compromising performance and uptime. Central to this, and to maximising overall data centre energy efficiencies, are integrated energy monitoring and management platforms. An advanced system will save tens of thousands of pounds through reduced power costs and by minimising environmental impact. For cooling there are various options. One option is installing the
very latest predictive systems and utilising nano-cooling technologies. However, these may only be viable for new purpose-designed data centres rather than as retrofits in older ones. Harnessing climatically cooler locations which favour direct-air and evaporative techniques is another logical step, assuming such locations are viable when it comes to location accessibility, available power and connectivity.
Taking Vantage’s 750,000 sqft hyperscale facility in South Wales as a working example, it must satisfy and future-proof highly varied customer requirements, from delivering standard 4 kW rack solutions up to 60 kW per rack and beyond, with resilience at a minimum of N+20%. The cooling solutions deployed intelligently determine the optimal mode of operation according to the dictates of the external ambient conditions and individual data hall requirements. This enables operation in free-cooling mode for most of the year, only providing supplementary cooling in times of elevated external ambient conditions. Cooling in close up On the 250,000 sqft ground floor, comprising 31 separate data halls drawing a total of 32 MW, a Stulz GE system is installed. The indoor unit has two cooling components, a direct expansion (DX) cooling coil and a free cooling coil. It utilises outdoor air for free-cooling in cooler months when the outside ambient air temperature is below 20°C, with indirect transfer via glycol water solution maintaining the vapour seal integrity of the data centre. The system automatically switches to free-cooling mode, where dry cooler fans run and cool the water to approximately 5°C above ambient temperature before it is pumped through the free cooling coil. In these cooler months depending on water temperature and/or heat load demands, the water can be used in ‘Mixed Mode’. In this mode the water is directed through both proportionally controlled valves and enables proportional free-cooling and water-cooled DX cooling to work together. Crucially, 25% ethylene glycol is added to the water purely as an antifreeze to prevent the dry cooler from freezing when the outdoor ambient temperature is below zero.
In warmer months, when the external ambient temperature is above 20°C, the system operates as a water-cooled DX system and the refrigeration compressor rejects heat into the water via a plate heat exchanger (PHX) condenser. The water is pumped to the Transtherm air blast cooler, where it is cooled and the heat rejected to air.

On the 250,000 sqft top floor, 67 Vertiv EFC 450 units provide 28.5 MW of indirect free cooling, evaporative cooling and DX backup on an N+1 basis. These allow us to control the ingress of contaminants and humidity, ensuring sealed white space environments. Using this solution, real-life PUEs of 1.13 are being achieved during IST testing at maximum load.

The system works in three modes. In winter operation, return air from the data centre is cooled through heat exchange with external cold air; there is no need to run the evaporative system, and fan speed is controlled by the external air temperature. In summer, the evaporative system must run to saturate the air, reducing its dry bulb temperature and enabling the unit to cool the data centre air even at high external air temperatures. In extreme external conditions, a direct expansion (DX) system provides additional cooling. The DX systems are sized to provide partial backup for the overall cooling load and are designed for maximum efficiency with minimum energy consumption.

HPC environments
However, the cooling required for highly dense and complex HPC platforms demands bespoke build and engineering skills to ensure highly targeted cooling. Simple computer room air conditioning (CRAC) or free-air cooling systems (such as swamp or adiabatic coolers) typically do not have the capabilities required, and hot and cold aisle cooling systems are becoming inadequate for the heat created by larger HPC environments. This places increased emphasis on having on-site engineering personnel with the knowledge to design, build and install bespoke cooling systems, such as bespoke direct liquid cooling.
This allows highly efficient heat removal and avoids on-board hot spots, addressing high temperatures without the excessive air circulation that is both expensive and noisy.
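The temperature-driven mode switching described above can be sketched as a simple control rule. This is an illustration only, not the facility's actual control software: the 20°C free-cooling threshold and the roughly-5°C-above-ambient approach come from the text, while the function name, mode names and the band used for mixed mode are assumptions.

```python
def select_cooling_mode(ambient_c, free_cooling_threshold_c=20.0,
                        mixed_mode_band_c=10.0):
    """Pick an operating mode from external ambient temperature (degC).

    Illustrative sketch: the real controller also weighs water temperature
    and per-hall heat load, and the mixed-mode band here is assumed.
    """
    if ambient_c >= free_cooling_threshold_c:
        # Warm months: water-cooled DX, heat rejected via the PHX condenser
        return "dx"
    if ambient_c > free_cooling_threshold_c - mixed_mode_band_c:
        # Shoulder conditions: proportional free cooling plus DX together
        return "mixed"
    # Cool months: dry coolers alone, water held ~5 degC above ambient
    return "free_cooling"

print(select_cooling_mode(25.0))  # dx
print(select_cooling_mode(15.0))  # mixed
print(select_cooling_mode(4.0))   # free_cooling
```

In practice the controller blends modes proportionally rather than switching hard at a threshold, which is why the two control valves in mixed mode are proportionally, not binarily, driven.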
In summary, cooling efficiency has always been critical to data centre resilience and uptime, as well as to energy cost optimisation. But it now matters more than ever, even though next-generation servers are capable of operating at higher temperatures than previous solutions.

Looking to the future, with many conventional data centres consuming thousands of gallons of water a day, operators will be striving to optimise Water Usage Effectiveness (WUE), not just PUE. One such initiative will be rainwater harvesting to lower the WUE of adiabatic-evaporative systems. There will also be growing innovation around on-site renewable energy generation for power not dedicated to cooling and operating servers, along with the use of process heat for all office and back-of-house environmental control, helping drive down facility PUE.
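For readers unfamiliar with the metric, WUE is defined as annual site water usage divided by IT equipment energy, expressed in litres per kWh. A minimal sketch of the calculation; the facility figures below are invented purely for illustration.

```python
def wue(annual_water_litres, it_energy_kwh):
    """Water Usage Effectiveness: litres of site water consumed
    per kWh of IT equipment energy (lower is better)."""
    return annual_water_litres / it_energy_kwh

# Hypothetical facility: 60 million litres/year against 33 GWh of IT load
print(round(wue(60_000_000, 33_000_000), 2))  # 1.82 L/kWh
```

Rainwater harvesting reduces the numerator (mains water drawn), which is why it lowers WUE even when the adiabatic system's total water consumption is unchanged.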
Q3 2021 www.datacentrereview.com 27
Blowing the roof off conventional “A gale of creative economic destruction is blowing the roof off the conventional data centre economic model, revealing something altogether different,” says Alan Beresford, managing director of data centre cooling experts EcoCooling. We asked Alan to elaborate upon this rather mysterious statement.
here’s a whole raft of new applications utilising larger proportions of the data centre sector; AI, IoT, cloud-based hosting and blockchain processing for over 1,300 different digital currencies is increasing the need for High Performance Computing (HPC) equipment. You will no doubt have heard about tech giants Facebook, Microsoft, Google and Amazon Web Services building hyperscale data facilities. These are a far cry from conventional data centres – a new breed based around efficient compute technologies, built specifically for the service each operator is providing. These hyperscale facilities have smashed conventional metrics. They achieve very high server utilisations, with PUEs (power usage effectiveness) as low as 1.05 to 1.07 – a million miles from the average 1.8-2.0 PUE across conventional centres. To achieve this, refrigeration-based cooling is avoided at every opportunity. Built on the back of new HPC applications (such as Bitcoin mining), smaller entrepreneurial setups have adopted these high efficiency, extreme engineering practices. They are no longer the preserve of traditional hyperscale facilities, thus turning the economics of data centre construction and operation on its head.
Intensive computing
CPU-based boxes are highly flexible, able to run applications on top of a single operating system such as Unix or Windows. But being a relatively slow 'jack of all trades' means they are 'masters of none', unsuitable for HPC applications. The hyperscale centres use a variety of server technologies:

GPU: Graphics Processing Unit servers, based on the graphics cards originally designed for rendering.
ASIC: Application Specific Integrated Circuits are super-efficient, with hardware optimised to do one specific job, but cannot normally be reconfigured. The photo (pic 1) shows an AntMiner S9 ASIC, which packs 1.5 kW of compute power into a small 'brick'.
FPGA: Field-Programmable Gate Arrays. Unlike ASICs, they can be configured by the end user after manufacturing.

Extreme engineering
In the conventional enterprise or colocation data centre, you'll see racks with power feeds of 16A or 32A (4 kW and 8 kW capacities respectively), although the typical load is more like 2-3 kW. Conventional data centres are built with lots of resilience: A+B power, A+B comms, and N+1, 2N or even 2N+1 refrigeration-based cooling systems; Tier III and Tier IV on the Uptime Institute scale.

What we're seeing with these new hyperscale centres, however, is HPC servers regularly deployed at densities of 75 kW per rack, crazy levels on a massive scale. And there's no Tier III or Tier IV; in fact there's usually no redundancy at all, except maybe a little on comms. The cooling is just fresh air. Standard racks are not appropriate for this level of extreme engineering. Instead, there are walls of equipment up to 3.5m high, stretching as far as the eye can see.
Those of you who operate data centres will know that only about half the available power is actually used. Worse still, at the individual server level, utilisation can drop to single-digit percentages. These guys squeeze all their assets as close to 100% as they can: almost zero capital is spent on any form of redundancy, and direct fresh air is used for cooling.

A new dawn: Prototyping to benefit you
EcoCooling has supplied cooling solutions to one of the most ambitious and potentially significant hyperscale developments in Europe. The aim of the H2020-funded 'Boden Type DC One' project was to build a prototype of the most energy and cost-efficient data centre in the world. This created achievable standards, so that people new to the market can put together a project as efficient as, if not more efficient than, those of the aforementioned giants, such as Amazon, Facebook and Google. We aim to build data centres at one tenth of the capital cost of a conventional data centre. Yes, one tenth. That will be a massive breakthrough, and a true gale of creative economic destruction will hit the sector. One of the key components is a modular fresh-air cooling system, and we're trying to break some cost and performance records there too.
The economics of hyperscale
Whereas we all have our set of metrics for conventional data centres, the crypto guys have only one: TCO (total cost of ownership). This single measure encompasses build cost and depreciation, plus the costs of energy, infrastructure and staff. They express TCO in euro cents per kilowatt hour of installed equipment. In the Nordics, they're looking at just six to seven cents per kWh, down to around five cents in China.

However, all is not lost for operators in the UK and Europe. These servers and their data are very valuable: tens or hundreds of millions of pounds' worth of equipment in each hyperscale data centre. As a result, we are already seeing facilities being built in higher-cost countries where equipment is more secure, but they still need to follow the same extreme engineering and TCO principles.

Keep it simple
You can't build anything complicated for low cost. In this new hyperscale data world, it's all about simplicity. Brownfield buildings are a great starting point, particularly sites like former paper and textile mills, where there tends to be lots of spare power and space.
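The single TCO metric described above can be expressed directly: total annual cost divided by the energy the installed equipment would draw in a year. A rough sketch; only the five-to-seven-cent target range comes from the article, while the cost breakdown and site size below are invented for illustration.

```python
def tco_cents_per_kwh(annual_capex_eur, annual_energy_eur,
                      annual_staff_eur, installed_kw):
    """TCO in euro cents per kWh of installed equipment capacity,
    assuming the installed load runs flat-out all year."""
    hours_per_year = 8760
    total_eur = annual_capex_eur + annual_energy_eur + annual_staff_eur
    return 100 * total_eur / (installed_kw * hours_per_year)

# Hypothetical 10 MW site: EUR 2.5M depreciation, 3.2M energy, 0.5M staff
print(round(tco_cents_per_kwh(2_500_000, 3_200_000, 500_000, 10_000), 1))
# 7.1 -- at the top of the Nordic six-to-seven-cent range
```

The metric rewards exactly the behaviours the article describes: stripping redundancy cuts the capex term, and fresh-air cooling cuts the energy term, with nothing in the denominator to game.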
AntMiner S9 ASIC
You can’t build anything complicated for low cost. In this new hyperscale data world, it’s all about simplicity Pioneers One of the early leaders in the Arctic data centres, Hydro66, uses extreme engineering. In their buildings, the air is completely changed every five to 10 seconds. It’s like a -40°C wind tunnel with the air moving through at 30 miles an hour. Importantly, you’ve got to look after all this highly expensive equipment. An ASIC AntMiner might cost 3,000 Euros. With 144 AntMiners in each of the racking units, that’s almost half a million Euros of hardware. So, we need to create ‘compliant’ environmental operating conditions to protect your investment. You’re probably familiar with the ASHRAE standards for temperature and humidity. In many instances we have achieved 100% ASHRAE compliance with just two mixing dampers, a low-energy fan and a filter. There is some very clever control software, vast amounts of experience and a few patents behind this too, of course. Hang onto your hats So, to conclude: The winds of change are blowing a gale of creative economic destruction through the conventional approach to data centres. Driven by Blockchain and Bitcoin mining, automation, AI and cloud-hosting, HPC equipment in the form of GPUs and ASICs will be required to drive the data-led economies of the future. Hyperscale compute factories are on the way in. TCOs of five to seven cents (€) per kWh of equipment are the achievable targets. This needs a radically different approach: extreme engineering, absolute simplicity, modularity and low-skill installation/maintenance. Hang on to your hats, it’s getting very windy in the data centre world.
Industry Insight: Could 'used' actually mean better? There has long been a perception that buying something previously owned carries risk, or that second-hand is the 'cheap', less reliable option. But there is a new model in town that opens the door for 'old' IT equipment to be fully certified, warrantied, and generally as good as new. Here, ITRenew president Ali Fenn redefines the meaning of 'used'.
Perhaps an obvious question to begin with, but in a traditional sense, how do data centre operators define when equipment is 'used', or ready for disposal?

Data centre operators all make this judgement according to their own unique environments, equipment types, workloads, business growth and other factors. The range is generally from three to nine years, and the market is highly bifurcated. Generally speaking, hyperscale data centre operators refresh their primary workload equipment quickly, every three to four years, while the broader global enterprise and smaller cloud service provider segments trend toward longer timelines. The reason for this difference is the scale and growth of the former, which demands continuous optimisation of efficiency at the data centre level and more frequently involves new form factors and architectures than would be justified by the radically slowed Moore's Law curve, which drives efficiency at the system level. This gap is what ITRenew is addressing via circularity: the opportunity to capture lifetime value and maximise longevity in aggregate, creating longer, highly productive lifecycles for all data centre hardware.

How would you define the concept of 'circularity' within IT?

The concept of circularity in the IT sector advances the idea that the lifetimes of equipment are not linear. Instead, they should be thought of as cascading circles, or pathways, that keep compute and storage resources at their highest utility for as long as possible, and enable those assets to realise both maximum financial value and sustainability impact. Tangibly, this means cascading technology into secondary and tertiary use with as little remanufacturing and transformation as possible.

How would the data centre industry (and the planet) benefit from a more circular approach?

There are really two major value opportunities: sustainability, and the ability to access and deploy more IT infrastructure at the most advantageous cost.
ITRenew's recent Life Cycle Analysis (LCA) of standard racks of data centre equipment found that in common manufacturing/deployment/end-of-life scenarios, as much as 75% of the total carbon impact of IT equipment can come from the manufacturing phase. This means that for every rack given a second life, that carbon is avoided through the deferral of new manufacturing. With nearly 50 million servers expected to be decommissioned in the next three to four years, the aggregate potential for CO2 savings is massive.

Secondly, circularity, as reflected in recertified solutions, enables significant Total Cost of Ownership (TCO) savings, often as much as 50%. This means that enterprises can do a lot more with less and make their budgets go further. From the broader social perspective, the cost savings of circular equipment stand to significantly accelerate progress toward universal digital access, a big target of the UN SDGs.

Despite sustainability and climate change being prevalent in the data centre space for a while now, why do the majority of facilities continue to use a 'make, use, dispose' data centre model? What are the current barriers to a more circular concept?

Two primary reasons. First, data centre operators have, rightly, been predominantly focused on improving the energy efficiency and use-phase operational energy of their facilities. There is no question that lowering PUE, shifting to renewables, and leveraging carbon offset credits have had and will
continue to have a significant sustainability impact on the industry. Secondly, until recently, most IT in the hyperscale fleets consisted of proprietary OEM solutions, and hence was not suited to the recertification and reconfiguration required for circularity. However, with the more recent shift to ODM and open hardware solutions, it is now possible for ITRenew to do the necessary engineering, operational and services work to transform hyperscale innovation into solutions that are both useful and usable by broader markets, and supportable by ITRenew.
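The avoided-carbon arithmetic behind the LCA figures quoted earlier is simple: if manufacturing accounts for up to 75% of a rack's lifetime carbon, redeploying that rack defers a new build and avoids roughly that share again. A hedged sketch; the 75% share is from the interview, while the per-rack carbon figure and fleet size below are invented for illustration.

```python
def avoided_manufacturing_co2(racks_redeployed, lifetime_co2_per_rack_t,
                              manufacturing_share=0.75):
    """Tonnes of CO2 avoided by giving racks a second life instead of
    manufacturing replacements (share from ITRenew's LCA)."""
    return racks_redeployed * lifetime_co2_per_rack_t * manufacturing_share

# Hypothetical fleet: 1,000 racks at 20 t lifetime CO2 each
print(avoided_manufacturing_co2(1000, 20.0))  # 15000.0 tonnes
```

The same structure explains why the savings compound at decommissioning scale: with tens of millions of servers retiring over a few years, even a modest per-rack figure multiplies into a very large aggregate.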
Do you think the large hyperscalers and cloud providers have a responsibility to lead the way if this is to become the norm? Do smaller facilities have the resources to do this?

The hyperscalers have all made significant public statements about achieving carbon-neutral and carbon-negative goals. To succeed and fully decarbonise, comprehensive work across the supply chain will be required, including changing how their hardware is designed, sourced, used and reused. The smaller guys have a different role to play: they need to seriously consider procuring circular equipment, both for the sake of climate change and for their own bottom lines.

When it comes to e-waste, how are data centres (in general) currently disposing of this? Is there a better way?

There are actually well-established regulations and certification standards, such as WEEE, ISO and R2, that govern the space and have helped ensure that IT equipment generally flows downstream to the best possible recycling services providers and pathways. The critical next step is not just to leverage the best available recycling services and technology, but to keep more material out of the waste streams altogether. This is the goal of circularity.

Would you say within an IT environment there are more opportunities to recycle or upcycle? Or perhaps a mix of both?

In the industry, we think about this in terms of internal reuse or reclaim, as opposed to external reuse or recycling. Opportunities for the former exist at both the component (think spares) and system level, but are unfortunately minority programmes due to the nature of changing architectures within most environments.
Recycling, on the other hand, is a reasonable 'catch-all' to avoid landfill, but should be considered the 'least-worst' alternative, for want of a better phrase. The best strategy is to create solution-level second-life opportunities for racks, servers and storage, hence deferring the need for recycling.

What are the key advantages to reusing existing materials, both for a business and the planet?

Globally, we use 1.7x the Earth's resources every year, and this is growing. We have mineral shortages across the board, but especially in the electronics space, where the industry has begun to turn to deep-sea mining to bridge the gap. At the other end of the lifecycle, worldwide we create more than 50 million tonnes of e-waste annually, and this number is headed to 75 million by 2025. Reuse of systems is an essential strategy on both fronts at planetary scale.

Commercially, the cost savings and cultural value of recertified materials and products enable competitive advantage, and the supply chain adaptation helps with the achievement of carbon goals. It is also well established that companies with strong ESG policies outperform in company valuation and financial performance.

Do you believe a more sustainable approach in the data centre industry will play a key role in the UK achieving its net zero by 2050 target?

It has to. The data centre industry is a major consumer of global energy and a contributor to GHG from Scope 1, 2 and 3 emissions. A holistic approach encompassing renewable energy, technological innovation to drive efficiency and circular supply chains is the only path to net zero. We mustn't forget, data centres are operated by people, and a data centre operator from a younger generation might, for instance, be more open to making sustainable changes.

If I were an old-school data centre operator, adamant that the 'make, use, dispose' method was still king, and probably not overly concerned about the planet, what might you say to convince me otherwise? Are there any small changes I could make?
Fortunately, I haven't encountered this much; by and large, data centre operators are aware of the imperative and interested in the opportunity to take action. Doing so is harder, of course. My advice is to just start. Be willing to experiment and try something new; don't be beholden to the way it's always worked, or the solutions you've always used. The public cloud disrupted everything. Everyone, even and especially in their own environments, can now focus on the goal of maximum outcomes per dollar. Do this, and you will find your way to recertified, circular, sustainable equipment delivered without compromise.
Meeting the challenge
The cybersecurity challenge is evolving, but there are a number of simple tips for a CISO to meet that challenge, as explained by Rob Allen, European director of marketing and technical services, Kingston Technology.
The last year may have meant significant changes to the workplace, but the need to prioritise business IT security has not gone anywhere. Instead, with more employees working from home, the breadth of potential security risks has only widened. With the rapid change to working environments, there's less control over where, when and how your team handles the organisation's data. Even before this situation, teams were already moving away from strictly fixed office roles. Business travel, mobile, freelance and coffee shop working all led to colleagues being spread out geographically, taking their devices and access to company data with them. And because we're all human, those devices can be, and have been, accidentally lost or stolen.

In recent years, the potential penalties for failing to protect data have increased dramatically. Around a decade ago in the UK, headline news stories on data security mostly focused on laptops being left in taxis or trains, leading to the loss of customer or client information. Now you also need to consider GDPR legislation, which is intended to prevent misuse of EU citizens' personal information and carries stiff penalties if it is not observed: a maximum fine of €20 million or 4% of annual global turnover, whichever is greater, per incident. And although the UK may have left the EU, GDPR still applies to UK companies that continue to trade within Europe.

Besides financial penalties, the next obvious risk of lax data security is the loss of confidential data that may impact your organisation's operations. Leaks of trade secrets, financial information or unannounced plans could do serious harm to your business. And lastly, data security is about reputation too. Suffering a breach due to poor security practices or mishandling data is bad PR for any company, and customers will be more likely to come to you if they believe you can be trusted with their data.
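The upper-tier fine works as 'whichever is greater' of the two figures, so it scales with company size. A quick sketch of that arithmetic; the turnover figures below are invented for illustration.

```python
def gdpr_max_fine_eur(annual_global_turnover_eur):
    """Upper tier of GDPR administrative fines: the greater of
    EUR 20 million or 4% of annual global turnover, per incident."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

print(gdpr_max_fine_eur(100_000_000))    # 20000000 -- the EUR 20M floor applies
print(gdpr_max_fine_eur(1_000_000_000))  # 40000000.0 -- 4% exceeds the floor
```

The practical point for a CISO: above EUR 500 million turnover, the exposure grows with the business, so 'we can absorb the fine' stops being a credible position.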
Regardless of whether you've already suffered a serious data loss, it's simply good practice to make sure you're handling data securely. It's not enough, though, to rely on the company's best security expert, typically the CISO, to design a single top-down policy and expect it to solve these problems in one fell swoop. Nor is privacy solely an issue for the compliance department. Treating it that way is a sign your organisation is not taking these issues seriously enough, and this thinking trickles down from senior management to staff at lower levels, who will probably also not take security seriously. The result is that you're more likely to suffer a breach or loss of data.

For any CISO, the first (and potentially most difficult) stage of improving data security is working to change the company mindset. Security is a combined collective and individual effort, and cannot be the sole responsibility of a single person. A change of culture is key, and the incentives are better business practice, as well as avoiding financially crippling costs or fines should the problems lead to a serious data loss incident.
Use encryption
Encryption is the main line of defence against data loss. If data is securely encrypted and the device it's kept on is lost or physically accessed by a third party, they should not be able to read that data. You need to be able to trust the strength of that encryption. That's why standards such as FIPS 140-2 (Federal Information Processing Standards) are commonly quoted: a tried and tested US standard for cryptography that certifies an encrypted device meets well-defined security requirements.

On Windows computers, full disk encryption can be activated by enabling a simple option. However, full disk encryption handled by the computer itself results in a performance loss, as every file must be decrypted by the host computer when it is read and encrypted whenever it is written. A better solution is to opt for storage devices with built-in hardware-based encryption. In SSDs this is handled invisibly to the host computer: you enable the same option as you would for full disk encryption, but performance is drastically improved, with negligible waiting times.

Moving data between home and the office usually relies on removable USB storage. Opting for the cheapest off-the-shelf USB storage that you might use at home saves a small amount of cash, but it's better practice to choose a storage device with built-in encryption. With removable storage, hardware encryption and decryption are handled by a dedicated chip on the drive itself, happening invisibly so the user does not have to remember to turn it on. If you opt for a solution that requires you to install software and enter a password to access the drive, beware of keyloggers and other malware that capture keyboard inputs; some devices rely on a virtual keyboard to mitigate this risk.
Avoid sloppy data handling practices
It's all too easy to take shortcuts, especially when deadlines need to be met, but it's these shortcuts that lead to data handling mistakes. Bad practices such as using personal email accounts for work purposes, reusing passwords or bypassing security measures are significant risk vectors. Educating staff about these practices and why they should be avoided is a crucial first step towards getting colleagues to start considering data security.

Take no chances with data collection
GDPR requires companies to collect data only with consent, for a stated purpose. Be careful that you're adhering to this and that data is not being collected, even unintentionally, if the user has not agreed to it.

Develop a secure working culture
Lastly, consider the overall working culture and whether it values security enough. Colleagues need to be able to highlight when their organisation is making security mistakes, rather than ignoring them and sweeping them under the carpet.
DataQube Global’s novel approach to edge data centres and 5G connectivity
DataQube Global, a disruptive start-up in the digital infrastructure market, is spearheading a move to improve broadband connectivity in rural areas by removing the need for fibre cabling at household level. This new approach is set to catalyse the seamless handling of high-volume data in readiness for 5G going mainstream. The company, together with a consortium of telcos, data centre experts and OSPs, has developed a self-contained solution, DataQube, that will enable near real-time data analysis and
device interconnectivity at the edge of the network in a robust, modular unit that is scalable according to requirements. The solution is also expected to improve broadband connectivity for rural communities, with field trials currently underway in a Cambridge 5G testbed project to ascertain this.
5G rollouts, along with IoT and machine learning, are expected to speed up breakthrough innovations and expand the deployment of autonomous tech, including UAVs (unmanned airborne vehicles), smart cities, shared transport networks and robotics.

The new telecoms network is also intended to reduce superfast broadband inequalities, thanks to its ability to simultaneously support over one million devices per sq km, compared to the 60,000-odd devices of current 4G networks.

DataQube Global • email@example.com www.dataqube.global
4D Data Centres helps PeaSoup disrupt the UK cloud market by embracing immersion cooling technology
4D Data Centres, a UK-based infrastructure management provider, has partnered with PeaSoup, a pioneering UK cloud provider with a heritage of disrupting the cloud market through innovative technologies. In this partnership, 4D is providing colocation services to host a highly energy-efficient 'pod' that uses immersion cooling technology. Both companies share a vision to provide the UK cloud and data centre markets with low carbon footprint solutions. This innovative immersion pod will be
colocated at 4D's award-winning data centre near Gatwick airport. The deployment will enable PeaSoup to provide HPC cloud services from a Tier III green data centre, a service called the Eco Cloud. Jack Bedell-Pearce, CEO at 4D Data Centres, commented: "Colocation data centres are the green option for eco-conscious businesses, and 4D is committed to its responsibilities towards the environment. "Immersion cooling is a solution that fits perfectly with our sustainable strategy for a number of reasons. Aside from reducing risks of overheating, immersion cooling's
efficiency means it is able to cool high-density computer systems without increased power consumption.” 4D Data Centres • 020 3962 0399 www.4d-dc.com
The world's most scalable database now runs on any Kubernetes
DataStax has announced that K8ssandra, an open-source distribution of Apache Cassandra on Kubernetes, is available on any Kubernetes environment, including distro-specific integrations for Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). In November 2020, DataStax released K8ssandra to the open-source community, where its cass-operator was selected by the Apache Cassandra community as the basis for developing a single, community-based operator, with contributions from Orange and others. K8ssandra combines the flexible, cloud-native benefits of Kubernetes together with the
global scale of Cassandra – the NoSQL database used by leading enterprises including Apple, Instagram, Netflix, Sky, Spotify, TikTok, Uber
and Yelp. It is uniquely positioned to provide a cloud-native database for modern data applications. Sam Ramji, chief strategy officer at DataStax, commented: "It's exciting to see the community engagement behind K8ssandra and the many other projects that are breaking down the barriers to running data on Kubernetes." According to a 2020 CNCF survey, the use of containers in production has increased to 92%, up from 84% the previous year and up 300% from 2016, while Kubernetes use in production has increased to 83%, up from 78% in 2019. DataStax • 020 3514 8402 www.datastax.com
Vantage Data Centers: Renewable energy now an option for all customers
Vantage Data Centers is now providing access to renewable energy options at each of its North American and European campuses to enable customers to reduce their carbon emissions. Additionally, the company has hired two experts to lead its environmental sustainability commitments across the globe. Four of the company's campuses are currently powered by more than 99% renewable energy (hydro, tidal and wind) for critical IT load through Vantage's utility partners, while the other campuses provide access to green power purchases and renewable energy credits
through local utility partners. Vantage actively works with energy providers, customers and industry groups to advocate and invest in additional renewable energy
options globally. As part of an expanded focus on environmental goals, the company named Amanda Sutton as senior director of sustainability to lead Vantage’s global sustainability program to lessen its environmental impacts worldwide. In addition, Neal Kalita will serve as the director of power and sustainability with a focus on the company’s European campuses. Sutton and Kalita join a global team overseeing environmentally responsible facility design, construction and operations. Vantage Data Centers • 0163 3988 021 www.vantage-dc.com
New Vertiv technologies to unlock a potential 1 million Euros in revenue for customers in Ireland
Vertiv, a global provider of critical digital infrastructure and continuity solutions, has launched its first single-vendor solution to enable grid flexibility, power stability and demand management. When the innovative Dynamic Grid Support feature for the Vertiv Liebert EXL S1 uninterruptible power supply (UPS) is coupled with the Vertiv HPL lithium-ion battery cabinet, the complete system represents the first single-vendor integrated solution on the market combining grid support features, UPS and lithium-ion batteries.
The new Dynamic Grid Support feature, combined with Vertiv UPS and lithium-ion battery technologies, can unlock revenues approaching one million euros per year for 10 MW of flexible load for customers in Ireland today, according to demand response leader Enel X. Available now across Europe, the Middle East and Africa (EMEA), the Vertiv Liebert EXL S1 with Dynamic Grid Support allows energy-intensive industries to utilise UPS systems in a proactive way. Using the UPS with Dynamic Grid Support and lithium-ion batteries during times of grid instability can help secure the supply
of electricity by addressing the imbalance related to renewable power generation, in turn supporting global sustainability. Vertiv • +44 023 8061 0311 www.vertiv.com
Schneider unveils its Galaxy VL, the most compact 3-phase UPS in its class
Schneider Electric has announced the global launch of the Galaxy VL 200-500 kW (400V/480V) 3-phase uninterruptible power supply (UPS), the newest addition to the Galaxy family. Available worldwide, this highly efficient, compact UPS offers up to 99% efficiency in ECOnversion mode for a full return on investment within two years (model dependent) for medium and large data centres and commercial and industrial facilities. With data centre floor space at a premium, the Galaxy VL’s compact design is half the size of the industry average, at 0.8 m².
Its modular and scalable architecture enables data centre professionals to scale power incrementally, from 200 kW to 500 kW with 50 kW power modules, providing flexibility to grow as their business demands. With Galaxy VL, Schneider Electric introduces Live Swap, a pioneering feature which delivers a touch-safe design throughout the process of adding or replacing the power modules while the UPS is online and fully operational, offering enhanced business continuity and no unscheduled downtime. Schneider Electric • 0870 608 8608 www.se.com
Q3 2021 www.datacentrereview.com 37
Stop testing, you might find something wrong Is a ‘zero testing’ environment the holy grail of software engineering, or a viable reality? Anbu Muppidathi, president & CEO (Designate) at Qualitest Group, elaborates.
The pinnacle of quality engineering (QE) is not finding bugs, but eliminating any possibility of a bug. Quality issues kill the customer experience instantly and can eventually destroy a brand’s hard-earned equity; resurrecting a business’ brand is significantly costlier than fixing the underlying technical issues. Hence, every company should adopt mature quality engineering processes early in the software engineering lifecycle, so that the promised business outcomes are delivered consistently throughout the customer journey.

The quality of software engineering processes can easily be judged from the quality of the issues unearthed in end-of-lifecycle testing. If anything is found during testing, then there are opportunities for improvement. Predicting and avoiding issues, rather than finding and fixing them, should be the ultimate objective.

The field of quality engineering has advanced far more than most people imagine. When cutting-edge artificial intelligence (AI), machine learning (ML) and analytics technologies are applied to test technology, there is a world of possibility. Today, ML and AI have paved the way for all kinds of prediction techniques, so that engineers can avoid potential quality issues as they develop software.

Emerging methodologies have increased the release velocity of customer applications, and businesses worry about cost, quality, speed and customer experience in every one of these releases. Mature automation is key to continuous deployment: when the power of technology and deep domain knowledge come together in testing, the effectiveness of quality goes up. Consistently measuring test effectiveness builds the confidence to certify the product as 100% error free, so that there is no need for end-of-lifecycle testing. Such a ‘zero testing’ environment is the ultimate nirvana that the future of software engineering can achieve.
But zero testing is the result of early and more frequent testing in the software development lifecycle. It is also a status that is achieved over a period of time after multiple iterations of release cycles.
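Early, frequent testing of this kind is often called ‘shift-left’ testing: the check lives next to the code and runs on every change, catching defects at the cheapest possible point. As a minimal illustrative sketch (the function and values here are hypothetical, not drawn from any product mentioned in this article):

```python
# Hypothetical shift-left example: a unit test written alongside the
# code it exercises, run on every change rather than at the end of
# the lifecycle.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (0-100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0
    assert apply_discount(50.0, 100) == 0.0
    # An invalid input must be rejected, not silently miscomputed.
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range percent")

test_apply_discount()
```

Run at every commit, a suite of such tests is what makes the later release cycles progressively quieter.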
The foundation for zero testing is built on the following principles:

Agile maturity of the organisation
As every company modernises its IT, unlocking the value of these emerging technologies requires a fundamental shift in mindset and the adoption of an agile approach across the organisation. The model should bring customers into the process early, so that the consumer persona can be assessed early in the lifecycle and potential usability issues avoided. Studies suggest that more than 85% of issues can be avoided by conducting group tests before the product rollout.

Effectiveness of enterprise architecture
How applications are designed seriously impacts quality engineering efficiency. A microservices architecture supports a more sophisticated quality engineering model than a monolithic design: when automation becomes more modular, coverage and leverage can be better administered.

Advanced quality engineering discipline
Automation and simulation are the quality twins that help guarantee product quality. Automation preserves application knowledge for reuse. Simulating potential users, data, devices, servers and usability scenarios using virtualisation technologies allows the application to be tested in every possible scenario. That is the only way to uncover the full range of quality issues that could arise, both now during development and later during usage.

Quality of ‘feedback’ to the system
It is important to learn early about issues that could go wrong. Each of these learnings should be fed back into the system so that testing becomes progressively more efficient. Applying current knowledge, and the knowledge that continues to be learned, should be automatic; manual dependencies will not help. Aligning learning and automation so that every iteration of development and testing gets better is crucial.

Self-testing & self-healing
Continuous quality can only be achieved through continuous testing. Embedding robotic components within applications to digitise the quality gates, so they can monitor, learn, self-test and self-heal, helps certify quality for the life of the product. This requires investment in ML/AI in testing.
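The simulation principle above can be sketched, in miniature, with a test double that stands in for an unavailable dependency. In this hypothetical Python example (the service, routes and function names are assumptions for illustration, not from the article), a mock ‘virtualises’ a remote inventory service so the business logic can be tested without the real server:

```python
from unittest import mock

# Hypothetical code under test: it depends on a remote service that we
# cannot (or do not want to) call during testing.
def fetch_stock_level(client, sku: str) -> int:
    response = client.get(f"/inventory/{sku}")
    return int(response["quantity"])

def is_in_stock(client, sku: str) -> bool:
    return fetch_stock_level(client, sku) > 0

# Service virtualisation in miniature: a mock stands in for the real
# client and returns canned data for the scenario we want to exercise.
virtual_service = mock.Mock()
virtual_service.get.return_value = {"quantity": 0}

assert is_in_stock(virtual_service, "SKU-123") is False
# The double also records how it was used, so the interaction itself
# can be verified.
virtual_service.get.assert_called_once_with("/inventory/SKU-123")
```

Full-scale service virtualisation tools extend the same idea to simulated users, devices and data at volume, which is what makes exhaustive scenario coverage practical.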
As a quality engineering enthusiast, I would never advise against testing. But testing more, and testing earlier, can help guarantee quality for the life of the product, so that no one needs to test it again.
New event for data centre operators, customers and suppliers
Tech, skills, security, sustainability and more

The data centre industry has supported businesses and communities during the extraordinary events of 2020/2021, and its value can no longer be underestimated.

This brand new event is designed for data centre operators, their enterprise customers, hyperscalers and big tech, and the vendors and specialists which supply the industry.

We will be considering a range of business-critical issues, including automation and emerging technologies (AI, edge computing, machine learning, IoT), addressing the skills gap, cybersecurity challenges, power & cooling, sustainability and net zero targets, and the impact of 5G and smart city evolution.

For commercial and advertising enquiries, please contact:
Sunny Nehru, Group Account Director, +44 (0)207 933 8974, firstname.lastname@example.org
Kelly Baker, Account Manager, +44 (0)20 7933 8970, email@example.com

In Partnership With

Find out more at: