DCR Q3 2022

www.datacentrereview.com

HOW RELIABLE IS YOUR BACKUP POWER? The critical role of regular load testing in data centres
www.crestchicloadbanks.com

Cybersecurity: Keeping up with the evolution of cybercrime

Event Preview: Get a 'Critical Insight' into the sector at this brand-new event – virtual event, 22-23 November 2022

Industry Insight: Developing an effective sustainability strategy

Contents

News
• Editor's Comment: Levelling up?
• News: Latest news from the sector.

Features
• DCIM: Schneider Electric's Marc Garner explains why DCIM must evolve to meet the era of IT infrastructure everywhere.
• Cybersecurity: Ross Brewer at AttackIQ unpacks how data-driven insight for CISOs and security team leaders can benefit the entire organisation.
• Event Preview: Get a 'Critical Insight' into the data centre sector at a brand-new event, coming this November.
• UPS & Standby Power: Could locked-in, long-term UPS maintenance contracts be beneficial for data centres, asks Centiel's Louis McGarry.
• Networking: Alan Stewart Brown at Opengear explains why the network needs to be at the heart of digital transformation.

Regulars
• Industry Insight: Developing and executing an effective sustainability strategy has become a top-tier imperative for operators of digital infrastructure, says Uptime Institute's Andy Lawrence.
• Products: Innovations worth watching.


Editor's Comment

Levelling up?

At the time of writing, the UK waits with bated breath for the big reveal of our next Prime Minister. It's been something of a fraught leadership race, after the even more fraught and somewhat baffling final death throes of Boris Johnson's premiership. But at the start of this new(ish) chapter, I'm inclined to ask – will anything actually change?

There seem to be so many plates spinning at the moment that we're in a constant state of firefighting. In May, the DCMS called for views from the industry on how to increase the security and resilience of UK data centres and online cloud platforms as part of the government's National Data Strategy. The government taking digital infrastructure into account in its 'Levelling Up' of the country is an acknowledgement of the importance of the sector and its role in making the UK a player on the global stage.

But there's so much work to be done. The sector is grappling with the need to become more environmentally friendly, while at the same time trying to meet an ever-increasing demand for capacity. All the while, home and hybrid working, new cybersecurity threats, demand for better connectivity, the adoption of AI – and of course that ubiquitous skills shortage – are hurdles along the way.

These are the sorts of topics that will be discussed and debated at the inaugural Critical Insight. In association with DCR, this brand-new, two-day virtual event is headed to a screen near you this November. You can get a first look at the agenda and register for the event at critical-insight.co.uk – and keep your eyes peeled for new speaker and sponsor updates.

Signing off with my usual refrain, you can reach out to me at kayleigh@datacentrereview.com, and don't forget to join us on Twitter (@dcrmagazine) and on LinkedIn (Data Centre Review).

Kayleigh Hutchins, Editor

EDITOR

Kayleigh Hutchins kayleigh@datacentrereview.com

CONTRIBUTING EDITOR

Jordan O’Brien jordano@sjpbusinessmedia.com

DESIGN & PRODUCTION

Alex Gold alexg@sjpbusinessmedia.com

GROUP ACCOUNT DIRECTOR

Sunny Nehru +44 (0) 207 062 2539 sunnyn@sjpbusinessmedia.com

ACCOUNT MANAGER

Kelly Baker +44 (0)207 062 2534 kellyb@datacentrereview.com

GROUP COMMERCIAL DIRECTOR

Fidi Neophytou +44 (0) 7741 911302 fidin@sjpbusinessmedia.com

PUBLISHER

Wayne Darroch

PRINTING BY
Buxton

Paid subscription enquiries: subscriptions@electricalreview.co.uk
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262

Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher's discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register

Electrical Review is published by

SJP Business Media, 2nd Floor, 123 Cannon Street, London EC4N 5AU
0207 062 2526

Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher, SJP Business Media.

ISSN 0013-4384 – All editorial contents © SJP Business Media

Follow us on Twitter @DCRmagazine

Join us on LinkedIn





News

The latest highlights from all corners of the tech industry.

Climate Neutral Data Centre Pact proposes new water metrics

The Climate Neutral Data Centre Pact (CNDCP) has proposed new metrics for water conservation to the European Commission.

Established in 2021, the CNDCP is a self-regulatory initiative currently backed by 74 data centre operators and 23 associations with the aim of climate neutrality. It holds progress meetings with members of the Directorate General for Communications Networks, Content and Technology (DG CNECT) and the Directorate General for the Environment (DG ENV) every six months, the third of which took place in June 2022.

With water an increasingly common medium for data centre cooling, the most recent meeting saw the CNDCP propose a water use limit of 0.4 litres of water per kilowatt-hour of compute power (0.4 l/kWh). According to the CNDCP, this metric takes into account the diverse range of technologies, climates and types of data centre building to ensure that it is technology and location neutral.

The proposed metric will need to be achieved at data centres operated by pact signatories by 2040, taking into consideration the lifecycle of current cooling systems and the embedded carbon cost of early replacement. It also aims to prevent construction of any new data centres that would be unable to meet the agreed metric.

The CNDCP has also established two new working groups to define targets for recycling and reuse as part of a circular economy, and to establish metrics for energy efficiency. Progress of these will be reported at the next update meeting, planned for November 2022.
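The arithmetic behind such a metric is straightforward. Below is a minimal sketch (Python, with invented figures – not CNDCP or operator data) of how a site might be checked against the proposed limit:

```python
# Hypothetical check of a site against the CNDCP's proposed water-use limit.
# All figures are invented for illustration; they are not CNDCP or operator data.

CNDCP_LIMIT_L_PER_KWH = 0.4  # proposed limit: litres per kilowatt-hour of compute

def water_use_intensity(litres_consumed: float, it_energy_kwh: float) -> float:
    """Water consumed per kWh of compute power, in l/kWh."""
    return litres_consumed / it_energy_kwh

# Example: a site consuming 1.2 million litres against 4 GWh of IT load.
intensity = water_use_intensity(1_200_000, 4_000_000)
verdict = "within" if intensity <= CNDCP_LIMIT_L_PER_KWH else "over"
print(f"{intensity:.2f} l/kWh - {verdict} the proposed limit")  # 0.30 l/kWh - within
```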

EQUINIX SIGNS A SECOND PPA WITH NEOEN & PROKON

Equinix has signed a power purchase agreement (PPA) with renewable energy producer Neoen and project developer Prokon for at least 42MW of wind power. The PPA will provide renewable energy for Equinix's International Business Exchange (IBX) data centres in Finland.

Under the 10-year agreement, Equinix will purchase 80% of the green energy set to be produced by the Lumivaara wind farm, which will comprise nine wind turbines and provide a total capacity of at least 53MW. Construction of Lumivaara is scheduled to begin in 2023, with commissioning to follow in early 2025. Neoen owns 80% of the project, with Prokon, the original developer, owning the remaining 20%.

This is the second PPA signed between Equinix, Neoen and Prokon, and is Neoen's eighth corporate PPA in Finland since 2018.

NEW GOVERNMENT POLICY ISSUED ON DATA CENTRES IN IRELAND

A revised 'Statement on the Role of Data Centres in Ireland's Enterprise Strategy' has set out guidance for development of new facilities in the country. The policy states that, while new developments will not be banned in Ireland, a set of sustainability-focused regulations should be adhered to by potential developers.

In a statement, the government said, "we must align the twin transitions which are both digital and green," claiming that data centres are responsible for about 14% of Irish electricity use. The new guidance "adopts a set of principles to harness the economic and societal benefits that data centres bring, facilitating sustainable data centre development that adheres to our energy and enterprise policy objectives."

The statement outlines several new principles, encouraging developments with strong economic activity and employment; efficient energy use, using available capacity and alleviating constraints; adoption of renewable energy; colocation with a renewable generation facility or storage facility; decarbonised design; and the provision of community engagement. New data centre developments not adhering to the guidance "would not be in line with government policy."

This follows previous concerns raised about the effect of data centres on the country's energy grid, with EirGrid previously halting plans for a number of potential data centre sites.




PROXIMITY OPENS EDGE COLO DATA CENTRE IN BRISTOL


Proximity Data Centres has opened its latest facility, Proximity Edge 9, in Bristol. Proximity Edge 9 comprises a 90,000 sq ft facility built to Tier III standards, offering 4MW of capacity with the potential to expand to 20MW.

Proximity Edge 9 is the latest in Proximity's expanding network of regional edge data centres, which currently covers the North, North West, Midlands, Thames Valley, South West and South Wales. According to the company, a total of 20 sites will be available nationwide within the next 12 to 18 months.


Iceotope secures £30m funding from global syndicate

Iceotope Technologies has won a £30 million funding round from a group of global investors led by Singapore impact private equity firm ABC Impact. The aim of the funding is to drive the reduction of carbon emissions produced by the data centre industry, and it includes investment from nVent, SDCL Energy Efficiency Income Trust plc, Northern Gritstone, British Patient Capital, Pavilion Capital, and existing investor, Edinv.

nVent will also offer new modular integrated solutions for data centres, edge facilities, and high-performance computing applications as part of a strategic alliance with Iceotope, and support a rapid deployment of Iceotope's technology through its suite of data centre solutions. According to Iceotope, its precision immersion cooling solutions offer up to 96% water reduction, up to 40% power reduction, and up to 40% carbon emissions reduction per kW of ITE.

Green Mountain acquires data centre in London

Green Mountain has acquired a data centre and the land adjacent to the site in Romford, East London. Azrieli Group, the parent company of Norway-based Green Mountain, signed a Share Purchase Agreement for the acquisition of the properties owned by Infinity SDC.

The site, currently capable of supporting up to 40MW of IT load, will be upgraded and re-branded as Green Mountain, and will be the company's first site of operation outside of Norway. According to Green Mountain, it has plans to expand service capacity while also modernising the facility in line with its sustainability commitments. The site is already supported by 100% renewable energy.

Completion of the acquisition is subject to certain conditions and is expected to take place before the end of the year. CBRE represented Infinity SDC in its capacity as sole financial advisor.

KOHLER MAKES GENERATORS HVO FUEL COMPATIBLE

Kohler Power Systems has announced that its entire range of diesel generators is able to run on Hydrotreated Vegetable Oil (HVO). Kohler said in a statement that no adaptations to installed generators are needed, with HVO and fossil diesel sharing similar characteristics that allow the two fuels to be mixed directly in the tank without issue.

As a result, HVO can be used immediately as the sole fuel supply for all Kohler diesel generators, allowing for the immediate rollout of renewable fuel to customers. HVO renewable fuel is highly stable, with no sensitivity to oxidation, so it can be stored long-term. It is also 90% carbon neutral and sourced entirely from waste products.



SPONSORED FEATURE

How reliable is your backup power? The critical role of regular load testing in data centres

Paul Brickman, Commercial Director at Crestchic, a specialist manufacturer of load banks, explains the importance of regularly testing a backup power system and asks whether data centres are doing enough to help mitigate the risks of power outages.

Cost, frequency, and severity of data centre downtime are increasing, according to extensive research from the Uptime Institute. Recently published data revealed that one in five organisations reported suffering a 'serious' or 'severe' outage in the past three years – a 'slight upward trend in the prevalence of major outages'. More than 60% of failures now result in losses of at least $100,000 in total, up from 39% in 2019. The share of outages that cost more than $1 million also increased from 11% to 15% over that same period. Data from the Institute's 2020 global survey points to on-site power failures as the biggest cause of 'significant outages'.

Maintaining uptime has long been a primary concern for data centre engineers. Consequently, most data centres have backup power in place, designed to 'kick in' should the primary power source fail – a positive step for any mission-critical environment. Less positive is the lack of regular testing taking place in facilities with backup power, meaning risk is still present should the primary power source break down.

Testing UPS systems, backup generators and server heat-load testing

Where unplanned downtime is likely to be costly or even devastating to a business' financial stability, having backup power such as a generator is crucial. Wherever power is generated, there is also a need for a load bank – a device that is used to create an electrical load that imitates the operational or 'real' load that a generator would use in normal operational conditions.


In any data centre environment, load banks may have multiple roles:

• resistive-only load banks, typically up to 300kW, for heat load testing;
• rack-mounted server emulators for heat load testing;
• capacitive load banks to test with the leading power factor often associated with servers;
• multi-megawatt, medium voltage load bank packages to test and synchronise multi-genset systems on a common bus with a lagging power factor;
• DC load banks to test UPS systems for close battery analysis and discharge performance; and
• resistive-reactive load banks for testing the whole system operation in an emergency change-over scenario.

In short, a load bank is used to test, support, or protect a critical backup power source and ensure that it is fit for purpose in the event that it is called upon.
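For a rough sense of the sizing arithmetic behind resistive-reactive testing, the sketch below applies the standard power-factor relationships (kW = kVA × pf); the genset rating is invented for illustration and is not a Crestchic specification:

```python
import math

def load_bank_components(genset_kva: float, power_factor: float = 0.8):
    """Split a genset rating into the resistive (kW) and reactive (kVAr)
    load needed to exercise it at the stated power factor."""
    kw = genset_kva * power_factor                          # real power component
    kvar = genset_kva * math.sin(math.acos(power_factor))   # reactive component
    return kw, kvar

# Example: testing a 2,000 kVA standby genset at 0.8 pf lagging.
kw, kvar = load_bank_components(2000)
print(f"Resistive load: {kw:.0f} kW, reactive load: {kvar:.0f} kVAr")
# -> Resistive load: 1600 kW, reactive load: 1200 kVAr
```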

What does good look like?

It's no surprise that the data centre sector's reliance on backup power is on the up. The onus is often on the site manager or maintenance teams to ensure the equipment that provides this power is reliable, well-maintained, and fit for purpose. However, there still remains an astonishing number of data centres that fail to regularly test their backup power system, despite it lying dormant for the majority of the year. Instead, they put their trust in fate, hoping that should a power outage occur, the backup system will activate without fail. With the cost of downtime rising, data centres need to ensure their mission-critical power systems are fully tested.

Why factory testing is not enough

UPS systems and backup generators are typically tested at the factory as part of the manufacturing and quality testing process. However, on-site climatic conditions such as temperature and humidity often vary between locations. These variations in environment, combined with the impact of lifting, moving and transporting sensitive equipment, can mean that the manufacturer-verified testing may be affected by on-site conditions or even human intervention during installation. For this reason, it is absolutely critical that backup power systems are commissioned accurately and tested in-situ in actual site conditions using a load bank.

Backup power testing best practice

A robust and proactive approach to the maintenance and testing of the power system is crucial to mitigate the risk of failure. However, it is vital that this doesn't become a tick-box exercise. Implementing a testing regime that validates the reliability and performance of backup power must be done under the types of loads found in real operational conditions. Ideally, all generators should be tested annually for real-world emergency conditions using a resistive-reactive 0.8pf load bank. Best practice dictates that all gensets (where there are multiple) should be run in a synchronised state, ideally for eight hours but for a minimum of three. Where a reactive-only load bank is used, testing should be carried out multiple times per year at three hours per test. In carrying out this testing and maintenance, fuel, exhaust and cooling systems and alternator insulation resistance are effectively tested, and system issues can be uncovered in a safe, controlled manner without the cost of major failure or unplanned downtime.

Why is resistive-reactive the best approach?

Capable of testing both resistive and reactive loads, this type of load bank provides a much clearer picture of how well an entire system will withstand changes in load pattern while experiencing the level of power that would typically be encountered under real operational conditions. The inductive loads used in resistive/reactive testing will show how a system will cope with a voltage drop in its regulator. This is particularly important in any application which requires generators to be operated in parallel, where a problem with one generator could prevent other system generators from working properly, or even cause them to fail to operate entirely. This is something which is simply not achievable with resistive-only testing.

Secure your power source

The importance of testing is being clearly recognised in many new data centres, with the installation of load banks often being specified at the design stage rather than being added retrospectively. Given that the cost of a load bank is typically only a fraction of that of the systems which it supports, this makes sound commercial sense and enables a preventative maintenance regime, based on regular and rigorous testing and reporting, to be put in place from day one.

By adopting a proactive testing regime, data centres can take preventative action towards mitigating the catastrophic risk associated with power loss. For more information on using a load bank to ensure power resilience for your facility, visit www.crestchicloadbanks.com or speak to our team on +44 (0)1283 531



DCIM

The next evolution

Marc Garner, VP, Secure Power Division, Schneider Electric UK and Ireland, explains why DCIM must evolve to meet the era of IT infrastructure everywhere.


Today our dependency on digital infrastructure shows no sign of abating. Driven by factors such as the proliferation of smart devices, the emerging availability of 5G networks, and the growth of the Internet of Things (IoT), the volume of digital information surging across the digital economy continues to increase at a rapid rate.

Little of this data is permanently stored on phones, PCs or IoT devices. On the contrary, it is stored in data centres and, in many cases, accessed remotely. Given the always-on nature of the digital world, it is essential that such data centres are secure, sustainable, and resilient, providing 24/7 accessibility to data. Gartner predicts that by 2025, 75% of enterprise-generated data will be created and processed outside a centralised data centre or cloud.

The demands of hybrid IT have required data centres to undergo significant evolution in terms of design, deployment and operations. For instance, hyperscale data centres endure, but requirements for low latency connectivity and data availability for use in TV, streaming, social media and gaming platforms have driven more data centres to the edge of the network. Additionally, the concerns of data sovereignty, security, location and privacy – added to the need for businesses to react quickly to emerging market opportunities – have produced a plethora of new data centre architectures, many of which are smaller, more distributed and with the attendant problems of uptime, remote management and maintenance.



The evolution of management software

From the earliest days of digitisation, software has been used to monitor and manage digital infrastructure. Today, we describe such software as Data Centre Infrastructure Management (DCIM), and in reality, we have reached the third generation of this technology.

In the 1980s, at the dawn of the server era, the infrastructure needed to provide resilience and continuity to hosted applications consisted of little more than racks and uninterruptible power supplies (UPS), with rudimentary tools to monitor such systems and alert users in the event of a power outage. Such tools were not called DCIM at the time, but were effectively the first examples of the category. With hindsight, we can refer to them as DCIM 1.0.

In the 1990s, the heyday of the dot-com era spurred the growth of larger data centres and cloud-distributed software. The industry chose to consolidate core IT infrastructure in purpose-built data centres, which brought a new set of management challenges. These included more reliable cooling of high-density racks, managing space effectively and keeping energy costs to a minimum. The latter issue in particular forced operators to pay greater attention to efficiency and drove the development of metrics such as power usage effectiveness (PUE) to benchmark these efforts.

In light of this, management software evolved into a phase we can call DCIM 2.0. Here, the monitoring of performance data from numerous infrastructure components – including racks, power distribution units (PDU), cooling equipment and UPS – was used to provide insights to decision-makers, whereby data centres could be designed, built or even modernised for greater efficiency and reliability. Space utilisation was also a key challenge addressed, as was managing vulnerabilities with diligent planning, modelling and reporting to ensure resiliency. Such tools were mainly focused on large data centres, containing highly integrated and consolidated equipment typically from a handful of vendors. These data centres were likely to have on-site personnel and IT management professionals with highly formalised security procedures. Importantly, the software was typically hosted on premises and, frequently, on proprietary hardware.

The era of DCIM 3.0

With the emergence of hybrid IT and edge computing, data centre software has had to evolve again to meet the new challenges posed to owners, operators and CIOs. HPE states that while in 2000, enterprise software was entirely hosted at the core of the network, by 2025, 20% of IT will be hosted in the core, 30% in the public cloud, and 50% at the edge. For those in the era of infrastructure everywhere, it's clear to see that the data centre environment has become increasingly complex and difficult to manage. One might even consider that everything has become, in part, a data centre.

New research from IDC found the chief concerns of edge deployments were managing the infrastructure at scale, securing remote edge facilities and finding suitable space, with attendant facilities to ensure resilience and security at the edge. Moreover, between 2014 and 2021 there was a 40% increase in companies that have been compromised by a cyberattack. The pandemic, for example, forced people to work remotely and brought things into sharp focus. Now the data centre itself is not the only critical point in the ecosystem. One's home router, PC, or an enterprise

network closet is as mission-critical a link in the chain as a cloud data centre, with its strict security regime and redundancy. For many senior decision-makers, managing energy at distributed sites is also going to be a bigger challenge than in traditional data centres: Schneider Electric estimates that by 2040, total data centre energy consumption will be 2,700 TWh, with 60% coming from distributed sites and 40% from data centres.

Resilience and sustainability

Today, distributed mission-critical environments need the same levels of security, resilience and efficiency across all points of the network. To realise this, a new iteration of management software is required, which we can call DCIM 3.0. Recognising that the role of Chief Information Officer (CIO) in many companies has become increasingly business-focused and strategic, DCIM 3.0 will equip these decision-makers with insights into strategic issues – including where technology can best be deployed, how efficiently and sustainably it can be operated, and how it can be managed remotely, without loss of resilience.

In some respects, this requires a greater use of artificial intelligence and machine learning to glean actionable information from the data amassed by IoT sensors. It will also require greater standardisation of both software tools and hardware assets to offer ease of management and faster speed of deployment. Further, increased customisation and integration will be key to making the hybrid IT environment resilient, secure, and sustainable. Customers also seek to deploy management tools in several ways. Some demand on-premises deployments, others insist on private cloud implementations, whereas others are happy to trust the public cloud. All methods must be supported to make DCIM 3.0 a reality.

Ultimately, the issue of environmental sustainability will become increasingly important due to customer demand and government regulation. As well as operational data, DCIM 3.0 tools will have to support decisions such as how to source power from renewable sources, how to dispose of end-of-life products and how to manage the overall carbon footprint of not just the IT infrastructure, but the enterprise as a whole. Right now, DCIM 3.0 is still in its infancy, although many of the above capabilities are already available. To deliver on the promise of DCIM 3.0, however, we must learn the lessons of the past and evolve DCIM to support a new generation of resilient, secure, and sustainable data centres.



DCIM

Alive and kicking?

Nick Ewing, Managing Director at EfficiencyIT, asks – are reports of DCIM software's demise exaggerated?

Whatever happened to data centre infrastructure management (DCIM)? Rewind the clock 10 years and DCIM was touted as the next big thing – a universal panacea for many of the data centre industry's most pressing challenges. In fact, Gartner predicted in 2013 that within five years, DCIM would be the latest major technology and vendor opportunity disrupting our industry. Analysts and commentators claimed it would streamline operational efficiency, help end-users monitor and reduce energy consumption and maximise reliability – all while providing a tangible return on investment (ROI) and the ability to manage large, disaggregated IT portfolios with ease.

Sadly, however, DCIM failed to live up to its earliest expectations, the hype curve flatlined and its breakthrough failed to materialise. While DCIM has proved a raging success for some data centre managers, it has unfortunately failed to meet the expectations of others, and where some found major benefits, others felt it was a wasted investment.

There's also been some major consolidation and change within the vendor space. For example, last autumn Vertiv, whose Aperture asset management software was once one of the most widely adopted DCIM products, announced it was discontinuing its Trellis platform, while its rival, Nlyte, was acquired by Carrier, a specialist in cooling equipment. The changing market dynamics haven't helped to build confidence in the capabilities of DCIM, and the perception is that leading platforms are disappearing from the market. Add to this the news that support for existing Trellis contracts will end in 2023, and many data centre stakeholders have been left feeling bewildered. Could it be, in fact, that DCIM has become an overblown luxury that most organisations can't afford or don't need? As is always the case, the truth may be a little more nuanced.

Breathing new life into DCIM

Luckily for DCIM fans, recent investments in user experience, data science, machine learning and remote monitoring have begun to breathe new life into DCIM. And while many data centre operators see its key strengths in monitoring and management, DCIM has proved itself invaluable during the pandemic, and it will continue to add significant value as hybrid working models persist, especially where accessibility, visibility and continuity remain challenges for our industry's key workers.

The demand for DCIM's capabilities certainly hasn't disappeared. The 2021 Uptime Institute annual data centre survey revealed that 76% of operators felt their latest downtime incident could have been prevented with better management, processes or configuration. We have also seen increased demand for DCIM platforms offering simple installation, intuitive ease of use and real-time data-driven insight among our customers. End-users' ESG requirements – especially in the colocation space around environmental impact, sustainability and energy efficiency – have also increased in importance since DCIM first appeared on the scene. A Schneider Electric report with 451 Research revealed 97% of customers globally were demanding contractual commitments to sustainability. Monitoring, measurement, and management software is of course critical to an organisation's sustainability efforts. However, the grand expectations that DCIM alone would spearhead major efforts throughout the industry to improve energy efficiency and sustainability have yet to be realised.

As with many technologies, implementation remains critical, but sadly this has often been DCIM's Achilles' heel. For a DCIM implementation to be successful, it is necessary for vendors and end-users to:

• Take the time to thoroughly understand the business case
• Help the customer deploy the software
• Ensure all assets are monitored correctly
• Benchmark the DCIM solution's progress.

Regardless of how important successful implementation may be, it is often beyond the reach of many legacy operators, who continue to struggle with finding the necessary talent due to the widening industry skills gap. There's also the procurement cycle to address, which requires multiple stakeholders. The responsibility for managing data centre infrastructure, even the elements typically addressed via DCIM tools, sits between IT, facilities, and M&E departments – often with different objectives and chains of command. Finding the right person to sign off on a new DCIM project, or even identifying the right group of people to first agree to its use, was once a challenge. Luckily the business case is changing, and while the first versions of DCIM required considerable time and effort in terms of customisation, the newer, or next-generation, versions can simplify the process significantly, bringing siloed teams together.

The dawn of a new DCIM era?

DCIM may have failed to live up to the initial industry hype, but any reports of its demise are exaggerated, and with the advent of DCIM 3.0, things are quickly changing. The need, however, remains for software tools to efficiently manage the various functions of a data centre, no matter the type. And the capabilities of those versions deployed over the cloud allow businesses of all sizes to identify what their assets are, where they're located, and how well they are performing. They can also proactively identify any status or security issues that need to be addressed. Further, any company that subscribes to ISO 27001, the global standard for IT security, must be able to track its assets and the people who have access to and control of those assets. As such, cloud-based DCIM deployments can offer major benefits and allow distributed assets to be

monitored and managed at relatively low cost.

Another critical concern is minimising downtime. Here, a vendor-agnostic DCIM platform can provide insights into all key power paths, especially if they comprise equipment from multiple manufacturers. The ability to track dependencies means that potential risks to a mission-critical environment from a single piece of equipment, such as a power distribution unit (PDU), uninterruptible power supply (UPS) or a cooling system, can be identified and potential outages mitigated.

It also remains essential for DCIM software to interact with legacy systems, facilities management suites, IT and network management software. This is best achieved through the use of application programming interfaces (APIs) that allow high-level information exchanges between disparate tools.
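As a loose illustration of the API-level integration described above, the sketch below polls a hypothetical vendor-agnostic DCIM endpoint; the base URL, token and field names are invented, and real DCIM APIs will differ:

```python
# Illustrative only: a generic REST poll of a vendor-agnostic DCIM platform.
# The endpoint, fields and token are hypothetical stand-ins, not a real product's API.
import requests

DCIM_API = "https://dcim.example.com/api/v1"  # hypothetical base URL
TOKEN = "…"  # supplied by the platform

def fetch_power_path_assets():
    """Pull UPS/PDU/cooling assets so a facilities or ITSM tool can track them."""
    resp = requests.get(
        f"{DCIM_API}/assets",
        params={"type": "ups,pdu,cooling"},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # assumed here to be a list of asset records

for asset in fetch_power_path_assets():
    print(asset["name"], asset["status"])
```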

Some analysts have opined that a particular weakness of Vertiv's soon-to-be-discontinued Trellis platform was its dependence on Oracle Fusion application development tools, which tended to limit its attractiveness to customers outside of Oracle's environment. The fact remains, however, that in a world full of distributed data centres, interoperability is essential for all management tools.

Expensive luxury or must-have management solution?

Measuring return on investment (ROI) is the key to establishing whether DCIM is a good fit for your organisation. Some may say it's still an expensive overhead and that it's difficult to quantify the benefits when you utilise hardware assets from multiple manufacturers, but vendor-agnostic monitoring capabilities can quickly address that barrier. Calculating ROI could involve quantifying the reduction in downtime since a software platform was adopted. There is also the reduction in reputational damage and associated costs that may have impacted your business. Another alternative could be to calculate the reduction in power consumption and improved cooling efficiency, and thereby reduced PUE.

At EfficiencyIT we've always championed DCIM, regardless of the industry hype. For us, it has never been a miracle cure for the industry's management challenges; we see it as a valuable tool, which requires careful customer consultation and implementation if end-users are to gain the best results. With new investments being made all the time into data science and machine learning capabilities, we're confident that finding an ROI is far simpler than end-users realise. However, the most immediate and obvious benefit is DCIM's ability to provide real-time visibility, which is pivotal as we transition towards a greener, more sustainable, and more digitally dependent future.
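To make the ROI arithmetic described above concrete, here is a toy calculation combining avoided downtime with the energy saved by a lower PUE; every figure is invented for illustration:

```python
# Toy ROI illustration for a DCIM deployment - all figures invented.

def simple_dcim_roi(downtime_hours_avoided: float,
                    cost_per_downtime_hour: float,
                    it_load_kw: float,
                    pue_before: float,
                    pue_after: float,
                    energy_price_per_kwh: float,
                    annual_dcim_cost: float) -> float:
    """Annual benefit (avoided downtime + energy saved from a lower PUE)
    divided by the annual cost of the DCIM platform."""
    downtime_saving = downtime_hours_avoided * cost_per_downtime_hour
    # Facility energy scales with PUE: saving = IT load x PUE drop x hours x price
    energy_saving = it_load_kw * (pue_before - pue_after) * 8760 * energy_price_per_kwh
    return (downtime_saving + energy_saving) / annual_dcim_cost

# Example: 4 avoided downtime hours at £50k/hour, 500 kW IT load,
# PUE improved from 1.6 to 1.5, £0.20/kWh energy, £150k/year DCIM cost.
print(f"ROI multiple: {simple_dcim_roi(4, 50_000, 500, 1.6, 1.5, 0.20, 150_000):.1f}x")
# -> ROI multiple: 1.9x
```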



SPONSORED FEATURE

How can technology support data centres through a global skills shortage?

In this Q&A, Danel Turk, Data Centre Portfolio Manager at ABB, discusses the impact of the global skills shortage on data centre development and operation and highlights key technologies which are helping operators to navigate the challenges posed.




A lack of skilled workers and contractors seems to be problematic for a number of industries. How big is the issue in the data centre sector?

Our customers are telling us that the skills gap is one of the biggest issues they are facing right now, and it's impacting everything from the construction of new data centres to daily operations and maintenance, and their supply chains. At the end of last year, we carried out some research in the Asia Pacific region, and almost three out of four operators (74.2%) said that access to specialist sub-contractors and trades was their greatest area of concern, after supply chain resilience (82.2%) and health and safety precautions (77.3%). The concerns raised in our Asia Pacific survey mirror trends in Europe, where research reveals that 42% of data centre operators believe there's not enough skilled labour to deliver increased capacity requirements across the continent. Over 80% of European companies say they have been affected by labour gaps, and more than seven out of 10 believe the pandemic has made the industry's skills shortages worse. While the shortage of suitably skilled people has been an industry issue for many years, the problems are really starting to bite now, with operators experiencing problems such as extra costs and delays in project delivery times. Operators need solutions – and fast.

With growing global demand for data and the need to increase capacity quickly, the industry will be keen to avoid extended project delivery times. How can technology help data centre operators get new capacity built faster and work around the shortage of construction labourers and skilled subcontractors?

The shortage of construction labourers and trades is being felt around the world, and this is driving a shift in the way data centre capacity is built, as traditional 'stick built' designs give way to modular and prefabricated building technology. Prefabricated products – such as eHouses and skids – are built off-site and factory-tested before being delivered to site as an integrated solution which can then be installed and commissioned quickly and efficiently. Modular electrification solutions are flexible and scalable, and incorporate standard blocks of power which can be repeated to allow for future expansion. Our research suggests that modular, scalable equipment can reduce build completion time by as much as 50% compared to traditionally built data centres, and it can help negate labour shortages in three ways. Firstly, a prefabricated solution is resource-efficient from an operational point of view, as it requires one project manager dealing with one vendor. Secondly, the products are pre-engineered to spec by the manufacturer and pre-tested before leaving the factory, so there's less need for specialist consultants to design the system or engineers to troubleshoot issues on-site. Thirdly, some manufacturers offer installation and commissioning services for their prefab products, so there's no need for the operator to find their own skilled subcontractor to do the job. It's worth noting that digitalised solutions are also quicker to deploy, as they require less wiring and less time to assemble on-site than traditional switchgear. Ultimately, using prefabricated, pre-designed modular solutions means additional capacity can be built quicker and with fewer people.

Some reports suggest that the industry needs to recruit another 300,000 people over the next three years to run the world's data centres. While that recruitment drive goes on, is there a way of reducing the impact of staff shortages on operations and maintenance activities?

Yes – and again, digitalisation is a key enabler. By proactively monitoring data centre equipment and performance using smart maintenance technology, digitalisation moves operators away from traditional calendar-based maintenance schedules to a predictive maintenance approach, focusing on the most critical maintenance activities. Preventative maintenance is a more efficient use of time for data centre engineers who are based on-site. The intelligence and analysis that digitalisation provides also streamline the running of the data centre for operational workers and help keep it performing efficiently. This maximises employees' time and is particularly helpful for under-resourced teams.

Digitalisation is also being used by companies like ABB to support data centres with

remote maintenance solutions. Last year, we launched two augmented reality (AR) tools to help empower site engineers. CLOSER (Collaborative Operations for electrical systems) is the first port of call. It's an app which provides fast and easily accessible guidance through an AR-based troubleshooting guide. If further assistance is needed, or should critical components need to be replaced, site engineers can connect directly with an ABB technical expert through RAISE (Remote Assistance for electrical systems). RAISE allows the field operator and an ABB expert to share a live video connection and use extended reality features, such as digital overlays (like arrows or symbols), in the field of view to give instructions or guidance. RAISE also allows users to take and share pictures, audio and video, and guidance can be given via live text chat.


Can advances in technology support data centres facing supply chain issues too?

To some extent, I think it can, as the manufacturing process is faster. Prefabricated, modular build solutions use a standardised design, and this speeds up the purchasing and manufacturing processes and makes deliveries for standard solutions much faster. Digitalisation, which uses fewer wires and connections, and configurators also expedite the ordering, manufacturing and delivery process. With modular designs, there is the option of scalability too. This can help operators negotiate supply chain issues, as they don't need to have everything for their complete build on-site on day one – it can be brought online a section at a time, and added to, which helps smooth out supply chain snags.



CYBERSECURITY

Optimising the CISO

Ross Brewer, Vice President and General Manager of EMEA and APJ for AttackIQ, unpacks how data-driven insight for CISOs and security team leaders can benefit the entire organisation.



All organisations face varying degrees of cyber-threat in an increasingly digitised world. In fact, there were over 300 million ransomware attacks recorded in the first half of last year. To mitigate these threats, the Chief Information Security Officer (CISO) is tasked with securing their organisation against breaches perpetrated by bad actors. However, nearly half of all UK businesses experienced a successful breach during the pandemic, and cybersecurity incidents rose by a staggering 600%. The threat landscape is expanding, but as innovation in the cybersecurity space affords new opportunities for the industry, businesses should be savvier than ever when choosing how to secure their infrastructure, and seek to transition from a reactive to a proactive, threat-informed defence.

Creating a threat-informed defence

Organisations across the UK are spending heavily on cybersecurity, with medium and large businesses in the UK alone spending over £800 million on their defence in 2021. However, a study by PurpleSec found that 75% of companies infected with ransomware were running up-to-date protection, meaning that organisations investing large amounts of funding into their cybersecurity programme are not tackling the real problem: testing and validating the controls they already have. According to the 2021 Verizon Data Breach Investigations Report, CISOs have an average of over 70 security controls at their disposal, up from 45 just four years ago – but with controls failing often and silently, they cannot be validated if they are not continually tested.

A multitude of budgetary cybersecurity solutions exist, but with the global average cost of data breaches reaching over £3 million in 2021, organisations must configure comprehensive cybersecurity solutions that can effectively remediate real-world threats. An illustration of this is the HAVEX strain of malware, reportedly used by the Russian government to target the energy grid. Companies should be running attack graphs that emulate these known threats end-to-end to bolster their cybersecurity preparedness in the event of an attack.

To counter these sophisticated threats, using automation to test organisations' security controls continuously, and at scale in production, is the key to unlocking a threat-informed defence. Automated security control validation can leverage new threat intelligence about adversary tactics, techniques, and procedures (TTPs) through knowledge-based frameworks such as MITRE ATT&CK. This strategy allows for the deployment of assessments and adversary emulations against their security controls at scale, enhancing visibility by enabling organisations to view performance data continually, and allowing them to track how effectively their security programme is performing.

Organisations aiming to successfully achieve a threat-informed defence should put Breach-and-Attack Simulation (BAS) systems at the centre of their cybersecurity strategy. A good BAS platform uses the MITRE ATT&CK framework to enhance, assess and test threat detection and threat hunting efforts by simulating real-world behaviours. Through the performance data gained from continual security control testing, CISOs and their teams gain visibility into the efficiency of their cybersecurity programme, and can more accurately report their findings to the board.

Cybersecurity cannot live in a silo

Co-operation and data sharing are also crucial tools for mitigating control failures and creating a threat-informed defence. Traditional security team structures use threat-focused red teams and defence-focused blue teams to test security controls in tandem. However, teams often work in silos, and exercises are typically only performed once or twice a year – insufficient for a rapidly changing threat landscape.

A relatively new security team structure is purple teaming, where testing is aligned through a shared view of the threat and the systems that teams are supposed to defend. Purple teaming combines red and blue teams to run adversary testing against an organisation's most important controls, by understanding which controls are most likely to impact an organisation's operations. Successful purple teaming has a goal of sharing performance data after the exercise is complete, which transforms a traditionally siloed structure into a collaborative effort, breaking down operational barriers and increasing cybersecurity effectiveness.

CISOs as valuable partners

The role of the CISO within an organisation is to be a valuable, trusted partner of the c-suite. This means CISOs are required to definitively demonstrate that the security controls they implement are working as expected, all the time. CISOs have the responsibility of informatively reporting the cybersecurity health of a company to the Board of Directors, but this is challenging without data-driven, quantifiable insights into what is and is not working in their defence architecture.

Decision-making informed by data-driven insight is invaluable to a business. Using Breach-and-Attack Simulation platforms, organisations can meet the needs of a mercurial threat landscape by continuously testing and validating their security controls. Validation efforts work like continuous fire drills for an organisation's defences, emulating adversary behaviour and ultimately evolving defences to meet the needs of a modern threat landscape, with the aim of creating a comprehensive, resilient cybersecurity strategy. 'Evidence-based security' is now the focus.
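As a rough sketch of the continuous-validation loop described in this article, the following toy harness iterates over a handful of real MITRE ATT&CK technique IDs; the emulation call is a hypothetical stand-in, not AttackIQ's or any other vendor's actual API:

```python
# Toy harness illustrating continuous control validation in the spirit of BAS.
# Technique IDs are real MITRE ATT&CK identifiers; run_emulation() is a
# hypothetical stand-in for a BAS platform call, not any vendor's actual API.
import datetime

SCENARIOS = {
    "T1059": "Command and Scripting Interpreter",
    "T1486": "Data Encrypted for Impact (ransomware)",
    "T1048": "Exfiltration Over Alternative Protocol",
}

def run_emulation(technique_id: str) -> bool:
    """Hypothetical stand-in: safely emulate the TTP in production and
    return True if existing controls detected or blocked it."""
    return False  # placeholder - a BAS platform supplies the real result

def validation_cycle() -> list[dict]:
    """Exercise each technique and keep performance data for board reporting."""
    results = []
    for tid, name in SCENARIOS.items():
        results.append({
            "technique": tid,
            "name": name,
            "detected": run_emulation(tid),
            "tested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
    return results

# Gaps (detected == False) feed remediation and the next test cycle,
# turning one-off red-team exercises into continuous fire drills.
```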



CYBERSECURITY

The truth about DDoS

Ashley Stephenson, CTO at Corero Network Security, explores the changes in DDoS attack behaviour and recommendations for protection.

DDoS has an understandable reputation as a blunt instrument. It has a track record as an unsophisticated, brute-force weapon that requires only basic computer skills to wield in anger. Today's teenagers can, and often do, use DDoS to flood gaming websites with malicious traffic and bring the intended victims (or opponents) to their knees. Or so the thinking goes.

The truth is that DDoS has been evolving to become more of a surgical instrument of criminal extortion. A scalpel – to carve out a crucial and targeted part of a larger campaign – and with every passing year it gets sharper and is used in increasingly sophisticated ways. This is borne out in Corero's latest 2021/22 DDoS Threat Intelligence Report, which highlights the realities of DDoS threats, how they're changing and how security teams need to respond.


Most DDoS attacks are small and short

The reality of DDoS on the Internet today is that most attacks are not atomic bombs; they are precision strikes. The vast majority of attacks are small in comparison to the headline-grabbing incidents: Corero research reports 97% are below 10 Gigabits per second (Gbps) and 81% are under 250,000 packets per second (pps).

There are a whole range of potential reasons for these tactics. Attacks are often used as part of a campaign using multiple cyber threat vectors, in which DDoS attacks may serve as a force-multiplier or distraction in a larger assault. Another theory suggests, however, that these types of attacks have evolved and become popular because many legacy solutions cannot detect them. Their size or duration allows them to evade or outrun many older-style detection solutions, which treat them as normal traffic. This is particularly true when this type of attack is sprayed across many adjacent victims in what is sometimes called a carpet bomb attack. Effectively, a series of easy-to-launch, smaller attacks spread over a wider target area can be parlayed into a destructive force that does just as much damage as one large-scale, and harder to accomplish, assault.

This annual trend of increased threat has been the reality of DDoS for quite some time. But it is also true that DDoS attack fallout often goes unrecognised or is misidentified as more general network issues or connection problems. Organisations need to get to grips with the DDoS issue if they want to protect themselves from these threats – past and present.

Open VPN

One representative development in the DDoS weapons landscape over the last few years is the rise in the use of OpenVPN reflection attacks. It is one of the more peculiar side effects of the global pandemic in relation to DDoS. As lockdown orders set in around the world in early 2020, many more companies resorted to the use of VPNs to establish secure connections between office networks and home workers. This proved to be an opportunistic gold mine for DDoS attackers. They started using OpenVPN, a popular style of VPN tool, as a DDoS reflection and amplification vector to great effect. According to the report, these types of OpenVPN attacks have risen by 297% since the start of the Covid-19 pandemic.
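The appeal of reflection and amplification vectors comes down to simple arithmetic: a small spoofed request elicits a much larger response aimed at the victim. A minimal sketch, with illustrative numbers rather than measured OpenVPN figures:

```python
# Why reflection/amplification is attractive to attackers: a small spoofed
# request yields a much larger response aimed at the victim. The figures
# below are illustrative, not measurements of OpenVPN specifically.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor (BAF): response size over request size."""
    return response_bytes / request_bytes

def reflected_flood_gbps(attacker_gbps: float, baf: float) -> float:
    """Traffic volume arriving at the victim for a given spoofed send rate."""
    return attacker_gbps * baf

baf = amplification_factor(64, 2048)  # 32x amplification (illustrative)
print(f"BAF: {baf:.0f}x")
print(f"1 Gbps spoofed -> ~{reflected_flood_gbps(1.0, baf):.0f} Gbps at the victim")
```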

Attackers are combining new and old vectors

New DDoS vectors are constantly appearing. Our data shows that unique DDoS attack vectors are increasing year over year. Some of the most recent vectors include the new TP240 PhoneHome and Hikvision SADP vulnerabilities, both of which can be used to launch damaging DDoS reflection and amplification attacks. DDoS attackers are consistently seizing on these new opportunities. It is now standard practice to use a combination of long-standing attack vectors, supplemented with a fresh layer of these novel, recently discovered enhancements.

Awareness of DDoS isn't enough. Data from our report confirms that DDoS vector awareness alone is not a sufficient defence. In July 2020, the FBI released an alert disclosing and highlighting four new DDoS attack vectors. Despite that warning and the resulting boost in awareness, the malicious use of those vectors grew throughout 2020, and we report that they were still significantly active in 2021.

The future of DDoS

DDoS frequency and peak attack power have grown massively in recent years. In yet another example of the continuing evolution, the advent of the Mirai botnet in the Internet of Things (IoT) environment gives us an insight into how this came to pass. Exploiting a large population of poorly secured IoT devices, Mirai managed to perpetrate some of the largest DDoS attacks on record and cripple popular websites, internet infrastructure and – by some accounts – the internet of the entire country of Liberia.

The key to its success was the viral infection or 'pwning' of a significant population of IoT devices. DDoS attackers are continuing to exploit the same techniques. The problematic insecurity of the cheap, numerous and poorly secured IoT device is a green pasture for DDoS attackers, who can herd together vast armies of these insecure devices and then instruct them via command and control (aka CnC or C2) networks to simultaneously unleash their flood power against a victim or victims.

The IoT keeps growing and its capabilities are forecast to grow even faster. 5G and IoT-based networks will expand the frontier of edge-oriented communications, data collection, and computing. Left unprotected, this array of newly-defined internet access points will constitute a DDoS vulnerable flank, enabling attackers to bypass legacy core DDoS protection mechanisms. It's difficult to imagine the rapid roll-out of these transformative capabilities for the common good without simultaneously enabling DDoS attackers to do the same, unless industry-wide changes are made to enhance the deployment of DDoS protection.

Stopping DDoS threats

Inflexible solutions cannot keep up with the increasingly complex nature of DDoS. Given that most DDoS attacks are small and short, many legacy protections will not detect them. Likewise, many legacy solutions cannot respond fast enough to DDoS attacks – some even require a customer to complain of a problem before they are activated. No single DDoS solution can offer truly effective protection in isolation. Cloud-based DDoS detection and mitigation services are profoundly useful in diverting very large DDoS attacks to cloud-based scrubbing facilities. However, they cannot operate locally in real time, as they typically need to detect attack traffic and then redirect it to the cloud. To put it simply, the attack has to hit first, making some resulting downtime inevitable.

Meanwhile, on-premises or on-network solutions are crucial for locally detecting attacks and stopping attack traffic in real time before it hits the enterprise applications. However, they could struggle with the sheer size of infrequent but powerful DDoS attacks that enterprises may have to face. Enterprises would be wise to consider a hybrid solution which fuses these two approaches together. Cloud-based protection can be on standby to soak up excessive saturating traffic, while on-premises defences provide the rapid response to the vast majority of DDoS attacks and buy the valuable time needed to swing excess traffic to the cloud. This combination prevents downtime and provides real-time protection against the DDoS attacks of the present and near-future.
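As a closing illustration, the hybrid approach described above reduces to a simple decision rule; the capacity and threshold figures below are invented, not Corero recommendations:

```python
# Minimal sketch of the hybrid decision the article describes: handle most
# attacks on-premises in real time, and swing traffic to cloud scrubbing only
# when volume threatens to saturate local capacity. Thresholds are invented.

ON_PREM_CAPACITY_GBPS = 40.0   # hypothetical local mitigation headroom
DIVERT_THRESHOLD = 0.8         # divert before local capacity saturates

def mitigation_action(attack_gbps: float) -> str:
    if attack_gbps < ON_PREM_CAPACITY_GBPS * DIVERT_THRESHOLD:
        return "mitigate on-premises (real time, no downtime)"
    return "keep mitigating locally while diverting to cloud scrubbing"

for size in (2.0, 25.0, 90.0):
    print(f"{size:>5.1f} Gbps -> {mitigation_action(size)}")
```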



CYBERSECURITY

The workplace revolution Tikiri Wanduragala, Senior Consultant at Lenovo, outlines the minefield of security threats that organisations are currently navigating as cybercrime continually evolves. ata centres used to be perceived as fortresses: strongholds that protected businesses from external threats. They controlled everything, managed the movement of data, and access was contained. With the adoption of the cloud over the past decade, this began to evolve as more data was interfaced to more locations. But the mass shift to remote work at the start of the pandemic spurred new and accelerated changes in security thinking and approach towards the data lifecycle. With little warning, organisations abruptly had to secure dispersed workforces at scale, all while maintaining business continuity. Cybercriminals capitalised on this by looking to exploit weak points in companies’ security architecture, with the World Health Organization reporting a fivefold increase in attacks in the first two months alone. In retaliation, businesses have had to adapt. Instead of relying almost solely on traditional data protection methods, organisations have started to assess security measures across their entire ecosystem, from the data centre to the cloud, to devices and employees. By encompassing these factors into one holistic view and ensuring that each element is securely protected, companies are creating a new era of secure, trusted ecosystems that cater for the hybrid world, centred around the individual.


The cyberthreat surge

Cybercrime has increased in tandem with technological developments, and attackers often have the upper hand. Previously, enterprises had the advantage of better resources. Now, the tech used by criminals is equally sophisticated, if not superior in many cases. This means that businesses have a minefield of issues to combat. Firstly, with so many potential threat surfaces, there is no single solution they can turn to. Secondly, there is always a way into a system. And thirdly, the landscape is continually evolving. As a result, it’s vital for organisations to keep up to date with the pace of the industry, both from a technological and personal standpoint.

Remote work, however, has led to complications.


Circumstances changed, but as the interface remained the same – with employees moving from an office to a home screen, undertaking the same work – many assumed the equivalent for security too. With more people accessing data from more locations, it’s no longer a linear journey from data centre to office. And with end users more vulnerable, security on the employee side is just as important as that in the data centre. Fundamentally, organisations need to realise that there are different threats coming from all angles, and the world is quickly changing. They need complete end-to-end security solutions that begin with design and continue through supply chain, delivery, and the full lifecycle of devices. The importance of security needs to be instilled into every employee, alongside the repercussions of weak practices – not least financial loss, but reputational damage too.

The importance of infrastructure

Although creating new security protocols may seem like a looming and futile task, continuously met with fresh challenges, organisations do not have to forfeit business output to maintain it. With so many features now built into the technology ecosystem, each element can be sufficiently protected with productivity and hybridisation in mind. But this means that infrastructure needs to adapt to reflect the changes in the workplace.

In a hybrid world, the same level of performance requires better infrastructure, as data centres need to support numerous home offices instead of a handful of larger hubs. In addition, it needs to underpin the emerging trend of ‘hypertasking’, which sees numerous tasks undertaken within a limited timeframe with the help of technology. People still need to work effectively and be secure without having to compromise performance.

Infrastructure must be kept up to date, but similarly, users need to be continually informed about the latest enhancements. Features like encryption and VPNs can slow devices down, impacting productivity in the process. As a result, users lose trust in the effectiveness of these measures and abandon them altogether, unless upgrades are both communicated to and tested by employees. If infrastructure successfully meets and exceeds increased demands, the resulting performance improvements mean that organisations don’t need to compromise security or business output.

The past 24 months have been a test in this respect, and infrastructure rose to the challenge successfully. But it also identified gaps to be remedied and areas to be improved, such as the need for greater security measures. Ultimately, the world will not return to pre-pandemic ways of working and living, and infrastructure needs to match up with the hybridisation era. Governments and corporations alike are coming to the realisation that this is vital to support society now and in the future. As reliance on infrastructure increases, so too will security threats, and organisations need to take the necessary steps to protect it.


Technologies like Trusted Platform Modules (TPMs) are being built into servers to support this, acting as physical locks that prevent them from being easily overridden through software hacks. ThinkShield, for example, embraces tech like TPMs, spanning the supply chain as well as mobile, PC and server products. Essentially, the infrastructure acts as phase one of the security ecosystem, as data then moves from the cloud through to the employee.

Transforming weakness into strength

The security weak link in business is usually the individual, and often inadvertently. Employees tend to be more vulnerable to threats when working alone, without nearby colleagues to help identify potential threats, and criminals are exploiting this. In August 2020, a few months into the pandemic, INTERPOL reported that personal phishing attacks made up 59% of cyberthreats in member countries. However, employees can also be a strength.

If staff are trained and understand the security landscape well, they can act as the first line of defence for an organisation. While technology can be compromised, many large cyberattacks can be combatted by human intervention. For instance, an individual could assess whether it is a good idea to join a public WiFi network, or whether they should use a VPN. Some organisations even share phishing simulations, with rewards for employees who report the emails or avoid clicking on potentially harmful links. Others have implemented architecture like Zero Trust, which requires any device, network or user to be verified rather than trusted by default.

Above all, the importance of maintaining optimal security revolves around the brand, the company, and its reputation. But it’s also about the user experience – ensuring that employees themselves feel safe and protected, while being able to work wherever they are, as efficiently and productively as possible.

The death of the security fortress?

The days of data centres acting as fortresses, solely responsible for the security of an organisation, are numbered. The hybrid workplace revolution has necessitated new security ecosystems, securing data through its lifecycle from the data centre to the cloud to devices – and, perhaps most integrally, to employees. If staff are equipped correctly, they can act as individual security strongholds, whether in the office or at home. By breaking down silos and taking a holistic view of security, businesses will be better protected from evolving threats. And with the right infrastructure to support them, employees will step into the hybrid era safely, securely and productively.



SPONSORED FEATURE

RiMatrix Next Generation – establishing data centres flexibly, reliably and fast

Rittal’s RiMatrix Next Generation (NG) is a groundbreaking new modular system for installing data centres flexibly, reliably and fast.

Based on an open-platform architecture, RiMatrix NG means customised solutions, delivering future-proofed IT scenarios, can be implemented anywhere in the world. These include single rack or container solutions, centralised data centres, distributed edge data centres or highly scaled colocation, as well as cloud and hyperscale data centres. RiMatrix NG is the first platform that supports OCP direct current technology in standard environments.

Change is a constant across today’s IT infrastructure, but digital transformation is creating innovation at a pace that has never been seen before, and that pace will almost certainly continue to accelerate. This requires both rapid responses and long-term investment in data centres which are flexible enough to meet a myriad of new challenges. Rittal has responded with its new RiMatrix Next Generation (NG) IT infrastructure platform.

“Right from the initial design phase, we thought ahead in terms of adapting to diverse and constantly evolving requirements when we were developing the open platform,” says Uwe Scharf, Managing Director Business Units and Marketing at Rittal.



“Our customers have to adapt their IT infrastructures to developments faster than ever before to ensure business-relevant products and services can be continually created at the highest possible speed and without faults. Our aim is to support them as their partner for the future.”

The result is a pioneering, open platform for creating data centres of all sizes and scale, flexibly, reliably and fast, and one which supports comprehensive consulting and services throughout the entire IT lifecycle. Whether it’s single rack or container solutions, centralised data centres, distributed edge data centres or highly scaled colocation, cloud and hyperscale data centres, the modularity and backwards compatibility of RiMatrix NG means that it’s possible to update individual components in an infrastructure, so the entire data centre can continually be adapted to meet fast-changing technological developments. “RiMatrix NG thus becomes an IT infrastructure platform that is extremely future-safe and flexible,” Scharf explains.

All IT infrastructure components in a single modular system

The RiMatrix NG modules cover five functional areas: racks, climate control, power supply and backup, and finally IT monitoring and security. This enables IT managers to quickly and easily create solutions that are tailored to their individual requirements. The number of potential combinations offered by Rittal and its certified partners (e.g. for energy supplies or fire safety) means that users can both meet their own needs and any stipulated local regulations, wherever they are based across the world.

RiMatrix NG offers users the same flexibility as Rittal’s other modular systems, both through the new VX IT, as well as other, older generation racks. This makes the platform scalable in terms of size, performance, security and fail-safe reliability. If a particularly fast response time is required, or existing buildings do not offer sufficient space, then the data centre can be placed within a container and safely integrated into any existing IT infrastructures.



First platform for OCP technology

The RiMatrix NG is the first platform to support the use of OCP components and direct current in standard environments. Highly standardised direct current architectures and 21 inch racks in the Open Compute Project (OCP) design are increasingly being recognised as the most energy-efficient choice for hyperscale data centres. “Rittal is both a driver of the OCP initiative and a top supplier of OCP racks for hyperscalers worldwide,” Scharf says. “With the RiMatrix NG, we are the first supplier to enable the straightforward use of OCP technology in standard data centres.”

Data centre operators can integrate RiMatrix NG modules and accessories into an existing, rapidly changing architecture without switching the entire data centre or changing the uninterruptible power supply (UPS) to direct current. “In this way, we now provide all our customers with easy access to the energy and efficiency benefits of this technology for the future – even for individual applications,” Scharf explains.

IT climate control

IT systems installed in RiMatrix NG are cooled in a controlled cycle using tailored and fail-safe fan systems, refrigerant-based or water-based solutions, and their performance is continually monitored. The cooling solutions can be tailored to each and every system, from single racks, suite and room climate control, right up to complex high-performance computing (HPC) using direct chip cooling (DCC).

IT power supply and backup

Rittal’s ‘Continuous Power & Cooling’ concept is a way of bridging short-term power failures to prevent damage to both active IT components and other parts of the infrastructure, including the climate control. It offers protection across the full length of the energy supply, from the main in-feed, UPS systems and sub-distribution, to the smart socket systems (power distribution units) in the IT racks.

IT monitoring and safety

The RiMatrix NG platform supports monitoring solutions such as the Computer Multi-Control III (CMC III) monitoring system and Data Centre Infrastructure Management (DCIM) software.

This includes various sensor options measuring humidity, temperature and differential pressure, as well as vandalism. Users can also choose from a range of protective measures depending on their needs – for example, a basic protection room within a modular room-in-room solution, or a high availability room for even greater reliability. The platform’s safety is certified under ECB-S rules from the European Certification Body GmbH (ECB).

Rapid project implementation

RiMatrix NG was designed in such a way that new data centres can be rapidly installed. Components can be quickly and easily laid out using the web-based Rittal Configuration System, and there is also Rittal’s unique 24/48-hour delivery window for standard products in Europe. The platform’s international ratings not only ensure its reliability, they further speed up IT project installations because they eliminate the need for time-consuming permit and test procedures.

Consulting throughout the entire IT lifecycle

In addition to the system components, Rittal’s customers are given all the support they need for set-up and operation. And this support continues across the entire IT lifecycle of a data centre. The company’s service portfolio includes design consultation, planning and configuration, as well as assistance with operations and optimisation. Flexible financing models, including leasing, round off its portfolio and enable needs-oriented investment.

For more information, visit: www.rittal.com/rimatrix-ng



CYBERSECURITY

What’s IT worth




Mehul Revankar, Head of Vulnerability Management at Qualys, explains the importance of applying a risk management approach to security.

Managing cyber risk is one of the biggest challenges that today’s organisations face. Keeping all IT assets protected is a massive responsibility – everything from the devices in the office and servers in data centres used daily, to infrastructure and cloud services.

It’s no secret that the number and impact of attacks are rising, with Verizon’s recent Data Breach Investigations Report revealing that nearly 24,000 attacks took place during 2021 alone. The rise in ransomware attacks last year was greater than the rise over the previous five years combined. The report also highlighted that issues arising from software supply chains led to a huge number of breaches, in addition to misconfigurations in modern applications and cloud services that added to security problems.

In order to manage this influx of threats, security teams must understand how to prioritise the largest risks to their company environments. But the sheer number of assets that enterprises have makes it difficult, if not impossible, to patch everything immediately. Teams need to take a different approach.


Looking at your risk profile

According to Qualys research, there are 185,446 known vulnerabilities based on data from the National Vulnerability Database. These range from issues in niche or older software products that are no longer supported, to critical problems that affect huge swathes of IT assets. The challenge is to know what assets and software you have installed, whether any of those assets have problems that need to be fixed, and whether those issues can be exploited.

While there are thousands of vulnerabilities, they are not all equal in terms of risk. Of the total, 29% (55,186) have potential exploits available – that is, where code has been created to demonstrate how a flaw works. Beyond this, only 2% of issues – around 3,854 vulnerabilities – have weaponised exploits against them, which enable malicious attackers to exploit vulnerabilities quickly with minimal effort. And only 0.4% of vulnerabilities have working exploits that have actually been used in attacks by malware families or threat actor groups. In other words, less than 2% of all software vulnerabilities are responsible for the vast majority of malware attacks and security breaches that today’s enterprises face.

In identifying which issues pose the largest threats, you can effectively improve how you manage your security and where you put your efforts. In understanding which issues affect your specific infrastructure, you can see which should be higher up the priority list for patch deployment, and you are able to evaluate other security issues to see if they need additional attention. All organisations have different implementations in place. In practice, this can mean that a software vulnerability rated as less severe for the majority of users is actually critical for you to fix immediately. In these circumstances, knowing your own risk profile will enable your organisation to improve its security posture.

How to improve your approach

The security industry is continuously looking to improve management processes and prevent attacks. For example, the US government has enforced new rules over federal government IT projects that mandate software bills of materials, or SBOMs. These documents aim to capture all the software elements and services that make up an application, including the versions and updates that are in place. This is a crucial step, because if you know all the components that make up custom applications, you can get more oversight of the issues they have. This improves how organisations get a baseline understanding of their environment.

You cannot secure what you do not know is there. Understanding your environment, tracking your software supply chain and having accurate and up-to-date IT asset lists in place are all necessary. From there, you can carry out regular scanning to track all assets that are deployed on your network.
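To make the prioritisation idea above concrete, here is a minimal sketch of risk-based ranking: filter findings down to assets actually deployed, then sort by exploit maturity so the small, dangerous tail is patched first. The records and the maturity scale are invented for the example; real data would come from a vulnerability scanner and an asset inventory.

```python
# Rank vulnerabilities by exploit maturity and applicability.

MATURITY_RANK = {"exploited-in-wild": 3, "weaponised": 2, "poc": 1, "none": 0}

findings = [
    {"cve": "CVE-A", "maturity": "poc",               "asset_deployed": True},
    {"cve": "CVE-B", "maturity": "exploited-in-wild", "asset_deployed": True},
    {"cve": "CVE-C", "maturity": "weaponised",        "asset_deployed": False},
    {"cve": "CVE-D", "maturity": "none",              "asset_deployed": True},
]

# Only issues present in our environment can pose material risk.
applicable = [f for f in findings if f["asset_deployed"]]

# Patch in-the-wild and weaponised exploits before proof-of-concept ones.
queue = sorted(applicable, key=lambda f: MATURITY_RANK[f["maturity"]], reverse=True)

for f in queue:
    print(f["cve"], f["maturity"])
```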

To be a true risk to your enterprise, a specific vulnerability must have material applicability in your specific environment. Controlling cybersecurity risk is much more achievable by focusing security and IT teams on the vulnerabilities that matter most to your company’s exposure. Once you know what you have in place and what to prioritise, you can then set out your remediation plans.

Streamlining the workflow between the security team responsible for detecting vulnerabilities and the IT operations team responsible for deploying patches can improve your mean time to remediation (MTTR). These two teams often do not integrate. Automating workflow creation can make this process smoother and easier for both sides, while simultaneously improving security hygiene. For risks that are lower priority, teams can automate patch deployments and fix those problems without manual effort.

By looking at your whole approach to IT – from custom applications and software supply chains through to cloud services and endpoints – you can get a much clearer picture of what needs to be done in order to maintain security at scale.
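As a hedged sketch of the workflow automation described above: high-risk findings raise a ticket for human-led remediation, while low-risk ones go straight to an automated patch queue. The create_ticket() and schedule_patch() functions, the score and the threshold are placeholders invented for the example, not a real ticketing or patching API.

```python
# Route findings between security and IT ops on a single risk score.

def create_ticket(cve: str) -> None:
    print(f"[ticket] manual remediation required for {cve}")

def schedule_patch(cve: str) -> None:
    print(f"[auto]   queued for automated patch deployment: {cve}")

def route(cve: str, risk_score: float, threshold: float = 7.0) -> None:
    # A single agreed score keeps the handoff between teams predictable.
    if risk_score >= threshold:
        create_ticket(cve)
    else:
        schedule_patch(cve)

route("CVE-B", 9.8)   # weaponised, internet-facing -> human-led fix
route("CVE-D", 3.1)   # low risk -> automated patching
```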



CRITICAL INSIGHT

VIRTUAL EVENT 22-23 NOVEMBER 2022

Get ‘Critical Insight’ into the data centre sector

The data centre sector is at a pivotal stage in its evolution. According to some estimates, by 2025 we could be generating more than 463 exabytes of data globally each day, with data centres reportedly already accounting for around 1% of global electricity use. Amidst an unprecedented backdrop of rapid growth in demand for compute, the industry faces increasing scrutiny of its power consumption and environmental impact – while struggling with an ongoing skills shortage. Remote and hybrid working have introduced new challenges, as the industry grapples with the post-pandemic impact on everything from cybersecurity to connectivity. The next few years are critical for the digital infrastructure industry.

Critical Insight, a brand-new event for the data centre sector, will explore all of these topics and more. Set to take place on 22 and 23 November 2022, the two-day virtual event will provide a platform for key industry leaders to share their expert insight and opinion on the crucial issues impacting the changing landscape of the sector. Brought to you in association with Data Centre Review, the inaugural event will provide an opportunity for attendees and a diverse line-up of experts to get to grips with new regulations and new innovations, and to be a part of the conversation that will help to shape the future of the industry.

You can view the preliminary agenda and register for the event by visiting critical-insight.co.uk

If you’d like to find out more about sponsorship or speaking opportunities, please get in touch with Sunny or Kelly for more information:

Sunny Nehru: sunnyn@sjpbusinessmedia.com / +44 (0) 207 062 2539
Kelly Baker: kellyb@electricalreview.co.uk / +44 (0) 207 062 2534

www.critical-insight.co.uk




Sessions planned for the first Critical Insight will include everything from data centre design and build, UPS and backup power to the cloud and cybersecurity. Here’s the first look at our agenda for 2022.

Day 1

09:00 – Welcome
Introduction to Critical Insight from Kayleigh Hutchins, Editor of Data Centre Review.

09:05 – Keynote: The Data Centre in 2022 & Beyond
Post pandemic, in a world where demand for compute is increasing at pace, what challenges face the sector and how can we meet them head on?

09:35 – Colocation
The needs of colo customers are evolving – is colocation up to the challenge?

10:05 – Panel: The Sustainable Data Centre
Are we really serious about creating a more sustainable future – and what are we doing in practice?

10:50 – UPS & Back-up Power
How grid-interactive UPS systems can play a part in stabilising renewable energy supply and help mitigate the insatiable power consumption of the data centre.

11:35 – Networks
What is the role of network connectivity in digital transformation?

12:05 – Tech Skills
It’s an issue that has plagued the sector for years, but are we doing enough to tackle the data centre skills shortage?

12:35 – Panel: Can the Data Centre be Smarter?
As tech becomes ever more sophisticated in responding to the needs of our changing society, the world is getting smarter – is the data centre keeping pace?

13:20 – Cybersecurity
How we work with and use data is rapidly evolving – what are the new cyber threats we need to be aware of, and how can we mitigate them?

Day 2

09:00 – Welcome Back
Kayleigh Hutchins, Editor of Data Centre Review, welcomes everyone back for day two of Critical Insight.

09:05 – Keynote: The UK’s Digital Strategy
The UK government is taking steps to make the UK a science and tech superpower, and recently called for a consultation on cloud and data centre security. What role will the data centre sector play in this ambitious plan, and what regulation is headed our way?

09:35 – Telecoms & 5G
What does the dawn of 5G mean for the data centre sector?

10:05 – Panel: Designing Data Centres
With supply chain issues and new sustainability requirements at the top of the agenda, how are we changing the way we design new builds – and what are we doing to modernise legacy data centres?

10:50 – Powering the Sector
With the climate crisis at the forefront of our minds, is there a more sustainable way to power the data centre?

11:35 – Cooling
Cooling is a major speedbump in the data centre’s journey to reduce its carbon footprint. What cooling innovations will be key to the next step in the data centre’s evolution?

12:05 – AI & Machine Learning
With AI technology rapidly evolving, how can it be utilised to make data centres more efficient?

12:35 – DCIM
How can DCIM solutions make use of AI and real-time analytics to address current issues like rising energy costs, security and the pursuit of sustainability?

13:05 – Panel: Heads in the Cloud – Changing Trends
From private and public, to hybrid and multi – how is the cloud adapting to new demands, regulation and a rapid increase in demand for capacity?



UPS & STANDBY POWER

Are we ready for change?

Could locked-in, long-term UPS maintenance contracts be beneficial for data centres? Louis McGarry, Sales & Marketing Director at Centiel UK, explores the pros and cons.


We seem to be living in a world of uncertainty. Locking in to fixed-term contracts can offer a stable solution which works for both parties. Mortgages are a great example, where guaranteed fixed rates avoid the uncertainty of tomorrow.

Locking in to contracts has advantages. Data centres have the flexibility to offer services on a short-term basis, but long-term agreements mean they can offer commercial rewards, as rates can be fixed from day one. Clients and the data centre can also plan and manage costs more effectively. Surely this creates a win-win situation for all?

Interestingly, although locking in appears to be the new normal when it comes to mortgages and data centre contracts, it is generally not the case when purchasing and maintaining a UPS system. However, perhaps it could or should be? With ever-rising costs, now is the perfect time to discuss the benefits of fixed-rate agreements that ensure the integrity of UPS systems from start to finish and avoid unexpected costs.




OpEx

An objection to locking in to a UPS maintenance contract might be: “We can’t commit our OpEx budget for 10 years; we only plan a few years ahead.” I’d counter that argument with: “A UPS is not a disposable item. The initial purchase already enters the data centre into a 10-15 year relationship with the equipment and manufacturer. This means data centres must be prepared to invest in maintenance and remedials during that time anyway. As soon as the CapEx is committed, so is the OpEx.” A formal long-term maintenance contract would therefore actually be beneficial, as it will reduce costs and ensure the system is always kept in optimal condition.

No hidden costs

The ideal scenario would be for a data centre to enter into an agreement with a UPS manufacturer to purchase a solution and maintain it over the next, say, 10 years. This long-term contract would include everything – remedials, capacitor replacements, preventative maintenance, all parts and labour – and the cost would be spread over the time period. This means a flat rate would be due regularly. The manufacturer would simply arrange any necessary replacements at the agreed time, and there would be no unexpected costs for the data centre to worry about.

With the recent strain on the global supply chain, guaranteeing the price of parts and labour for several years to come could be a very wise move for data centres. Although the thought of a long-term commitment can put off some organisations, the assurance that all remedials are prepaid and completed when necessary should outweigh any concerns. And regular remedial works are essential to guarantee the full working life of a UPS system. For example, a capacitor failure can be catastrophic, resulting in loss of load and damage to expensive equipment. It’s easy to see how costs would escalate following the replacement of a UPS damaged by poor maintenance. A long-term contract would ensure capacitors are replaced at the appropriate time at no additional cost, for full peace of mind.

Short-termism

Although it may be in a data centre’s best interest to lock in to a long-term UPS maintenance contract, it won’t necessarily be suitable for a facilities management (FM) contractor, who may only be responsible for the installation and short-term management of equipment. It is rare to see FMs still involved five or 10 years later. On the one hand, conducting regular tenders for FM contracts can ensure charges remain competitive; on the other, it can result in a change of FM every couple of years. Although costs are reduced in the short term, this approach can have adverse effects on the effectiveness and overall cost of maintenance agreements, meaning data centres actually lose out on Total Cost of Ownership (TCO) savings.

So, how can FMs and UPS providers work together to ensure that they are doing the right thing for the data centre? An upfront conversation with the data centre that owns the equipment is important, to see how they’d prefer to work. One viable solution could be for long-term UPS maintenance contracts to be handed over to and accepted by the new FM at the same price. This could become a standard, much like the approach of transferring the FM workforce when they TUPE (Transfer of Undertakings Protection of Employment) across and the new contractor agrees to honour the terms of the contract.

To achieve this requires some joined-up thinking. Why not invite manufacturers into the discussion earlier, to pool knowledge, resources and ideas, and come up with workable options which will save clients costs over the long term?

Working with manufacturers

I also believe the scenario of locked-in UPS maintenance contracts will only be beneficial for all parties if clients work directly with trusted manufacturers. Manufacturers can guarantee to support and maintain a product for at least 10 years after the last one rolls off the production line. Manufacturer-trained and approved engineers will also have the relevant expertise, and access to technical support, firmware updates and spare parts, to avoid putting the load at risk. Resellers may not be able to guarantee these terms.

A refreshing approach

What if there was a world where clients no longer had to rip out their systems and replace infrastructure when going through a product lifecycle refresh? What if manufacturers could guarantee a complete system refresh after, say, every seven years, when the first capacitor replacement is due? I believe modern modular UPS now offer this opportunity. Data centres are able to select flexible, scalable systems with a reduced footprint that can be adapted to most, if not all, applications.

A further benefit of true modular UPS systems is that components can be removed and replaced seamlessly (zero downtime), which could form part of a service exchange programme. The ageing UPS modules would then be reconditioned, ready for another service exchange opportunity. Clients would then be ‘good to go’ for a further seven years.

It leads me to ask: with UPS solutions reaching maximum efficiency levels, with really less than 3% left to improve on, why shouldn’t we try to extend the life of the current UPS system? Could manufacturers agree to a refresh programme, with all purchase, installation, replacement and maintenance costs amortised over time, to ensure there are no hidden surprises, no big bills, and everyone knowing the costs in advance? The life of the UPS system could essentially be doubled, with reconditioned modules being replaced at the appropriate times. There would be no need for a big system replacement in the future as newer modules replace old, all certified with the latest software and firmware.

This whole topic is certainly one for further discussion. I believe this industry is moving towards locked-in contracts for the benefit of clients, and to think everything will remain ‘how it’s always been’ is a mistake. The question is, as an industry, are we ready for change?
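As a simple illustration of the amortisation idea floated above, a whole-life contract reduces to one known, regular payment. Every figure here is invented for the example – a real agreement would price parts, labour, remedials and the seven-year module exchange individually.

```python
# Amortise a whole-life UPS contract into a flat quarterly payment.

CONTRACT_YEARS = 14            # two seven-year module lifecycles
PAYMENTS_PER_YEAR = 4          # quarterly flat rate

costs = {
    "initial_purchase_and_install":   120_000,
    "preventative_maintenance":        42_000,
    "year_7_module_service_exchange":  35_000,
    "parts_and_labour":                28_000,
}

total = sum(costs.values())
flat_rate = total / (CONTRACT_YEARS * PAYMENTS_PER_YEAR)
print(f"Total cost over {CONTRACT_YEARS} years: £{total:,}")
print(f"Known quarterly payment: £{flat_rate:,.2f}")
```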



UPS & STANDBY POWER

Mitigating the risks

Alastair Morris, Chief Commercial Officer at Powerstar, considers the ‘energy trilemma’ and the necessity for a forward-thinking energy management strategy.

An uninterruptible power supply (UPS) is business-critical for data centres – a sector where even the slightest fluctuation in, or loss of, power can have serious operational, financial and reputational consequences. The issues are even more pressing in the current geopolitical climate, as we face the full impacts of an energy trilemma: balancing the competing demands of affordable, sustainable and secure power.

While the UK is in the top five, globally, on the World Energy Trilemma Index, the recent report flags up serious issues relating to the UK’s position on energy security and equity – reflecting concerns about the lack of energy storage capacity and the high cost of electricity. Clearly, this is a global problem, but there are obvious signs from the UK government that the trilemma is a pressing problem for the country. In late May, Business and Energy Secretary Kwasi Kwarteng wrote to the National Grid ESO, requesting that it explore options to boost energy security for this winter.

There is no easy solution to the current energy crisis. Cost and security of supply will be volatile for the foreseeable future. This is more complex than a quick switch to renewables, with a need for careful balancing – and the nature of the trilemma means there are competing agendas at play. Waiting for a stable and affordable energy supply from the Grid, one that meets the net zero demands of current legislation, is a risky option. Perhaps now, more than ever before, the climate demands that businesses – and data centres are a clear case in point – mitigate against geopolitical and financial risks by taking control of their energy supply.


New solutions

As power disruption is one of the most significant risks to data centre operations, the sector has traditionally relied upon the uninterruptible power supply (UPS) to protect critical equipment and prevent the loss of vital data in the event of disruption. These older UPS systems usually rely on lead-acid battery technology which, while covering the basic and critical requirement of preventing disruption to supply from affecting sensitive equipment, has significant drawbacks – impacting affordability and sustainability, as well as being potentially insecure. In the context of the energy trilemma, a traditional UPS looks increasingly untenable as a long-term solution.

Modern UPS technology combines a control system with sophisticated battery energy storage (BESS). This offers ultra-fast switching to connect the battery to the site supply in less than 10ms, protecting all equipment on site in the case of any power disruption.


Critically, and additionally, power management software offers the capability to forecast and manage multiple power loads, as well as any on-site power generation, to help achieve the cheapest, most sustainable and most stable electrical supply.

With energy costs spiralling, the storage element inherent in a BESS solution can offer significant benefits when factored into data centre energy management infrastructure – helping to address the affordability arm of the energy trilemma. Energy can be purchased when prices are at their lowest and retained, to be used when needed. Renewable firming offers the built-in capacity to store energy generated on-site, which can then be used when needed or sold back to the Grid. It is this opportunity for a new revenue stream that can be most compelling.

A BESS solution allows businesses to capitalise on the National Grid’s Demand Side Response (DSR) opportunities. Balancing services are used by the Grid to even out demand across the UK’s power supply and, as inflexible renewables become increasingly important within the country’s energy mix, balancing services show continued growth. Companies who have the storage capacity and the technology to engage in balancing services receive a guaranteed income if they can be flexible. This is one of the greatest assets of a BESS, which can rapidly draw down electricity from the Grid or release it back again, to be available at times of peak demand – meaning that the company can fulfil its contractual obligations and introduce a new revenue stream, a far more positive option than the sunk costs of an older UPS.

A balancing act

The issue of affordability is intertwined with that of sustainability, where traditional UPS solutions are concerned. By its very nature, a typical UPS consumes significant amounts of energy, as it constantly switches between AC and DC and remains powered up on standby, with losses of between 10 and 15%.



This compares to a BESS and smart microgrid solution, which has typical losses of around 1%. For a 1MW UPS system, that means wasted consumption of about £200,000 of electricity each year. With a BESS consuming approximately 95% less power, the cost savings are matched by the reduction in carbon emissions.

The reputational impact is evident worldwide, as highlighted by the United Nations Sustainable Development Goal strategy, which identifies net zero data centres as a cornerstone of clean energy infrastructure. Customers and the supply chain are ever more aware of the need to act on legally binding targets for net zero, and are making business decisions based on sustainability. Where data centres’ key client bases are increasingly focused on their own net zero strategies – with the NHS, public sector bodies, the financial sector and large corporates as obvious examples – being able to demonstrate clear steps towards decarbonisation in your own company will be reputationally critical.
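As a rough check of the loss figures quoted above, the arithmetic below brackets the article’s “about £200,000” estimate. The electricity tariff is an assumption made for the example, so treat the outputs as order-of-magnitude only.

```python
# Annual cost of UPS standby losses on a 1MW system.

load_kw = 1_000                  # 1MW UPS system
hours_per_year = 8_760
tariff_gbp_per_kwh = 0.20        # assumed unit price

for loss in (0.10, 0.15, 0.01):  # legacy UPS at 10-15%; BESS at ~1%
    wasted_kwh = load_kw * loss * hours_per_year
    cost = wasted_kwh * tariff_gbp_per_kwh
    print(f"{loss:.0%} losses: {wasted_kwh:,.0f} kWh/yr, ~£{cost:,.0f}/yr")
```

At the assumed tariff, 10-15% losses cost roughly £175,000-£263,000 a year, against around £18,000 for a ~1% BESS, which is the ~95% saving described above.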

Maintaining supply

The third arm of the energy trilemma – security of supply – is the most pressing and critical for data centres. As with the reputational issues highlighted above, any loss of data due to power disruption carries significant reputational and financial risk. A study by the British Chamber of Commerce found that 93% of businesses suffering data loss for more than 10 days file for bankruptcy within 12 months, while another recent report found that data loss costs UK companies an average of £71 for each individual record lost.

While a traditional UPS generally offers sufficient protection for individual pieces of equipment, there is no way to monitor the state of charge of a battery, meaning that without proper maintenance – an additional cost – there will always be some risk of failure. Factoring this issue of security of supply into data centre energy management makes the case for embracing new technology ever more compelling.

Future-proofing is critical to provide the resilience the data centre sector needs, as the world becomes increasingly digitalised and as the energy trilemma becomes more pressing. Fortunately, for forward-thinking companies, there is technology to help address each of the issues. Modern UPS and smart microgrid solutions can offer site-wide protection, can help investment in on-site power generation achieve a better return, and can even provide new revenue streams. Investment in behind-the-meter generation and energy storage helps reduce the impact of Grid fluctuations, making companies less vulnerable while bolstering brand reputation and demonstrating a clear commitment to working towards net zero.



UPS & STANDBY POWER

Evolving sustainable gensets

Pierre-Adrien Bel, Product Manager at Kohler, looks at the different options available to help reduce genset emissions.

Despite increases in efficiency, the power consumption of data centres is continuing its ever-upward path – and is estimated to be responsible for around 1% of worldwide electricity demand, both for powering servers and for cooling them. In particular, the surge in demand for cloud services is driving the growth of large, ‘hyperscale’ data centres.

Regardless of their size, all data centres need to ensure a continuous electrical supply, to avoid data losses and outages in service. The diesel generator, or genset, has been the primary equipment used to fill any short-term emergency gaps in grid power – but there is increasing pressure to reduce genset emissions, or to switch to alternative technologies.


The landscape for data centres

Legislation to tackle climate change is making a big impact on data centres, and the industry is responding with self-regulation initiatives. One example is the Climate Neutral Data Centre Pact, under which a group of cloud providers and data centre operators has submitted a proposal to the European Union to make data centres in Europe climate neutral by 2030. As well as greenhouse gases (GHG), there is also a drive to reduce other emissions that can be detrimental to health, such as nitrogen dioxide.

Hyperscale data centres need to deliver on the sustainability commitments made by their owners and the companies that use them. Between them, Amazon, Google, Microsoft, Facebook and Apple use more than 45TWh of electricity per year. These big five firms, known informally as GAFAM, have made strong public commitments on reducing their emissions, and need to hit these targets to avoid reputational damage and problems with legislators. At the same time, they are not willing to compromise on cost, performance, security or reliability in the drive towards net zero.

For example, Microsoft has committed to being ‘carbon negative’ by 2030, removing more carbon dioxide from the air than it emits. Similarly, Google has promised to power its data centres with carbon-free electricity by the same year. Amazon, Facebook and Apple all have broadly similar climate goals. These trends mean that more action on sustainability is needed from genset manufacturers.


The role of diesel gensets

Diesel generators are a proven solution for data centres, with low operating costs, high energy density and long-term reliability. They are easy to deploy and operate, and provide a highly available source of power. However, diesel gensets have one major disadvantage: their environmental performance. While gensets are only used infrequently, the data centre industry expects them to move towards cleaner operation and lower emissions.

New legislation, more taxes and tougher regulation are likely to make diesel steadily less attractive; for example, the UK has recently prohibited the use of low-tax ‘red diesel’ in data centre generators. Singapore has gone even further, imposing a moratorium on new data centres in 2019, partly due to environmental concerns, which has only recently been lifted. Microsoft estimates that diesel contributes less than 1% of its overall emissions, but it is still pushing to end its use of diesel gensets for emergency power by 2030. It is exploring options including biogas, natural gas and hydrogen, but it’s not yet clear whether it has any solutions that can scale sufficiently to replace diesel. Another option, followed by Facebook and other large players, is to locate data centres where the local power grid is proven to be robust, so there will be fewer outages to cover.

Optimising gensets

Diesel technology is constantly evolving. Genset vendors have invested massively in reducing emissions, and have made substantial improvements. They are also working towards digitalisation, using the Internet of Things (IoT) and remote monitoring for improved diagnostics.

In-cylinder technologies reduce the pollutants emitted by the diesel engine. For example, shifting from mechanical fuel systems to electronic fuel injection can significantly reduce the creation of pollutants. Computer-aided engineering tools and computational fluid dynamics have also enabled the modelling of engine behaviour to become more sophisticated. Together, these and other in-cylinder developments, such as exhaust gas recirculation, have allowed optimisation of the system to improve fuel consumption and reduce emissions.

Another major area of improvement has been the reduction in ‘wet stacking’. This is when unburned fuel builds up in the engine’s exhaust system, leading to excessive wear and damage.




This is usually addressed by burning off unused fuel every month, running generators at 30% of their rated capacity for at least 30 minutes – but this is an expensive waste of diesel and results in higher emissions, particularly for large data centres with multiple generators. Kohler has addressed wet stacking with highly optimised engine design, enabling operators to run their monthly generator tests with no load, only running a load test annually. This has a big impact, and can reduce overall generator emissions by up to 85% (see Figure 1).

As well as the engine itself, after-treatment systems are used to reduce emissions. These systems include diesel oxidation catalysts, diesel particle filters and selective catalytic reduction, to capture the different gases and particulates present in the engine’s output. New after-treatment technologies are continually improving the performance of these systems.

Figure 1: Loaded vs. no-load monthly exercise pollutant creation

The evolution of future technologies

Looking beyond diesel, biofuels can provide another way to reduce GHG emissions, by using renewable fuels where carbon has been captured from the atmosphere by plants, rather than fossil fuels. An attractive option is Hydrotreated Vegetable Oil (HVO), also known as renewable diesel. This is produced from waste and residual fat from the food industry, as well as from non-food-grade vegetable oils. HVO overcomes some of the problems typically associated with biofuels, such as instability and ageing when stored over long periods of time. HVO can be used as a direct replacement for regular diesel without engine modifications, reducing carbon emissions by up to 90%, and can also be mixed with diesel in any proportion.

Looking further ahead, lithium-ion batteries and fuel cells are two new technologies that are being widely considered for data centres, with many operators running trials. Fuel cells run on hydrogen, with water as the only waste product. If the hydrogen is made using renewable power – so-called ‘green hydrogen’ – then they provide a zero-emission emergency power solution. However, compared to diesel generators, batteries and fuel cells have disadvantages in terms of scalability and cost. For example, batteries that are big enough to use for back-up power are very expensive, and can have a significant environmental impact in their production and disposal. Fuel cells would require substantial investment in infrastructure to transport and store enough hydrogen; for instance, 100 tons of hydrogen would be required to power 30MW of IT equipment for 48 hours.

In the long term, these technologies hold great promise, but widespread deployment is likely to be at least 10 years in the future. With the climate crisis becoming a major challenge for us all, genset manufacturers have a responsibility to improve environmental performance. This includes optimising their existing diesel technology, and developing new solutions that can bridge the gap to fully renewable power. Diesel gensets are going to be around for many years, and there is a lot we can do in that time to reduce their environmental impact.
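As a rough sanity check on the hydrogen figure quoted above: hydrogen’s lower heating value is about 33.3kWh/kg, and the ~50% fuel-cell electrical efficiency used here is an assumption for the example.

```python
# Back-of-envelope hydrogen requirement for 48 hours of backup power.

it_load_kw = 30_000                   # 30MW of IT equipment
duration_h = 48
lhv_kwh_per_kg = 33.3                 # lower heating value of hydrogen
fuel_cell_efficiency = 0.50           # assumed electrical efficiency

energy_kwh = it_load_kw * duration_h                      # 1,440,000 kWh
h2_tonnes = energy_kwh / (lhv_kwh_per_kg * fuel_cell_efficiency) / 1_000
print(f"Hydrogen required: ~{h2_tonnes:,.0f} tonnes")     # ~86 tonnes
```

That lands at roughly 86 tonnes for the IT load alone, consistent with the ~100 tons quoted once cooling loads, storage losses and a reserve margin are added.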



NETWORKING

Network first

Alan Stewart Brown, VP of EMEA at Opengear, explains why the network needs to be at the heart of digital transformation.



Digital transformation is the process of making use of new technologies, across all areas of an organisation, to optimise operations and drive growth by delivering enhanced value to customers. Best thought of as continual adaptation to a constantly changing environment, it’s a journey that strives towards optimisation across processes, divisions and the business ecosystem in general.

Enterprises today are once again putting this approach at the centre of their organisational agendas. Fast-evolving customer expectations, escalating market pressures and overall organisational goals have fuelled digital transformation in every industry. It is estimated that by 2025, global spending on transformational efforts will reach $2.8 trillion.

If this process is to be successful, it is important that networks and networking sit at its very heart. Why is this? Well, digital transformation helps to forge links between organisations, their technology and customers. These new systems will, in turn, allow businesses to build a bridge to the future by establishing new networks and ecosystems that will result in new business models to achieve future growth. This new digital world will revolve around data, actionable intelligence and, most importantly, connectivity.



A key connection

Network connectivity is critical throughout digital transformation. Any type of network disruption will render the applications that rely on transformation useless, affecting the organisation and its customers. During a digital transformation roll-out, it’s critical to have always-on network access. The continuous addition of new applications places a strain on the network, increasing the likelihood of an outage. How to access critical applications when a disruption occurs should be one of the main considerations during the planning phase of digital transformation, and it is certainly one of the major reasons why the network needs to be core to this planning.

When it comes to network outages, it’s never a matter of if one will occur; it’s more a matter of when, and how long it’ll take to recover. A focus on network resilience, backed up by a NetOps approach founded on network automation and technology that delivers Smart Out-of-Band and Failover to Cellular capability, offers a way forward here. This combination of technology and processes helps to ensure that organisations have the connectivity needed throughout their digital transformation journey. By providing access to network devices through an independent management plane, the IT team always has a full view of the network remotely, and will always be able to access infrastructure at every site.

Mitigating the security threat

A focus on networking throughout the digital transformation process will also help organisations enhance their security. Network engineers and CIOs agree that cybersecurity issues represent the biggest risk for organisations that fail to put networks at the heart of digital transformation plans. According to research commissioned by Opengear, 53% of network engineers and 52% of CIOs polled across the US, UK, France, Germany and Australia rank cybersecurity among their biggest risks.

The concerns are fuelled by an escalating number of cyberattacks. In fact, 61% of CIOs report an increase in cybersecurity attacks and breaches in 2020-21 compared with the preceding two years. For the digital transformation of networking, 70% of network engineers say security is the most important focus area, and 31% say network security is their biggest networking priority. CIOs also understand the importance of the issues. More than half (51%) of network engineers say their CIOs have consulted them on investments to deliver digital transformation plans, the highest priority in the survey. What’s more, 41% of CIOs rank cybersecurity among their organisation’s most important investment priorities over the next year, with 35% stating it is among the biggest over the next five years. In both cases, cybersecurity ranks higher than any other factor.

Through the pandemic, we have seen the importance of cybersecurity skyrocket for businesses, as employees switch to working remotely and cyber-criminals ramp up their activity. Forward-thinking businesses understand these challenges and the importance of investing more in security and ensuring it is woven more closely into the fabric of their networks and digital transformation efforts.

Coming together to deliver networking transformation

The above points help to clearly illustrate why networks and networking are critical to any digital transformation – and also help explain how they can facilitate that transformation for any organisation today. But there are nevertheless a host of potential pitfalls along the way. To ensure that they embed networks into their process of digitalisation, it is important that organisations make certain that network engineers have a say in how transformation is carried out.

That’s never easy. From the IT aspect, getting CIOs, C-suite decision-makers and network engineers to engage closely is not always simple, because the groups face a wide range of differing pain points. CIOs are responsible for creating the overarching vision that drives transformation projects, and engineers are tasked with identifying how to successfully execute them. These differing priorities have created significant disconnects which must be addressed to identify any gaps, so that goals on the transformation roadmap can be reached.

Collaboration between the two groups has sometimes been challenging. In the Opengear survey, just 12% of network engineers reported having a significant amount of involvement in influencing the strategic focus of their organisation’s digital transformation. This highlights an opportunity for CIOs to involve network engineers more closely in strategic decision-making. As the network team oversees the entire network – the critical foundation for every digital transformation project – having their input has become increasingly important to CIOs. The two groups need to collaborate more closely. That’s starting to happen now, with over three-quarters of CIOs (78%) saying they have made more use of networking and IT teams for higher-value tasks over the past two years.

How engineers are driving network transformation forward

Engineers are ‘hands on’ with the network, managing it every day. They understand the importance of driving network automation, leveraging tools like NetOps, Ansible and Python, and how, even with modern network advancements, challenges remain in these areas. Leveraging the principles of NetOps, they can provision new systems and infrastructure remotely, monitor and manage the smooth operation of the solution from afar, and remediate any problems when they do occur with a virtual hands solution.

This is essentially a core capability of network engineers – and that is why it is so important that CIOs bring them into the fold and tap into their expertise, to ensure digital transformation is networking-driven and there is a focus on reducing downtime and bringing costs under control. Get all that right, and they will see a successful business transformation, powered by a more secure and resilient network, for today and long into the future.
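In the spirit of the NetOps and Python tooling named above, here is a minimal sketch of the kind of automation involved: poll the primary management path for each site and flag sites that should be reached via out-of-band or cellular failover instead. The check is a plain TCP connect, and the site addresses are invented for the example (192.0.2.0/24 is a reserved documentation range).

```python
# Poll primary management reachability and flag out-of-band candidates.

import socket

SITES = {"london-edge": ("192.0.2.10", 22), "leeds-edge": ("192.0.2.20", 22)}

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # covers refused connections and timeouts
        return False

for site, (host, port) in SITES.items():
    if reachable(host, port):
        print(f"{site}: primary management path up")
    else:
        # In a real deployment this would switch to the out-of-band
        # console or cellular path rather than just logging.
        print(f"{site}: unreachable - escalate via out-of-band access")
```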







INDUSTRY INSIGHT

Industry Insight: The gathering storm

Developing and executing an effective sustainability strategy has become a top-tier imperative for operators of digital infrastructure, says Andy Lawrence, Executive Director of Research at Uptime Institute.

Nearly 15 years ago – in 2008 – Uptime Institute presented a paper titled The Gathering Storm. The paper was about the inevitability of a big struggle against climate change, and how that might play out for the power-hungry data centre and IT sector. The title echoed Winston Churchill’s book describing the lead-up to World War II. The presentation was prescient, discussing the key role of certain technologies such as virtualisation and advanced cooling, an increase in data centre power use, the possibility of power shortages in some cities, and growing legislative and stakeholder pressure to reduce emissions.

But seen from today, in 2022, one fact stands out: the storm is still gathering, and, for the most part, the industry is as yet unprepared for its intensity and duration. This may be true both literally – a lot of digital infrastructure is at risk, not just from storms, but from gradual climate changes – and metaphorically. In the next decade, demands for operators to be ever more sustainable will rain down from legislators, planning authorities, investors, partners, suppliers, customers and the public. And increasingly, many of these will expect to see verified data to support claims of greenness, and for organisations to be held to account for any false or misleading statements.

Incoming legislation

If this sounds unlikely, then look at two different reporting or legal initiatives, both in fairly advanced stages. The first is the Task Force on Climate-related Financial Disclosures, known as TCFD, a climate reporting initiative created by the international Financial Stability Board. TCFD reporting requirements will soon become part of financial reporting for public companies in the US, UK, Europe, and at least four other jurisdictions in Asia and South America.

Reports must include all financial risks associated with mitigating and adapting to climate change. In the digital infrastructure area, this will include remediating infrastructure risks (including, for example, protecting against floods, reduced availability of water for cooling, or the need to invest to deal with higher temperatures), risks to equipment or service providers, and, critically, any potential exposure to financial or legal risks resulting from a failure to meet stated – and often ambitious – carbon goals.

A second initiative is the European Energy Efficiency Directive (EED) recast, set to be passed into European Union law in 2022 and to be enacted by member states by 2024 (for the 2023 reporting year). As currently drafted, this will mandate that all organisations with more than approximately 100kW of IT load in a data centre must report their data centre energy use, data traffic, storage, efficiency improvements and other data, and perform and publicly report periodic energy audits. Failure to show improvement may result in fines.




CHECK OUT OUR FREE WHITEPAPERS

To download the free whitepapers, visit: www.datacentrereview.com/white-paper




While many US states, and of course many other countries, may lag far behind in such reporting, the storm is global, and TCFD- and EED-type legislation is likely to spread around the world.

Taking sustainability seriously

As data from an Uptime Institute survey shows, most owners and operators of data centres and digital infrastructure have some way to go before they are ready to track and report such data – let alone demonstrate the kind of measured improvements that will be needed. The standout number is that only about a third even calculate carbon emissions for reporting purposes.

For all these reasons, all organisations have, or should have, a sustainability strategy to achieve continuous, measurable and meaningful improvement in the operational efficiency and environmental performance of their digital infrastructure (this includes enterprise data centres, and IT that sits in colocation and public cloud data centres). Companies without a sustainability strategy should take immediate action to develop a plan if they are to meet the expectations or requirements of their authorities, as well as their own investors, executives and customers.

Developing an effective sustainability strategy is not a simple reporting or box-ticking exercise, nor a market-led flag-waving initiative. It is a detailed, comprehensive playbook that requires executive management commitment and the operational funding, capital and personnel resources necessary to execute the plan.

Creating a sustainability strategy

For a digital sustainability strategy to be effective, there must be cross-disciplinary collaboration, with the data centre facilities (owned, colocation and cloud) and IT operations teams working together, along with other departments such as procurement, finance and sustainability. Uptime Institute has identified seven areas that a comprehensive sustainability strategy must address: greenhouse gas emissions; energy use (conservation, efficiency and reduction); renewable energy use; IT equipment efficiency; water use (conservation and efficiency); facility siting and construction; and disposal or recycling of waste and end-of-life equipment.

Effective metrics and reporting relating to these areas are critical. Metrics to track sustainability and key performance indicators must be identified, and data collection and analysis systems put in place. Defined projects to improve operational metrics, with sufficient funding, should be planned and undertaken.
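As a simple illustration of what such a metrics pipeline computes, the sketch below derives two widely used figures – power usage effectiveness (PUE) and location-based carbon emissions – from metered energy readings. The readings and the grid carbon-intensity factor are illustrative placeholders, not Uptime Institute figures.

```python
# A minimal sketch of sustainability metrics from metered energy data.
# All numbers below are illustrative placeholders, not survey data.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

def location_based_emissions(total_facility_kwh: float,
                             grid_kg_co2e_per_kwh: float) -> float:
    """Location-based Scope 2 emissions: energy x grid carbon intensity."""
    return total_facility_kwh * grid_kg_co2e_per_kwh

# Hypothetical monthly meter readings.
facility_kwh = 1_450_000.0   # everything behind the utility meter
it_kwh = 1_000_000.0         # energy delivered to IT equipment
grid_factor = 0.20           # kg CO2e per kWh; varies by grid and year

print(f"PUE: {pue(facility_kwh, it_kwh):.2f}")  # 1.45
print(f"Emissions: {location_based_emissions(facility_kwh, grid_factor):,.0f} kg CO2e")  # 290,000
```

In practice, these inputs would come from metering systems and published grid-intensity data, and would be reported period by period to demonstrate the measured improvement that regulators and investors will expect.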

Many executives and managers have yet to appreciate the technical, organisational and administrative/political challenges that implementing good sustainability strategies will likely entail. Selecting and assessing the viability of technology-based projects is always difficult, and will involve forward-looking calculations of costs, energy and carbon risks. For example, buying renewable energy can be both financially risky and complicated, requiring specialist expertise; and extensive negotiations will be needed with supply chain partners and regulators to agree on what and how to report.

For all operators of digital infrastructure, the first big challenge is to acknowledge that sustainability has now joined resiliency as a top-tier imperative.



PRODUCTS

Unmanaged Ethernet switches reinvented

Phoenix Contact’s new generation of unmanaged Ethernet switches from the FL Switch 1000 series offer narrower housing, greater port density, and best-in-class automation protocol traffic prioritisation. The first five variants of the refreshed portfolio are exclusively copper ports, with fibre optic models to follow later this year. Both gigabit and fast Ethernet models will be included in the initial roll-out, enabling applications of different bandwidths in a variety of industries. With the help of a unique mounting accessory, the FL Switch 1000 series can also be mounted flat on the DIN rail, enabling use in small or flat cabinets with little space. Thanks to Energy-Efficient Ethernet, these switches also feature reduced power consumption, which lessens heat and the overall footprint of the device. info@phoenixcontact.co.uk • www.phoenixcontact.co.uk

Nitrado chooses G-Core Labs to scale its IT infrastructure internationally

G-Core Labs’ services have allowed Nitrado to scale its computing power, enabling the rapid development of its international products and the opening of data centres in new regions, including Brazil and Japan. The collaboration with G-Core Labs has provided Nitrado with the necessary infrastructure and high computing power all over the world. The company needs these resources to develop its game server hosting services and to expand into new regions. This technological partnership has already enabled the company to launch several new locations, including São Paulo, Brazil and Tokyo, Japan. In these regions, G-Core Labs provides Nitrado with hosting, IP transit and logistics services. Nitrado clients use game servers located in secure Tier IV and Tier III data centres around the world, in cities including Frankfurt, New York, London, Singapore and Sydney. info@gcore.lu • www.gcorelabs.com

Eaton’s xModular delivers optimised data centre power infrastructure

Eaton has introduced its xModular system, the latest addition to its critical systems portfolio, which brings integration and a digital dimension to the design, deployment and operation of data centre type facilities. As well as optimising the power aspect (grey space), xModular can be configured to provide ample space for the IT compute equipment (white space) too. This space is designed in harmony with the electrical, cooling, controls and security requirements, providing an all-in-one system. A key element of xModular is the inclusion of the Eaton EnergyAware UPS. This grid-interactive UPS means that the electrical system of an xModular data centre unit can act as a Distributed Energy Resource (DER) and provide essential services back to the grid operator, thus accelerating and de-risking the adoption of renewables. Field services and aftercare ensure the whole life cycle is managed with the client. With over 2,000 service engineers and countless data centre projects delivered globally, clients can expect world-class technology backed by world-class support. www.eaton.com




Uptime Institute approves range of innovative Tier III-ready data centre designs

A range of innovative Tier III-ready data centre designs has now been fully approved by the Uptime Institute. The team at Cannon Technologies, working in conjunction with several OEM manufacturers, including Centiel UK, has spent more than three years developing a pre-certified set of solutions, now available in ratings of 100kW, 250kW, 500kW and 1MW.

Mark Awdas, Engineering Director, Cannon Technologies, said, “The first 100kW data centre solution using a 2N design configuration was completed and approved in 2019, and since then we have worked closely with our industry partners to make various Tier III data centre designs available for facilities requiring different sized systems. This award now provides our customers with fact-based evidence that their final build will more easily obtain their final Tier Certification.

“We now have four Tier III-ready modular DC designs described as ‘concurrently maintainable’, which ensures that any component can be taken out of service without affecting production.”

Centiel’s leading three-phase, true modular UPS, CumulusPower, and its technical support were chosen to be incorporated into the designs. The technology offers the highest levels of resilience, is flexible and robust, and has been tried and tested in many scenarios. sales@centiel.co.uk • www.centiel.co.uk

Automated data centre inventory tracking with custom UHF RFID and NFC labels

A worldwide ICT company needed to automate how its servers were tracked and managed. With thousands of high-value ICT assets in play, the ability to report without error on real-time asset whereabouts proved essential for both commercial success and compliance. In addition, the company was looking for ways to improve the speed and accuracy of cable maintenance interventions.

Brady Corporation suggested the solution: automated, real-time asset tracking with passive, custom on-metal UHF RFID and NFC labels. Relevant asset locations, time-stamps and other data are available in real time at the click of a button. Staff no longer have to count assets manually and can assess a site’s entire ICT inventory in a couple of hours, instead of weeks. The data also enables the company to prevent errors in asset movement through automatic alerts generated via the supporting software. This increases overall efficiency and decreases labour costs. Additionally, compliance with various regulations worldwide is easier when the whereabouts of the entire ICT inventory are available almost immediately in a central location. csuk@bradycorp.com • www.brady.co.uk

AKSA DCC Generators power Cyberfort Group’s The Bunker Facility

At AKSA Power Generation, we are very proud to be the selected manufacturer for the power needs of Cyberfort Group’s The Bunker data centre. Cyberfort Group exists to provide its clients with peace of mind about the security of their data and the compliance of their businesses. AKSA Power Generation became an important part of this data centre, providing 700kVA generators.

Michael Watts, Director of Infrastructure & Technology at Cyberfort Group, said, “These units are taking pride of place as a key part of our power infrastructure, assisting in securing the continuity of our services and adding layers of resilience to the power infrastructure at our Ash (Kent) facility, The Bunker. These units were built to a custom specification and design that meet our needs and exceed our expectations.”

AKSA Power Generation offers 65 models in its DCC product range, with products from 550 to 3,000kVA suitable for the Tier III and Tier IV standards set by the Uptime Institute. Regardless of the power rating or complexity of your data centre investment’s power needs, we are able to provide a reliable power source. We manufacture and combine all the important components using the industry’s highest level of design and performance control. sales@aksaeurope.com • www.aksaeurope.com

Q3 2022 www.datacentrereview.com 43


