DCR June 2020



AI & Automation

Green IT & Sustainability

How data centres can exploit artificial intelligence to become more efficient, sustainable and robust.

With our increasing internet usage now impacting the environment, we look to ways we can utilise the web more sustainably.



Industry Insight Michael Adams of Datacentre Speak talks industry issues, the holy grail of sustainability and how to attract new talent.


News

04. Editor's Comment
Business is booming.


06. News London techies set to be winners in the post-Covid job market.

10. DCR World Global goings on.

Features

14. AI & Automation
Jim Chappell of AVEVA shares how data centres can exploit artificial intelligence to become more efficient, sustainable and robust.

16. Cooling Chris Wellfair of Secure I.T. Environments outlines how the phasing out of R407a gas, with future supply and rising cost issues, may mean now is the time to switch to a liquid cooling system.

18. Colocation & Outsourcing


Since the corona chaos, organisations across the world have reaped the benefits of colocation. Here, Pat Morley of Sungard AS highlights some key colo questions.


22. Green IT & Sustainability With our online consumption at an all-time high, Lars Larsson of Varnish Software explores the environmental impact of our actions and how we can look to utilise the web in a more sustainable way.

26. Virtualisation, Edge & Cloud Focusing on the dull to power the exciting, Jonathan Bridges of Exponential-e takes a closer look at the core, the cloud and the edge and explains why going back to basics is key to futureproofing our nation.


Regulars

34. Industry Insight
Michael Adams of Datacentre Speak discusses issues in the industry, the holy grail of sustainability, how we can encourage new talent, and shares with us his biggest pet peeves.


36. Products EcoDataCenter first to deploy chassis-level immersion liquid cooling solution from Iceotope, Schneider Electric and Avnet.

38. Final Say The new decade is well and truly underway, but what does it mean for your data centre network? Steve Brar of Aruba takes a look at what’s to come.



Editor's Comment

What a difference a couple of months makes, eh? It would appear that quite literally everything I discussed in my last editor's comment either never happened, or is no longer happening.

Although it still went ahead for whatever reason, due to the threat of Covid-19, I and many others were unable to attend the Data Centre World event at the London ExCeL, which was a major blow for me editorially, as well as for the wider industry. I certainly never expected the ExCeL to be acting as a temporary hospital mere weeks later. At the time I did wonder if I was simply erring on the side of caution, or being somewhat overdramatic, but as time went on it quickly became clear that not attending was the safest and most sensible decision we could have made.

And then there are our wonderful awards. Our ER & DCR Excellence Awards were scheduled to take place on May 21 and it's such a shame the party has been pooped, but it is of course entirely necessary. We had initially rescheduled to September 24; however, we will still be watching this space, and if things are not safe, we can assure you we will not be pushing ahead. The wellbeing of our attendees remains our priority and we will continue to keep a close eye on the situation.

The good news is, the data centre industry is absolutely booming. With many people across the globe suddenly working remotely, we have never needed data centres more. The ability to connect and communicate remotely is what is keeping us all going through this time. Whether it's working, keeping in touch with friends and family, keeping fit or simply binge-watching Netflix, there is absolutely no question the internet is now an essential service.

But since we can't go very far right now, here at Data Centre (and Electrical) Review, we have decided to come to you. Remotely, of course. We have recently launched a new video interview series, wherein our clients can discuss whatever their hearts desire (within reason). These video interviews are only around 5-10 minutes in length, as I wouldn't dream of subjecting our audience to my face for any longer than that, and can be done from the comfort of your own home or office. I will be working closely with those involved to come up with a series of questions: I ask, you answer, and we do the rest. If you'd like to get involved, please drop myself and Amanda McCreddie a line; our details are over there on the right.

Anyway, I hope everyone is still behaving themselves and not licking handrails with wild abandon or anything of that nature. Just think: when all this is over, we can properly enjoy our summer, and that first fruity beverage in the beer garden will have never tasted so sweet.

Claire Fletcher
Editor


Claire Fletcher clairef@datacentrereview.com


Alex Gold alexg@sjpbusinessmedia.com


Sunny Nehru +44 (0) 207 062 2539 sunnyn@sjpbusinessmedia.com


Amanda McCreddie +44 (0) 207 062 2528 Amanda@electricalreview.co.uk


Wayne Darroch
Printing by Buxton
Paid subscription enquiries: subscriptions@electricalreview.co.uk
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262
Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher's discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register

Electrical Review is published by

2nd Floor, 123 Cannon Street, London EC4N 5AU
0207 062 2526
Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher, SJP Business Media.
ISSN 0013-4384 – All editorial contents © SJP Business Media

Average net circulation Jan-Dec 2018 6,501

Follow us on Twitter @DCRmagazine

Join us on LinkedIn

4 www.datacentrereview.com June 2020




The latest highlights from all corners of the tech industry.

A study from Pluralsight has revealed that although technology leaders understand the importance of continuous learning and upskilling, fundamental differences exist that can hinder skills development, and thus, company growth. The study found that although most companies provide opportunities for employees to develop technology skills, a large percentage of programmes don’t meet employee needs.

VMware announces intent to acquire Kubernetes security platform Octarine

Keeping remote workers secure during Covid-19

According to a poll from Centrify, over 70% of British businesses are using multi-factor authentication (MFA) and a virtual private network (VPN) to manage the security risks posed by the increase in remote working during the coronavirus pandemic. Despite recognising a need for more cybersecurity vigilance in the face of increased opportunistic threats, the poll also revealed that 43% of individuals believe the increased cybersecurity protocols for remote workers will have a negative impact on workplace productivity. Similarly, almost half of individuals (49%) would prefer to remove extra authentication steps for basic apps and data in the workplace, as they feel these add unnecessary time to procedures. As a potential alternative or middle ground, 60% of business decision makers support biometric data – such as fingerprint or facial recognition identification factors supported by the FIDO2 specification for passwordless authentication – as a suitable replacement for more time-intensive multi-factor authentication to increase productivity. Furthermore, two-thirds (66%) agree that they would feel more secure using fingerprint or facial recognition ID as opposed to a traditional password.



UK cyber resilience lagging

Hackers are having more success in the UK than in other countries, according to research from Accenture, which has revealed that UK organisations need more consistent cyber defences. Almost one fifth of attempted targeted cyberattacks in the UK successfully breach security, compared with just over a tenth as the global average.

VMware has announced that it intends to acquire Octarine, the developer of a security platform for Kubernetes applications that helps simplify DevSecOps while enabling cloud native environments to be intrinsically secure, from development through runtime. The acquisition was announced at Connect 2020, which was held virtually due to the Covid-19 pandemic. At the event, VMware confirmed that it would be integrating Octarine's technology into the Carbon Black Cloud, providing new security features for containerised applications running in Kubernetes, and enabling security capabilities as part of the fabric of existing IT and DevOps ecosystems.



Over three quarters of global consumers (77%) believe that data held digitally about them should be their own property – not a company's asset. This is according to the Global Consumer State of Mind Report, produced by Truata. The findings coincide with the two-year anniversary of the introduction of the General Data Protection Regulation (GDPR) in Europe. However, consumers are beginning to question the effectiveness of such regulations, as six in 10 (59%) think most companies don't believe in the importance of data protection, and simply see it as a tick-box exercise to comply with data protection rules.

London techies likely to be the winners in the jobs market


Job advertisements on direct employers' career portals dived by just over two thirds (67%) in April compared with the beginning of the year. This is according to new analysis from the Association of Professional Staffing Companies (APSCo), the trade association for the recruitment sector. The APSCo London Covid-19 Vacancy Tracker uses data provided by business intelligence specialist Vacancysoft.

According to the figures, there were 12,683 professional vacancies in London with salaries of £40k or more in January. By April, this had dropped to 4,161. When analysing vacancies by sector, it's clear that technology is proving to be one of the most resilient. While areas like real estate and construction have seen recruitment collapse, tech companies have continued to hire. In fact, during April, technology roles accounted for over a third (35%) of all vacancies in London. The top five SMEs recruiting in the capital in April were all tech-focused, across sectors such as AI, fintech and gaming.

A circular data centre approach could slash CO2 and TCO by 25%

A new report from ITRenew has revealed a new model to help the global data centre industry unlock the full economic value of data centre hardware while optimising sustainability. The report, titled 'The financial and sustainability case for circularity', is built from extensive market research, expert interviews, and proprietary data sets derived from collaboration with the world's leading hyperscalers. ITRenew conducted a full lifecycle analysis for rack-scale open hardware solutions, inclusive of both operational and embodied carbon impact. For the first time, this comprehensive approach makes it possible to accurately calculate the aggregate lifetime-value potential of data centre equipment, and to quantify the financial and environmental impact of business decisions throughout the manufacturing, primary use, re-use and post-use phases of that technology. The result is a detailed circular data centre model that businesses can utilise now to significantly improve upon the wasteful and financially disadvantageous deploy-and-dispose behaviour pervasive in data centres today. Conservative estimates show that if the global IT hardware industry adopts this approach wholesale, there is the potential to cut the total cost of ownership (TCO) of data centre hardware by 24% to 31% and slash the greenhouse gas impact of the data centre sector by nearly 25%.
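As a rough illustration of the savings range quoted above, the arithmetic can be sketched in a few lines of Python. The baseline figure here is invented for illustration; it is not drawn from the ITRenew report.

```python
def circular_savings(baseline_tco, savings_fraction):
    """Apply a circular-economy savings fraction to a baseline hardware TCO."""
    return baseline_tco * (1 - savings_fraction)

# Hypothetical baseline: $10m lifetime hardware TCO for a facility
baseline = 10_000_000
best_case = circular_savings(baseline, 0.31)   # 31% reduction
worst_case = circular_savings(baseline, 0.24)  # 24% reduction
print(f"Projected TCO range: ${best_case:,.0f} to ${worst_case:,.0f}")
```

On those assumed numbers, the circular model would leave between $6.9m and $7.6m of lifetime cost, a saving of $2.4m to $3.1m against deploy-and-dispose.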

Providing solutions tailored to the needs of your business

Levant currently has hired UPS systems installed across a variety of sectors, such as banking, healthcare and data management. Our UPS hire options are tailored to the needs of your business, and we can have hire UPS systems installed on site in minimum time. We hold in stock a wide range of UPS hire systems from a variety of high-end UPS manufacturers. The systems we have available for UPS hire are fully serviced and periodically tested to ensure they are ready to run at short notice. UPS hire is a growing market, as it affords businesses the option of covering peak times such as public holidays or large sporting events without the huge financial commitment of purchasing a brand new UPS system. This gives sites a greater deal of flexibility, allowing for adaptable power protection in line with the demands of your business. We will maintain the system and provide a 24-hour call-out response for the entire duration of the UPS hire. Get in touch with us to find out about the range of leading-brand, high-quality units that we offer for immediate UPS hire.

UPS Hire

Containerised and free-standing UPS systems available to hire, with rapid response and UK-wide coverage
Short-term and long-term options available, with or without batteries
Contact us today for more information or a quote

Telephone: 0844 381 4711

Email: info@levantups.co.uk

World Who’s doing what and where they’re doing it – Global news from the data centre world.

Amazon Web Services
Italy

Amazon Web Services (AWS) has now opened the AWS Europe (Milan) Region. With this launch, AWS now spans 76 Availability Zones within 24 geographic regions around the world, and has announced plans for nine more Availability Zones and three more AWS Regions in Indonesia, Japan, and Spain. The AWS Europe (Milan) Region is the sixth AWS Region in Europe, alongside Dublin, Frankfurt, London, Paris, and Stockholm. Developers, startups, and enterprises, as well as government, education, and non-profit organisations, can now run their applications and serve end users from data centres located in Italy, as well as leverage advanced AWS technologies.

Equinix
Japan

Equinix has announced the signing of a greater than US$1.0 billion initial joint venture, in the form of a limited liability partnership with GIC, Singapore's sovereign wealth fund, to develop and operate xScale data centres in Japan. The three initial facilities in the joint venture – one in Osaka and two in Tokyo – will serve the unique core workload deployment needs of a targeted group of hyperscale companies, including the world's largest cloud service providers.

Raxio Data Centre
Uganda

G-Core Labs
Chile

G-Core Labs, the international provider of cloud and edge solutions, has expanded its presence in South America by opening a new point of presence of its content delivery network in Santiago, Chile's capital. The location shows an average response time of 24ms (according to Citrix, the independent analytical system), successfully competing with local and American CDN service providers.


Raxio Data Centre has unveiled nine local fibre carriers ahead of launch. Uganda's digital economy is set to get a major boost with the latest move by Raxio Data Centre, which has opened the market for customers to have more options for redundancy and diversity in a competitive environment. The nine local fibre carriers are: Africell Uganda Limited, Airtel Uganda, Bandwidth and Cloud Services Group (BCS Group), Csquared, Liquid Telecom, MTN Uganda, National Information Technology Authority-Uganda (NITA-U), Roke Telkom and Uganda Telecom Limited (UTL).

G-Core Labs
Hong Kong

G-Core Labs has expanded the Asian segment of its global network infrastructure, opening its first point of presence in China. The PoP is located in Hong Kong, the region's economic centre, offering customers secure dedicated and virtual servers, as well as services for rapid content delivery. G-Core Labs servers are located in a certified Tier III class data centre. The company provides 5 TB of traffic free of charge for each dedicated server.

Microsoft
New Zealand

Microsoft has announced plans for a brand-new data centre region in New Zealand, adding to an already impressive roster of regions. Azure customers currently have access to 60 regions worldwide, making the service available in 140 countries. With the development of this new data centre region, Microsoft aims to fuel new growth that will accelerate digital transformation opportunities across New Zealand. The company will also continue its investments in new solutions that support both New Zealand's and Microsoft's sustainability goals. In addition, Microsoft will be adding support for educational skilling programmes to increase future employability opportunities for the people of New Zealand.



Making the right choice In this Q&A, the experts at EnerSys provide us with 10 key factors worth considering when choosing a data centre UPS battery.

Data centre operators naturally want the best class of available technology, including related batteries for their UPS systems. Absolute reliability should be combined with energy efficiency, long operating life and attractive total cost of ownership. Currently, the two mainstream UPS battery technologies are lead-acid and lithium-ion (Li-ion). Each has variants; Thin Plate Pure Lead (TPPL) technology, for example, is an advanced, high-performance lead-acid technology available from EnerSys. This article introduces the strengths and challenges of each of these technologies to potential users. Consultation with a major manufacturer such as EnerSys is then recommended, as they can advise on the optimum balance of performance and economy for an application.


1. Safe and reliable technology
Q: How well proven are the most common and widely used lead-acid and Li-ion battery technologies, in terms of safety and reliability?
A: Lead-acid, as a long-established energy storage technology, is well proven in data centre applications. Many lead-acid technologies are available, and performance is well understood, which assists battery technology selection. Li-ion batteries with a built-in Battery Management System (BMS) can be considered a safe technology, used in everyday life for numerous applications, but there is limited Li-ion performance data for data centres. Li-ion requires a BMS as a redundant safety design to prevent deep discharge or overcharging. Precautions, including cell selection and overall battery design, are essential for a safe Li-ion battery solution. However, the BMS itself introduces increased complexity to the total system design, with more components.

2. Capital costs
Q: Is capital cost still a factor in deciding between Li-ion and lead-acid?
A: Despite historical cost reductions, Li-ion pricing remains a barrier for many users. Price depends on many factors, including supplier, quality, purchase volumes and the exact chemistry used.

3. Fast charge acceptance
Q: Does Li-ion technology offer a significant fast charging advantage?
A: Li-ion does have high charge acceptance and fast charge capability, but this can also be associated with the need for a larger, and more expensive, charger. Meanwhile, advanced TPPL battery technology reduces data centre vulnerability to multiple mains blackouts through very short recharge times and time to repeat duty. For example, with a 0.4C10 charging current using fast charge methodology, TPPL can be fully recharged, following a one-minute discharge to 1.6 Vpc, in 2½ hours, and be ready to repeat duty in 22 minutes.

4. Maintenance requirements
Q: How can we compare Li-ion and lead-acid battery maintenance requirements?
A: Li-ion maintenance requirements are virtually zero, due to built-in self-diagnostics that warn of most problems. However, Valve Regulated Lead-Acid (VRLA) batteries are also low maintenance, with no water topping-up required. Battery Monitoring Systems can be used to further assist maintenance. TPPL is a high-quality, proven battery technology that can provide long service life and low downtime within data centres.

5. Service life
Q: How do Li-ion and lead-acid battery technologies compare on service life?
A: In UPS applications, calendar life becomes the governing factor, as the batteries are mostly operated in stand-by mode with low cyclic duty. Service life depends on the lithium chemistry and technology used, together with the battery's quality at both cell and system level. Typically, Li-ion cells are claimed to have a 15 to 20-year design life at 25°C. Alternatively, batteries with TPPL technology can achieve a 12+ year design life, with eight to 10 years in service demonstrated. This compares well with the standard Absorbent Glass Mat (AGM) benchmark of five to six years' service life in UPS applications.

6. Size and weight
Q: Does Li-ion offer size and weight reduction compared with lead-acid?
A: Yes, by around 50-70%. However, Li-ion's reduced weight is not important in data centres. Its reduced footprint and floorspace can be advantageous though, especially in colocation facilities. Alternatively, TPPL battery technology is available as a high energy density solution designed by EnerSys, which offers advantages over standard lead-acid batteries.

7. Transportation restrictions
Q: Do the lead-acid and Li-ion battery technologies carry transportation considerations?
A: VRLA batteries, including TPPL, are classified as non-spillable and approved as non-hazardous cargo for ground, sea and air transportation. Li-ion, however, is subject to heavier legislative shipping restrictions – Class UN3480 – and can only be transported on dedicated cargo airlines.

8. Battery recycling
Q: What are the recycling possibilities for lead-acid and Li-ion technologies?
A: Although Li-ion is 100% recyclable in terms of the components and materials used, recycling or disposal can be costly for end-users. Lead-acid batteries contain lead and other metals, acids, and plastics. On aggregate these are about 95% recyclable, and therefore these battery types have inherent end-of-life value.

9. Changing from lead-acid to Li-ion
Q: How easy is it to reconfigure a UPS to use Li-ion instead of lead-acid batteries?
A: While both lead-acid and Li-ion batteries typically use constant voltage chargers, their charging characteristics are different. Lead-acid batteries operate on float charge – the usual practice in a data centre. Li-ion types, however, are not maintained and charged at full state of charge. Lead-acid and Li-ion batteries also exhibit differences in charging voltages. Accordingly, changing from lead-acid to Li-ion would incur a change in charging architecture, and associated costs. By contrast, upgrading from standard VRLA batteries to TPPL types will yield significantly improved performance without the need to invest in new charging equipment.

10. Summing up
Q: Can we summarise the comparative positions of lead-acid and Li-ion technologies?
A: Li-ion is of increasing interest to data centre operators, yet its high capital cost remains a barrier. Nevertheless, the TCO analysis could change as autonomy demands change and manufacturing costs decrease. As a well-proven technology, lead-acid will remain popular in UPS installations. Additionally, users looking to employ Li-ion technology can discuss TPPL alternatives with EnerSys. These include the DataSafe HX+ and DataSafe XE battery ranges from EnerSys. In addition, the PowerSafe SBS EON Technology battery range is suitable where cyclic duty is required, such as for grid support and peak shaving applications. To learn more about UPS batteries for data centre environments, please visit: www.enersysdatacentres.com



Is AI the key to going green? Jim Chappell, global head of AI and Advanced Analytics at AVEVA, shares how data centres can exploit artificial intelligence to become more efficient, sustainable and robust.



All companies, large and small, are under increasing pressure from investors and other key stakeholders to minimise the negative impact their operations have on the environment. The data centre industry is no exception. Data centres consume a vast amount of electricity – roughly 3% of all electricity generated on the planet. This accounts for approximately 2% of global greenhouse gas emissions, which is on a par with the aviation sector. This is only set to increase as the world embraces the Fourth Industrial Revolution and the Internet of Things, with an increasing number of objects connected to the internet and relying on data centres. IDC predicts that global data usage will increase from 33 zettabytes in 2018 to 175 zettabytes by 2025 – an increase of 430%. Furthermore, the more powerful the data centre, the more cooling it requires. These cooling systems also consume a significant amount of energy, which has led some operators to move data centres to colder climates or find less energy-intensive cooling techniques. Microsoft has even explored submerging a data centre underwater.
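The IDC growth figure quoted above is easy to verify; a short Python snippet, using only the two data points in the text, reproduces the 430% increase and gives the implied compound annual growth rate.

```python
start_zb, end_zb = 33, 175          # zettabytes, 2018 and 2025 (per IDC, as quoted)
years = 2025 - 2018

increase_pct = (end_zb - start_zb) / start_zb * 100
cagr = (end_zb / start_zb) ** (1 / years) - 1

print(f"Total increase: {increase_pct:.0f}%")   # ~430%
print(f"Implied CAGR: {cagr:.1%}")              # roughly 27% per year
```

In other words, data volumes growing by more than a quarter every year, which is why the energy question cannot be deferred.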


As demand on data centres increases, it is clear the industry needs to design facilities for maximum energy efficiency and minimum environmental impact. The operation of a 'green' data centre must also consider the need for its IT technology to use less energy than is needed to cool it.

How can AI be harnessed to improve sustainability?

A data centre powered by artificial intelligence (AI) is nothing new. For some time now, data centre operators have been aware of the significant operational benefits of deploying AI in their facilities. AI can allow data centres to operate autonomously by automating the routine tasks involved in the maintenance and monitoring of these centres. A common solution is predictive analytics in the form of machine learning. AI can identify anomalies in processes or equipment that indicate performance issues or a deterioration in an asset's health, well in advance. Sophisticated AI can even identify the probable root cause of the problem and recommend a course of action to best remedy and optimise a given situation. Issues can be identified and corrected quickly and accurately before they have an impact on operations.

AI can also significantly reduce costs by minimising downtime and increasing output. A McKinsey study estimates that predictive maintenance enhanced by AI can reduce overall maintenance costs by as much as 10%, downtime by 20% and inspection costs by 25%. With the price of storage and computing power plummeting, reducing operational and energy costs can turn a break-even data centre into a profitable operation. Indeed, Gartner predicts that data centres that fail to deploy AI effectively will become economically and operationally defunct.

Beyond productivity and profitability

Discussions around the benefits of using AI in data centres often focus on how it can increase productivity and profitability, but neglect to talk about how AI can make a data centre more sustainable.
Using historical data collected from smart sensors, such as data output, temperature and humidity levels, AI can train deep neural networks to optimise the performance of a data centre and make it more energy efficient. Moreover, prognostic AI can forecast future events, such as surges in demand or temperature changes, and adapt the system variables accordingly. This not only prevents the data centre from going beyond its operating constraints, but also ensures it operates as efficiently as possible. Less energy-intensive data centres require less cooling, which in turn reduces total energy usage.

AI is already being deployed by some of the biggest players in the industry. Google used DeepMind machine learning in its data centres to directly control the cooling systems, which resulted in a notable 40% reduction in energy consumed. Not only does the implementation of AI constitute a significant cost saving, but more importantly it dramatically reduces harmful emissions and the carbon footprint of companies who rely on data centres. A recent PwC study found that deploying AI across business operations could reduce global greenhouse gas emissions by as much as 4% by 2030 – the equivalent of the combined 2030 annual emissions of Australia, Canada and Japan.

Where should data centres start?

To get maximum value from AI, businesses with deployed data centres should first look at their IT and control infrastructures. If they are collecting data from their control systems and/or energy management systems, then they are excellent candidates to benefit from AI and reduce overall energy consumption. And as they add additional sensors to their infrastructures, AI can provide increased value and sophistication to achieve an even higher level of efficiency.

Often, some of the first benefits gained from the implementation of AI in data centres include detecting equipment that wastes electricity. AI can quickly identify underperforming assets, as well as those with maintenance issues, both of which result in the consumption of excessive power. Across a large data centre, the wasted electricity from these types of assets can quickly add up to a significant cost and a significant impact on the environment. This is fundamental, and a recommended first step in AI implementation.
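The kind of anomaly flagging described above can be illustrated with a minimal sketch: compare each new power reading against a rolling baseline of recent readings and flag large deviations. This is a toy z-score check with invented thresholds and data, not a description of AVEVA's actual models.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=24, z_threshold=3.0):
    """Return indices of readings more than z_threshold standard deviations
    from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Hypothetical hourly power draw (kW) for one rack, with a fault at hour 30
power = [4.0 + 0.05 * (i % 3) for i in range(30)] + [6.5]
print(flag_anomalies(power))  # the jump at index 30 is flagged
```

In production systems, machine-learning models replace the fixed threshold, but the principle is the same: learn an asset's normal envelope and flag departures from it before they become failures or wasted energy.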

IDC predicts that global data usage will increase from 33 zettabytes in 2018 to 175 zettabytes by 2025 – an increase of 430%

Cooling is absolutely essential for data centres, and temperature hot spots can occur when least expected. As these situations worsen, computer equipment can start to fail, strange anomalies appear, and energy is wasted. Further, as equipment heats up, it can become less efficient, requiring more energy to run. This can result in an efficiency degradation spiral. Since hot spots often get worse slowly over time, they can also go undetected for quite some time, resulting in potentially serious issues. Automated monitoring with AI analytics is a best-practice method for early detection of hot spots, resulting in increased operational efficiency and overall energy savings.

Enhanced AI capabilities are continually developing and evolving to minimise energy consumption, minimise downtime and maximise efficiency. Some of the newer areas where AI is helping involve balancing load across servers, as well as within a given server across multiple CPUs, in order to minimise overall heat generation and, thus, power consumption. Power distribution at a data centre can also be optimised through AI. Detecting associated anomalies through aberrations in multi-variate patterns, along with overall situational awareness related to power delivery, is an area that is continuing to evolve. Additionally, sensors are becoming more prolific, providing AI with additional 'raw material' to perform further analysis and provide more sophisticated insight, so that data centres can continue to expand while using less energy.

There is no doubt that sustainability has become a top priority for business executives, investors and governments alike. Data centres are, and will continue to be, an integral part of the data-driven economy we live in; therefore, finding ways to reduce their contribution to carbon emissions is critical.
This is where AI can play a major role. By optimising operations and increasing energy efficiency, AI can ensure that data centres become more sustainable as the world continues its ascent towards a greener global economy.
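Because hot spots often worsen slowly over time, a single instantaneous temperature threshold can miss them; comparing a short recent average against a longer historical baseline catches the drift instead. The sketch below illustrates that idea only, with invented window sizes, drift rate and temperatures.

```python
def creeping_hotspot(temps, recent=6, baseline=48, delta_c=2.0):
    """True if the mean of the last `recent` readings exceeds the mean of
    the `baseline` readings before them by more than delta_c degrees C."""
    if len(temps) < recent + baseline:
        return False
    recent_mean = sum(temps[-recent:]) / recent
    base_mean = sum(temps[-(recent + baseline):-recent]) / baseline
    return recent_mean - base_mean > delta_c

# A rack inlet sensor drifting upwards by 0.1C per hourly reading,
# versus one holding steady at 24C
drifting = [24.0 + 0.1 * i for i in range(60)]
steady = [24.0] * 60
print(creeping_hotspot(drifting), creeping_hotspot(steady))
```

The drifting sensor trips the check while the steady one does not, even though the drifting sensor may never have crossed a fixed alarm temperature; this is the advantage of trend-aware monitoring over static alerts.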



Warming up to liquid cooling




Chris Wellfair, projects director at Secure I.T. Environments, outlines how the phase-out of R407a gas, with its future supply and rising cost issues, may mean it is the right time to switch to a liquid cooling system rather than continuing with the replacement R410a gas.


As any data centre manager knows, more and more is being expected of their installations, whether that is supporting voice over IP communications, CCTV and remote working access for staff, or simply the huge amount of storage capacity that companies need to feed the never-ending appetite of users and applications. Once you start adding in all the essentials, such as redundancy, mirroring, back-up, UPS, fire protection and environmental controls, things get pretty packed in there. Expect even more to be crammed into the data centre over the coming years, not just in the areas outlined above, but in new areas such as the Internet of Things.

A UK survey we conducted earlier this year amongst IT professionals found that companies are looking to invest in new technologies to support their businesses across 2020 and 2021. Improving data centre connectivity to benefit cloud services and high-performance computing were joint top at 27%, followed by incorporating 5G technology and artificial intelligence at 24%. Better performance and data crunching are seen as key to competitive advantage and, as we all know, that means more processing power.

Deciding how best to accommodate these needs can throw up some questions, depending on the level of utilisation in your data centre. Taking the decision to invest in a whole new data centre can be prohibitive on a number of grounds, including real estate and cost, so many organisations are looking to increase the density of servers within existing spaces. Depending on how well stocked your racks already are, this can create problems.
One of the biggest challenges with this approach is cooling, but changes in regulations around gases may create an opportunity for organisations to reconsider their approach to cooling. Doing so will not only ensure they conform to European Union (EU) regulations, but let them make their contribution to reducing the impact of climate change and achieve more densely packed racks, whilst keeping everything within optimal environmental specs.

Changes in the F-Gas market

We are all very aware of the changes taking place in the refrigerant market as the industry seeks to make its contribution to reducing the use of fluorinated greenhouse gases (F-Gases). The EU quota system is being enforced by the UK Environment Agency, and means that only companies with an EU quota can supply the gases.
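The phase-down is driven by each gas's global warming potential (GWP), i.e. how much more warming a kilogram of it causes than a kilogram of CO2. A rough illustration (the GWP figures are indicative IPCC AR4 values as used in the EU F-Gas quota system; the charge sizes are hypothetical examples):

```python
# Indicative 100-year GWP values (IPCC AR4); charge sizes below are
# hypothetical examples, not figures from this article.
GWP = {"R22": 1810, "R407C": 1774, "R410A": 2088, "R32": 675}

def co2_equivalent_tonnes(refrigerant, charge_kg):
    """CO2-equivalent of a refrigerant charge: charge (kg) x GWP / 1000."""
    return charge_kg * GWP[refrigerant] / 1000.0

# A 10 kg charge of R410A versus the same charge of R32:
print(co2_equivalent_tonnes("R410A", 10))  # 20.88 tonnes CO2e
print(co2_equivalent_tonnes("R32", 10))    # 6.75 tonnes CO2e
```

On these figures, moving from R410A to a lower-GWP gas such as R32 cuts the CO2-equivalent of each kilogram of charge by roughly two-thirds, which is why the quota system steers the market through successive refrigerants.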

R22 refrigerant has been in the process of being phased out for a long time and, as of January 1 2020, became illegal to both manufacture and import. Many had already switched to R407c before this date, and it too has now been replaced by R410a, a high-pressure gas, meaning less of it is required in systems. With each of these steps, the preceding gas increases in price and becomes harder to find.

R410a has been used for many years and, though its production is being reduced, it is still found in many larger systems today: they are not ready for the next gas, R32, as the compressor technology for those systems is not quite available. It is, however, already making its way into smaller systems such as UPS room wall units.

Ultimately, between 2015 and 2030, the CO2 equivalent of the HFC refrigerants in use will have been cut by 80%, which will be a fantastic achievement in terms of the environmental impact of cooling. The challenge is that the current reduction in R410a does not have a replacement for all applications. This has led some manufacturers to find other ways to reduce the amount of gas they use, for example creating office systems that continue to use gas, but also use water in fan coils where gas would have been used in the past.

The liquid cooling option

The need to get more from our data centres, and the regulatory changes taking place, leave companies wanting to upgrade their cooling systems with an interesting choice. Do you continue doing everything needed to maintain the gas-based cooling systems you already have, or do you plan for a transition that will help increase the density of your data centre and meet future demands?

Liquid cooling does not need to revolve around some ‘secret sauce’ ingredient. Water is 3,400 times more efficient than air at removing heat, and rack-level extraction can be very straightforward.
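That headline figure follows directly from basic material properties. A back-of-envelope check, comparing the volumetric heat capacity of water and air (property values are approximate, at around 20°C):

```python
# Volumetric heat capacity = specific heat J/(kg.K) x density kg/m^3.
# Approximate values at ~20 C and atmospheric pressure.
water = 4186 * 997    # ~4.17 MJ per cubic metre per kelvin
air = 1005 * 1.204    # ~1.21 kJ per cubic metre per kelvin

ratio = water / air
print(round(ratio))   # roughly 3,400-3,500: per unit volume, water absorbs
                      # thousands of times more heat than air for the same
                      # temperature rise
```

In practice the advantage depends on flow rates and heat-exchanger design, but the orders of magnitude explain why a thin water loop can replace large volumes of moving air.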
Another benefit of liquid cooling is that it can be implemented as you expand the data centre; there is no need to wholesale remove existing gas-based systems, which would be a huge capital expense.

There are two principal methods of cooling. The first is direct-to-chip cooling, often used with CPU and GPU units. As the name suggests, it draws heat directly away from its most intense source and is very efficient. It allows servers to be much more densely packed, as the heat is carried away in the liquid before it can escape into the air, build up and impact surrounding components.

The second method is immersion cooling, which can be used with a range of devices such as solid-state drives, and involves placing devices in a liquid dielectric bath. That may sound like madness, but dielectrics are used as insulators in a range of high-voltage applications, including sub-station switchgear and transformers. Working with immersion cooling requires a very different approach to maintenance, and some devices are not suitable without reconfiguration, but again it is nothing that cannot be overcome.

The cool reality

Undeniably, cooling technology is moving in a direction that will create huge opportunities for improved data centre density, and a much greener industry and wider world. Liquid cooling will become commonplace in the data centre, and the choice for data centre operators is when to start planning their transition away from F-Gas. It might take 10 years but, as we all know, data centres are long-term investments, and that 10 will soon become five.



Top questions for exploring colocation in 2020

Since the corona chaos, organisations across the world have reaped the benefits of colocation for mission-critical equipment in data centres. Here, Pat Morley, VP of Global Product Management at Sungard AS, highlights some key colo questions.


By using colocation, customers benefit from outsourcing many of the most challenging – and costly – aspects of running a data centre. Most organisations do not specialise in building data centres and IT infrastructure themselves. Quite the contrary: tech businesses, for instance, want to focus on innovation and growth, and government organisations seek to serve the public in the best way they can. Assuming the burden of establishing and maintaining a data centre falls squarely outside the core competencies of such organisations, which is where colocation providers can help.


With many companies now looking to explore colocation as a result of the Covid-19 pandemic – either in preparation for future events of this nature, or after falling foul of issues with their current setup – every organisation should ask itself the following questions.

What criteria are most important when selecting a high-quality colocation facility?

There are multiple factors to consider when assessing colocation services, including the quality of the facility, reliability, security, physical location, connectivity, the provider’s support personnel and the services offered. An important question for some under the current circumstances should be: does the provider staff the facility and offer remote hands 24/7? This one factor can make a major difference to both the quality of services provided and physical security. Companies should not overlook the audit processes and certifications of providers either. For example, have internal controls been independently audited and certified?

Obviously, location will be a key consideration too, not only for ease of access, but also for disaster avoidance, geographic diversity and reduction of network latency. An understanding of the processes put in place to combat scenarios such as Covid-19 will also be important. These should include what precautions and access a company’s own staff are given in such a scenario, versus the provider’s staff. Leave no stone unturned here to ensure expectations match this new reality.

What aspects of the colocation decision are commonly overlooked?

Customers need to resist the temptation to compromise on provider redundancy, connectivity or the quality of the facility in exchange for lower pricing. The facility itself is the foundation for how applications and data will be housed and managed, and organisations need a clear understanding of a provider’s strategies and internal practices in order to safeguard corporate IT assets.

Businesses must also understand their own internal environment. A thorough understanding of workloads, and knowing what is critical, helps define the level of redundancy needed; informed decisions are needed about what is best placed where. The decision to outsource a data centre may be a long-term strategy, a mid-term strategy to migrate applications to a hosted private or public cloud platform, or a tactical response to a compelling event. In any case, choose a colocation provider with expertise in managing difficult and complex transitions. For customers planning to migrate some or all of their environment to a cloud platform, it is important to find a provider that offers flexible contract terms supporting a migration strategy without locking the company into a long-term commitment. Agility and choice are key.

What are the determining factors in deciding what to put in a colocation facility vs an internal, company-owned data centre – or the cloud?

Organisations face a myriad of factors in making such a decision.
As mentioned, among the most important is the criticality of the application: how central is it to the function of the business? Is it mission-critical? If so, consider putting such applications into colocation. A third party is likely to have a higher level of resilience and redundancy built in, ensuring enhanced availability and uptime for an organisation’s most important applications. Moreover, given the greater capacity that colocation providers can offer from their global networks of facilities, businesses can quickly scale as needs dictate, and take advantage of lower energy costs thanks to economies of scale.

Another factor to consider is equipment use over its lifespan: how often will equipment for recovery actually be used? In recovery use cases, colocation – or the cloud – could be a better option, keeping costs more in line with the expected level of use.

Finally, testing and development increasingly takes place in the cloud, but the data for this is often generated from a customer’s production environment. Interconnectivity between colocation data and the public cloud is therefore an increasingly important factor in these decisions. Do not overlook it.

What sort of security checks are important?

Colocation facilities typically consist of hardened, resilient structures with well-tested security protocols and procedures. Providers ensure

they satisfy the latest security certifications, while embracing the most advanced technology, such as video surveillance and multi-level access systems. So, what sort of security checks are the most important to note? At a base level, it is the physical security of facilities. Are they staffed internally, or are contractors used? Where are the security cameras? How is data retained? How are access rights granted into the facility and to a potential customer’s specific environment?

Security considerations also need to extend to the logical layer, to ensure applications and data are protected from external threats. Can private circuits be provisioned that connect to external resources? This would allow organisations to bypass the risks associated with the public internet.
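One way to work through the questions above consistently across candidate providers is a simple weighted scorecard. A sketch, with entirely hypothetical criteria weights and ratings (adapt both to your own priorities):

```python
# Hypothetical selection criteria and weights (must sum to 1.0);
# these are illustrative, not a recommendation.
WEIGHTS = {
    "facility_quality": 0.20, "reliability": 0.20, "security": 0.20,
    "location": 0.15, "connectivity": 0.15, "support_247": 0.10,
}

def score(ratings):
    """Weighted sum of 1-5 ratings against the criteria above."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

provider_a = {"facility_quality": 5, "reliability": 4, "security": 5,
              "location": 3, "connectivity": 4, "support_247": 5}
provider_b = {"facility_quality": 3, "reliability": 3, "security": 4,
              "location": 5, "connectivity": 5, "support_247": 2}

print(score(provider_a), score(provider_b))  # higher score wins
```

The value of the exercise is less in the arithmetic than in forcing the team to agree, up front, on how much each criterion actually matters.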

What about disaster recovery considerations? What has the response to Covid-19 been?

By doing homework on disaster recovery readiness, companies will statistically reduce the probability of a physical outage or other disruption. Colocation providers experienced in providing both production and secondary sites will ensure these are mapped to a company’s complete needs. One disaster recovery best practice that is sometimes overlooked is testing: companies should commit to regular testing to be sure they can quickly recover critical applications and associated processes.

Considering Covid-19, data centre workers have been defined by both the UK and Irish governments as essential workers. Best practice should be that all deliveries and visitors to the site, including staff, are handled by isolated security staff following specific guidelines. Employee health must remain the priority, so implementing split-workforce practices will also reduce the risk of contact amongst employees. Be sure to ask a provider where they stand on all these factors.

What companies should consider for 2020 and beyond

With significant benefits of scale available, organisations should examine the full range of factors in their decision-making process, from reliable facilities that offer uptime, conditioned environments and physical security, to carrier neutrality and diversity and high-quality networks. There are many benefits to colocation, but among the greatest are more effective use of capital; higher availability for mission-critical applications through power and cooling redundancy; and scalability for growth. Ensure your cloud journey is well supported, whether through flexible commercial agreements, migration support or managed public and private cloud services.
Be sure to find something that accommodates colocation needs beyond the original requirement to guarantee success today, and tomorrow.



Colo: pay as you grow



Louis McGarry, sales and marketing director at Centiel UK Ltd, explores the benefits of a pay-as-you-grow approach to colocation.


For colocation (colo) data centres hosting cloud and network services for a variety of clients, the goal is to provide a safe and secure environment which protects the critical load in the most cost-effective way possible. It is necessary to supply the correct resilience level, in some cases meeting the requirements of a Tier two, three or four data centre, and to have the right level of security. Essentially, the way the colo is designed affects the cost of space for clients. There are fixed costs, such as the building, in addition to variable costs, which include the equipment. To reduce variable overheads, the data centre can be decentralised into data halls, so individual areas can be isolated, and cooling and Power Usage Effectiveness (PUE) managed.

In the past, colos have tended to install large stand-alone monolithic UPS blocks from day one. However, we live in changing times. The challenge with a large stand-alone UPS of, say, 500kW is that it is oversized unless the data centre is at capacity. Oversized systems cost more to buy, more to run and more to maintain. Data centre managers have rightly started to ask: why install 1MW of UPS from day one when you haven’t sold the space?

An alternative approach is to install a modular UPS, where you have the ability to install the fully rated frame or an empty carcass. This gives you the option to add the required number of modules to suit the actual load

and further UPS modules can be added only when needed. All the individual modules are effectively a UPS in their own right, each containing a rectifier, inverter and static switch, and all operating online in parallel with each other. This offers much more flexibility. In this way, a ‘modular approach’ helps minimise the need for a large initial investment in a stand-alone solution: colos can literally pay as they grow. It means colocation data centres can scale as they sell the space, and as more clients are on-boarded, the cost of additional UPS modules is offset.

As well as significant cost savings, adopting a modular approach to UPS installation offers the ultimate in system flexibility. Modules can be re-deployed or moved around the facility as needed. If one client moves on and another arrives, and the module ratings are standardised (say, for example, all 50kW), then modules can simply be re-used elsewhere.
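The right-sizing arithmetic behind pay as you grow is straightforward. A sketch of module counts as the sold load grows (the 50kW module rating and N+1 redundancy level are used here purely as examples):

```python
import math

def modules_needed(load_kw, module_kw=50, redundancy=1):
    """Modules required to carry the load, plus N+x redundant modules."""
    return math.ceil(load_kw / module_kw) + redundancy

# The frame can be sized for the full capacity on day one, but modules
# are only purchased as the load (i.e. the sold space) grows:
for load in (120, 250, 480):
    print(load, "kW ->", modules_needed(load), "x 50 kW modules (N+1)")
```

At 120kW of sold load only four modules are installed instead of a fully populated 500kW block, and capital outlay tracks revenue rather than running ahead of it.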

Maintenance is also easier with a modular UPS solution. Compare this with a fault on a stand-alone UPS, which needs to be switched off to fix on-site: the call-out time for an engineer is at best usually four hours, and it could then take them six to eight hours to fix the system. The paralleled UPS (N+1) will support the load in the meantime, but this is a risk, as there is no back-up during this time. By contrast, if there is a problem with a modular solution, modules can be swapped in moments and repairs completed later off-site. There is no need to revert to the external maintenance bypass and raw mains; the UPS can remain live while the module is changed over, as the other modules support the load. We have also provided first-level response training to some of our clients, so if they hold a spare module on-site, they can easily ‘hot swap’ if necessary. This means modular UPS systems offer much greater levels of availability than stand-alone systems.

It was the Greek philosopher Heraclitus who once said, “The only thing that is constant is change”. If nothing else, recent events have shown that no-one knows what the future holds. However, we do know that the need to store data will continue and will increase. The rise in edge data centres may see the role of cloud facilities change, or we may find the introduction of 5G results in a further acceleration in the accumulation of data, to the extent that colos need to change the way they work in order to keep up with demand.

The deployment of modular UPS technology will give colos a flexible, agile way to accommodate changing demand by constantly right-sizing systems. Minimising the costs connected with UPS purchase, ongoing running and maintenance can all contribute towards maximising returns.
By working closely with design teams, we can help colos re-think, re-use, and re-manage infrastructure to reduce waste and overall expenditure in this way. For many colos, adopting a pay as you grow approach may offer the competitive advantage needed to thrive in our ever-changing world.



Cache me if you can

With our online consumption at an all-time high, Lars Larsson, CEO at Varnish Software, explores the environmental impact of our actions and how we can look to utilise the web in a more sustainable way.



Our lockdown lives have had a clear environmental impact on the physical world around us. Social media is awash with pictures of clearer waters and blue skies as mother nature gets some respite from the hustle and bustle of human life. Sustainability has rightfully shot up the corporate, political and social agenda over the last few years, with no small thanks to my fellow Swede, Greta Thunberg. There have been numerous calls for more government action, but also for greater individual education about the effect of our daily decisions and the cumulative consequences they can have for our planet. Air travel, driving and single-use plastics have all taken their turn under the energy-efficient spotlight. But, with the world effectively on pause, perhaps now is a good opportunity to educate people about the environmental impact of the web as well.

It is well reported that internet usage is booming during the current global lockdown, as we all turn to video technology to replace our schools, offices, gyms and pubs. However, it is easy to forget


that our internet society is enabled by physical infrastructure, and very few are aware that every social media scroll, ecommerce click or video social session also contributes to climate change and the generation of CO2.

Clearing up ‘dirty streaming’

You may have seen that the BBC recently tried to address this, releasing a programme earlier this year entitled Dirty Streaming: The Internet’s Big Secret. The idea behind the programme was to bring more attention to the environmental impact of the internet and, in particular, to highlight how the surging popularity of streaming services, the steady rollout of 5G networks and the growth of cryptocurrencies were causing huge rises in data use which harm the planet.

While it is great that a public service broadcaster like the BBC wanted to bring more attention to this issue, there were a number of inaccuracies in the show which did more harm than good in terms of educating the wider public about our online consumption habits. Our increasing internet usage, including streaming services, is absolutely one of the drivers of data centre growth, and data centres are large consumers of power. However, the programme made multiple misleading claims about how data centres depend on fossil fuels, when in fact the industry is world-leading in its adoption of renewable energy.

It also greatly exaggerated the power required to stream popular content, such as Justin Bieber’s 2017 song Despacito, repeating an oft-cited false statistic that five billion YouTube streams of the song consumed as much electricity as five African countries in a single year. While streaming video does require a noteworthy amount of power, the true impact of Bieber’s number one hit was about 200 times less than stated.

The documentary also claimed “5G will stimulate an explosion in energy demand”. 5G will enable a host of exciting use cases, the ‘Internet of Things’ and even faster content delivery than we have today, all of which requires more infrastructure and the building out of ‘the edge’ of the network in hyperlocal data centres. But this will not cause an ‘explosion’ in demand. 5G is significantly more energy efficient than 4G, which was in turn more efficient than the 3G networks that preceded it. There is also the potential for 5G to enable macroeconomic efficiencies through smart city technology, reducing traffic congestion for example.

It’s a shame the BBC took this approach instead of educating both businesses and their consumers about the small steps we can take to reduce the environmental impact of the web, beyond just using it less often. That isn’t to mention the rather hypocritical choice of broadcasting the show on BBC Three, the online-only platform, which somewhat watered down the point.

Emissions critical

The documentary did make some good points, though, and was right to shed light on an environmental issue that often gets lost behind the more obviously visible contributions from aviation and FMCGs. So, what can be done? Firstly, there needs to be greater education of the general public about the impact of the web. A recent survey of people in the Nordics by Kantar showed 66% would support the eco-labelling of digital services, in a similar way to how other products are labelled.

In Sweden, we’ve invented the word ‘flygskam’ to convey the environmental guilt felt when flying. Perhaps we need an equivalent for our online behaviour if we’re going to take the issue as seriously. This would be a start in giving consumers a more informed choice over which brands they want to spend time and money on and, hopefully, cause brands to alter their environmental focus as a result.

Businesses that do choose to take a more detailed look at their digital environmental impact should start with their website. Many businesses operate extensive websites, loaded with a wealth of information and rich media content. This large web footprint relies on a bank of servers that use considerable amounts of energy and have associated carbon emissions. Our dependency on the web will only increase, and therefore so will those emissions.

One of the fastest and most efficient ways to reduce your online CO2 footprint is caching software. Through caching, digital content doesn’t have to be recalled or reproduced every time a visitor to a website asks for it. Caching is essentially a Xerox machine for online content, producing hundreds of thousands of copies per second from a single server. One major advantage of deploying this technology is a reduction in the amount of computing power demanded of servers. This, in turn, cuts the total number of servers required, resulting in lower energy expenditure and, crucially, lower CO2 emissions. That is good news for the environment, but also for the website user, who will be met with faster response times. Unlike re-engineering a new low-emission aircraft, rethinking the approach to website content delivery is a low-cost, straightforward process which has immediate impact.
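The principle is easy to demonstrate. Varnish itself is configured in its own language (VCL), but the core idea, serving a stored copy until it expires rather than regenerating the content on every request, can be sketched in a few lines of Python (all names here are illustrative):

```python
import time

class TTLCache:
    """Minimal time-to-live cache: serve stored copies until they expire,
    so the origin server only does the expensive work once per TTL window."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}          # url -> (expires_at, content)
        self.hits = self.misses = 0

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry and entry[0] > time.monotonic():
            self.hits += 1
            return entry[1]      # cache hit: no origin work, no extra compute
        self.misses += 1
        content = fetch_from_origin(url)
        self.store[url] = (time.monotonic() + self.ttl, content)
        return content

# 1,000 requests for the same page hit the origin only once:
cache = TTLCache(ttl_seconds=60)
origin_calls = []
for _ in range(1000):
    cache.get("/home", lambda url: origin_calls.append(url) or "<html>...</html>")
print(cache.hits, len(origin_calls))  # 999 cache hits, 1 origin fetch
```

Every cache hit is server work, and energy, that never has to happen; multiplied across a busy site, this is where the CO2 saving comes from.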
Caching software can be easily incorporated into content delivery networks and can cache virtually all types of content, including from the world’s busiest websites and streaming platforms. Any online business in today’s digital age needs to think about how its website or online platform can operate effectively for customers in a way that doesn’t consume so much energy or compute power. We’ve seen websites for large companies produce up to 500 times more CO2 than their global competitors, and once consumers become more aware of these issues, this will undoubtedly factor into their purchasing decisions.

As we adapt to an increasingly digital society, demanding the instant streaming of content and bite-sized news on social platforms, our usage of the web will come into greater focus. It’s important, then, that the web’s impact on the environment follows closely behind.



Going full circle

When it comes to tackling data centre e-waste, Ali Fenn, president at ITRenew, explores how a circular approach can maximise the economic life of hardware, reduce costs and minimise the environmental impact of the data centre.

Data centre growth has been explosive over recent years, and so has data centre waste. Carbon dioxide (CO2) emissions from the digital environment in Organisation for Economic Cooperation and Development (OECD) countries have risen 450 million tonnes since 2013, according to The Shift Project. Since 2000, e-waste has grown from 20 to 50 million tonnes per year, and is on pace toward a staggering 120 million tonnes annually by 2050, according to a 2019 report from the Platform for Accelerating the Circular Economy (PACE) and the UN E-Waste Coalition. If these trends continue, the Semiconductor Industry Association projects computer power consumption will exceed global energy production by 2040.

It’s time to adopt a new way of thinking about the economic and environmental value of data centre equipment and how that hardware is managed. ITRenew has developed a Circular Data Centre business model, based on detailed research, to maximise the economic life of hardware in the data centre environment. Adoption by the global IT hardware industry could lower total cost of ownership (TCO) by 24% or more, and decrease the greenhouse gas impact of the data centre industry by 24% or more.

Moving to a Circular Data Centre model isn’t an abstract future concept. It’s happening right now. The technology, engineering expertise and business models are all in place, and the transformation of the data centre economy is occurring even as you read this.


Data growth

The increasing number of data centres worldwide has been driven by substantial growth in the data handled: from 33ZB in 2018 to an expected 175ZB by 2025. On a physical level, this expanding infrastructure manifests as data centres of all sizes, from modular containers to ‘hyperscale’ facilities (centres with the ability to scale rapidly to meet increases in demand) that enable flow, storage and availability for all users. Due to this growth, the data centre market is projected to be worth over $520 billion in 2023, more than double its current size. There are currently over 75 million servers in use in large data centres, but 46 million of those will reach the end of their working lives and need to be replaced within the next three years. While great progress has been made in improving energy efficiency, there is still a lot of work to do to establish the wide-scale reuse of retiring data centre equipment.

The data centre industry and the circular approach

The lifecycle of data centre hardware can be divided into pre-use, use and post-use phases, following common life cycle analysis (LCA) methodology. The pre-use phase includes raw material sourcing, manufacturing and deployment of products; emissions from processes and materials in this phase are called embodied emissions. Emissions from the operational use phase include all those that occur while running the equipment in the data centre. The post-use phase covers all recycling and end-of-life (EoL) processes, which also generate emissions (not counted as embodied emissions).

While the use phase is important, there needs to be much greater focus on the environmental burden caused by producing data centre equipment during the pre-use phase. Quantifying the carbon footprint of these wasteful practices is time-consuming and expensive, due to a lack of extensive data sets as well as complex and very secretive supply chain practices. To overcome these barriers and help the industry run better analyses, we applied academic methods to collect, evaluate and use data for our model.

The purpose of moving away from the linear ‘take, make and dispose’ economy is to keep products at their highest value for as long as possible, a core principle of the circular economy. For nearly two decades, ITRenew has worked closely with hyperscalers, enterprises and service providers. Through these relationships, and direct access to hyperscale technology, ITRenew has proven that circularity can be applied effectively to this sector. Our approach not only delivers financial and sustainability returns; it opens up $50 billion annually in new financial opportunities for the largest market players, while democratising access to premium technology and making it affordable for the broader ecosystem.

Hyperscale operators run short, aggressive server refresh cycles for good reason, continuously pushing the bounds of density and efficiency required for their scale and growth trajectories. These cycles, however, are unnaturally fast relative to the actual life of the technology, which creates a compelling need to maximise the financial and sustainability value of those assets. Exploring the sustainable reuse of data centre equipment led ITRenew to develop a model that makes high-performing technology available to a broader market, extending its lifetime value. The innovation is in modelling and enabling aggregate lifetime value, via the creation of multiple, cascading loops of life. After its primary use by a hyperscale operator, many data centre operators can benefit from


Figure 1: The concept of a circular economy, according to the Ellen MacArthur Foundation

reengineering that equipment for secondary and tertiary applications. It’s a win-win, enabling extraction of value for reinvestment upstream, and premium technology at dramatically lower total cost of ownership (TCO) downstream.

The impact of data centres on global warming
The main driver of emissions in the use phase is the electricity consumed by the servers. The carbon intensity of these electricity (grid) mixes depends on geography and the forms of energy used to generate electricity (such as renewable sources). Data centres that run on 100% renewable energy – or employ comparable forms of low-carbon electricity provision – will experience further environmental benefits through the circular economy model. In ideal conditions, with green grids or a high usage of renewable energies, embodied energy makes up the majority of consumed lifecycle energy. To measure the circular model’s effects on the overall data centre ecosystem, we compared it against a business as usual (BAU) scenario, assuming that servers are initially employed for three years in a hyperscale facility, whereas the average smaller data centre employs a server for nine years in its facility. We worked out that, in a circular economy (CE) environment, 2.18 servers would be needed to handle the same workload as 4.18 servers in a BAU set-up, representing a potential saving in GHG emissions (in CO2e equivalents) of 24%. This creates additional benefits beyond reductions in CO2 emissions; for instance, we found that TCO savings between BAU and CE could be as high as 31%, even accounting for the Moore’s Law compute efficiencies of newer hardware (such TCO savings would not be experienced by a single data centre operator, but represent system-level aggregates – it’s possible that a single data centre operator might see even greater reductions).

What does it take to enable this opportunity?
While large in scope and potential, these aren’t high-concept ideas or some Silicon Valley science project. The operational application of ITRenew’s efforts is ongoing and far reaching, and the impact for data centre owners, enterprises and the broader IT ecosystem is significant and material.
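The arithmetic behind these headline figures can be sanity-checked with a quick back-of-envelope calculation. The sketch below is ours, not ITRenew’s published model: it assumes the quoted 24% saving applies to total lifecycle emissions, while only the embodied (manufacturing) share scales with the number of servers built.

```python
# Back-of-envelope check of the figures quoted above (illustrative only).
servers_bau = 4.18   # servers needed for the workload, business as usual
servers_ce = 2.18    # servers needed in the circular-economy scenario

# Reduction in servers manufactured:
mfg_reduction = 1 - servers_ce / servers_bau           # ~48%

# If only embodied emissions scale with manufacturing, a 24% total GHG
# saving implies embodied emissions are roughly half of the lifecycle total:
total_saving = 0.24
embodied_share = total_saving / mfg_reduction          # ~50%

print(f"manufacturing reduction: {mfg_reduction:.1%}")
print(f"implied embodied share of lifecycle emissions: {embodied_share:.1%}")
```

Read this way, the figures are internally consistent with the article’s point that embodied emissions can account for around half of a server’s lifecycle footprint on green grids.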

In a circular economy environment, 2.18 servers would be needed to handle the same workload as 4.18 servers in a business-as-usual set-up

Open hardware systems are in place, engineering expertise is widely available, and go-to-market connections extend from upstream to downstream stakeholders. The time is now to create an ecosystem-wide, global circular data centre economy that will enable us all to realise not only significant environmental benefits but also tangible financial gains. The adoption of a circular data centre economy is both our opportunity and our imperative; please join us in establishing this as the new data centre standard.

June 2020 www.datacentrereview.com 25


Back to basics Focusing on the dull to power the exciting, Jonathan Bridges, chief innovation officer at Exponential-e, takes a closer look at the core, the cloud and the edge and explains why going back to basics is key to future-proofing our nation.


An organisation’s capabilities and services sit in data centres, in the cloud, and at the edge. Working together, this ecosystem forms an engine that powers every move a business makes; each part is crucial and plays an essential role in processes and innovation. Typically, however, when we think of exciting and emerging technologies, it isn’t the data centre or the cloud that we focus on. Our attention – and, increasingly, our investments – are dedicated to operations at the edge. In short, edge computing describes the geographic distribution of computing: it is carried out at or close to the source of the data, instead of in the data centre-fuelled cloud. Rather than negating the cloud, edge computing entails the cloud coming to the user. Artificial intelligence (AI) and Internet of Things (IoT) technologies are good examples of this. We are rightly fascinated by everything from autonomous cars and digital assistants to health trackers and smart homes – technologies that, throughout the workplace and the home, have become ubiquitous in everyday life, either in reality or in discourse.



Focus on the core to keep the lights on
However, this narrow focus may come at the expense of less exciting but wholly vital technologies. By concentrating on operations at the edge, people risk overlooking core technologies such as cloud and network infrastructure, which are actually powering the business and keeping the lights on. In short, the core is critical for business agility and continuity, which means investing in robust servers, storage, software, and networking equipment. It is these key components that are vital to keep pace with rapidly shifting technology. Imagine a marathon runner here: they need a strong core to drive their legs forward for 26 miles. For those who enjoy the thrill of the race, strength and stability training can feel tedious – but it’s essential to avoid injury. Sprinting towards the latest technology without a proper infrastructure to support operations will similarly cause businesses to fatigue, preventing them from achieving the most from a technology.

Connectivity and Covid-19
Core technologies enable a business to perform its responsibilities in a cost-optimised way, delivering goods and services that are essential for ensuring the inflow of cash and business continuity – particularly while the nation is on lockdown thanks to coronavirus. These services can encompass everything from Unified Communications (UC) remote working tools and cybersecurity protection, to cloud storage for corporate files and expense management systems. As the Covid-19 pandemic shakes up businesses around the world, conversations are taking place on how to improve core technologies in order to support employees who are working remotely and prepare businesses to manage the crisis. Moreover, the Government recently reported that cyber criminals are increasing phishing attacks to exploit worries over coronavirus. Evidently, those who neglect the core and focus solely on the edge risk not only undermining operations but making an organisation more vulnerable. This is not to say that businesses should refrain from developing innovations powered by edge technologies. On the contrary, it’s paramount to remember that – as well as everyday business operations – a healthy, strong core is essential for powering innovation at the edge. Consequently, establishing a strong core and being confident in its resilience will enable a business to chase the latest innovations in AI and IoT.

It’s important to highlight that going back to basics doesn’t preclude progress

Don’t get carried away by buzzwords
The core is multi-faceted – it ensures the continual drum beat of everyday business operations, and this strength powers the development of new technologies. As such, the edge is only ever as good as the core. Without a strong foundation, pursuing edge technologies could prove to be damaging. The core shouldn’t be overlooked for the sake of chasing buzzwords and specific activity targets. It’s understandably easy to lose sight of what’s necessary when the pace of change moves so fast. If you’re not constantly looking out for the next big thing in technology, it can feel as though you’re losing your competitive advantage. But trust in the process. Pursuing new developments at a super-speedy pace while the business has a weak core that cannot support ongoing activity is more damaging than briefly stalling innovation progress. Dedicate sufficient resources, time and energy to improving the cloud and strong, sustainable progress will follow.

Dedicate sufficient resources, time and energy to improving the cloud and strong, sustainable progress will follow

One step back for two steps forward
It’s clear that what you do in the core, and how it is maintained, has a knock-on effect on operations in the cloud and at the edge. As such, it’s important to highlight that going back to basics doesn’t preclude progress. In fact, the opposite is true. Building a solid infrastructure in the data centre to guide and manage operations will enable a business to successfully innovate alongside the latest AI and IoT developments. Ultimately, in times of difficulty it is not technology at the edge that we look to for support. Instead, it is the core, the central source of support that keeps operations running smoothly – and the key to future-proofing our nation.



The edge: What’s your game plan? Sim Upadhyayula, senior director of solutions enablement at Supermicro, explains why rolling out to the edge requires a cohesive strategy to ensure success.

The data centre scene has seen a huge shift over the past few years. More businesses want to move compute processes to the edge, where the action is happening, as opposed to carting the data to a central location with mass computing infrastructure. When speaking to UK prospects about edge solutions, it becomes clear that their main priorities are essentially three things: the availability of a total solution (meaning hardware and software), the compute power per dollar, and flexibility in the design. It’s also worth mentioning how many customers are surprised at the compute power available from modern embedded system designs. The primary drivers for these developments are, firstly, the increase in data being generated by the growing number of connected IoT devices; and secondly, the cost-benefit analysis of moving that data, coupled with the high latencies involved, which makes such movements prohibitive from a performance perspective. This shift is likely to accelerate further in a post-pandemic world, where new restrictions and regulations are likely to be imposed on the manufacturing, transportation and service sectors.


Proper planning is essential when selecting and deploying infrastructure both at the edge and the data centre to ensure successful business outcomes

As we move into autumn, the new business reality will force companies to respond with strategies that include increased automation, and the collection and parsing of data, as well as taking corrective actions right at the point of interaction rather than at some faraway centralised location. AI and machine learning technologies will step up to facilitate this, with greater modelling at the core for scale and greater inferencing at the edge for speed. This means that there will be growth in infrastructure requirements at the edge and the core data centre, along with cloud computing. Given the increasing number of requirements, and the lessons learned from the inefficiencies of ‘silo-ed’ environments (particularly for enterprises), it’s our obligation not to repeat the same mistakes. Proper planning is essential when selecting and deploying infrastructure, both at the edge and the data centre, to ensure successful business outcomes – especially since it is very likely that today’s applications and workloads will make the transition between edge, core and cloud multiple times.



Unique challenges require unique solutions
Compared to core data centres, there are some challenges that are unique to the edge and not typically seen anywhere else. These include:
• A lack of the resources needed to provide an adequate, sterile HVAC environment.
• Assuring good, clean power (meaning free from voltage spikes and drops).
• Connectivity issues, or a lack of skilled administrators.
A few vendors have found solutions that address some of these issues. But choosing a technology that only tackles parts of these challenges, without having the scale to extend to the core, leads to the same ‘silo-ed’ problems we’ve seen in the past.

Good thermal and power-efficient designs will help balance meagre resources at the edge while saving operational costs both at the edge and core

One way to approach this problem is to embrace technologies that provide similar benefits across the edge and core data centres. Features like free air cooling, modular building block design, disaggregated computing, and simplified networking are a few examples that could deliver those benefits. Choosing infrastructure designed by organisations obsessed with engineering will provide that extra insurance, helping systems withstand the temperature fluctuations of edge environments. Innovative solutions like specialised coatings on the motherboard will help alleviate the corrosion and intermittent connectivity issues posed by moisture and other harmful contaminants, which can also increase the life of these systems. Good thermal and power-efficient designs will help balance meagre resources at the edge while saving operational costs both at the edge and core – and fulfilling the commitment to a greener environment. Familiarity with one platform (like hyper-converged infrastructure) that is scalable at the edge as well as the core will greatly mitigate issues around deploying and servicing the technology. This cuts down on the time, money and effort needed to train staff on different infrastructure platforms. It also simplifies debugging, while greatly alleviating the inventory challenge of keeping multiple, diverse spares handy.

Choosing the right path is key
Looking at the UK, there’s a strong trend developing towards edge computing. Sectors like telecoms and industrial manufacturing are especially focused on this move. While the core and edge markets are quite different from each other, they are also linked at certain points, which is why it’s crucial to plan ahead. As discussed at the outset, there are multiple forks in the road on the way from the edge to the core to the cloud.

Making the right choices and turns at the beginning can make the journey far more productive and less painful. Formulating one cohesive plan for scaling workloads across edge, core and cloud is critical to success. And choosing the right partner for the infrastructure, one who can deliver on all fronts including the much-needed scale, can be another key to accomplishing the task.



RittalXpress facilitates fast, flexible IT infrastructure upgrades – making it as easy as A, B, C As ever more demands are placed on our IT systems, there is a pressing need to upgrade existing infrastructure, and for these installations to be completed in the shortest possible time so that businesses remain competitive, says Emma Ryde, Rittal product manager for IT and Industrial Enclosures.

Customers are increasingly demanding racks that can accommodate the future requirements of server and network gear as these needs evolve. This places huge pressure on IT, and clearly increases the need to employ flexible, easy-to-



install rack systems that can be used for many different applications. Meanwhile, the process of ordering and configuring racks also needs to be made as easy as possible. To this end, Rittal has created RittalXpress, which offers rapid distribution of a specially selected range of its most popular

TS IT racks and accessories. Rittal’s TS IT data centre solutions give IT users unrivalled structure and support through their flexible design and wide array of accessories. The TS IT racks offer efficient internal space, variable widths and depths, and baying versatility. Their standard parts allow users


We’re making express delivery of the TS IT Rack as simple as ABC. A: Choose your rack, B: Choose your accessories and C: Choose your delivery Emma Ryde, Rittal product manager for IT and Industrial Enclosures

to configure solutions that are tailored to individual data centres and allow for efficient cabling and thermal management (to protect sensitive electrical equipment from rising temperatures as the number of internal components increases). Meanwhile, the RittalXpress delivery service ensures that solutions arrive on-site quickly. “We’re making express delivery of the TS IT Rack as simple as ABC. A: Choose your rack, B: Choose your accessories and C: Choose your

delivery,” comments Emma Ryde, Rittal product manager for IT and Industrial Enclosures. “The RittalXpress service offers a carefully selected range of TS IT network/server rack options, plus accessories. These items are available either for 24-hour despatch for unassembled units, or seven working days for configured units,” adds Emma. Further information can be found at www.rittal.co.uk and www.friedhelm-loh-group.com, or on Twitter @rittal_ltd.



The right fit When it comes to cloud adoption, it’s not just about being smart, it’s about being right. Just because a cloud solution claims to be ‘smart’ doesn’t necessarily mean it’s the right fit for your organisation. Vicky Glynn, product manager at Brightsolid, explains why your cloud environment should be as unique as your business.


The idea that there is one ‘perfect’ cloud destination is a bit like looking for a pot of gold at the end of a rainbow: you think you are heading towards it, but you never get there, because it does not exist. As a tech industry, we are guilty of latching onto the ‘cloud du jour’, which suggests that there’s a particular version we think is the ‘smartest’. We especially love this when it translates into the latest trend: think ‘cloud-first’ or even ‘cloud-only’. But rather than looking for a cloud nirvana that might not exist, it is far more important that organisations recognise that they and their infrastructures are unique. This is especially true when it comes to deploying the cloud; a restrictive approach that’s born out of an attachment to a strategy you once decided was ‘smart’ could mean that you don’t get the right solution for the organisation (and its needs). Alternatively, you might find that your deployment never gets out of the pilot stage, because what was initially considered ‘smart’ is suddenly out of sync with more recent demands. Instead, the correct approach is likely to be found by building and establishing the right foundations for the cloud, to ensure that you can be agile in delivering organisational benefit while meeting security and compliance needs. The success of a new strategy is dependent on its adoption within an organisation. It does not matter how much has been invested in it; if it does not have a natural fit with your organisation’s culture, then it will likely not last. The same applies to cloud strategy. It is important to select a cloud strategy that has ‘buy-in’ from your team. They need to understand why it has been selected and how it will benefit them and the work they do. Once that has been established, it is more likely to be used and adopted. This is why hybrid cloud has resonated with so many organi-


sations; it allows an organisation to say ‘and’ rather than being restricted to ‘either/or’, and takes into account both their current status and the future. Once it has the necessary buy-in, it is important that the foundations upon which the cloud strategy will sit are agreed across the organisation. The cloud has driven technology decisions out of the IT department, as a broader set of internal stakeholders see its benefit and can deliver solutions outside the IT team – from a marketing team building a new website in the cloud, to the finance team moving their processes online to comply with HMRC regulations. However, this has the potential to increase risk within an organisation, as these non-traditional teams, afforded the ability to spin up and manage their own cloud, may have limited experience in implementing the necessary IT security, compliance and access controls. This makes it even more important for these foundations to be solid – from both a security and compliance perspective – to protect the wider organisation in the long run. As a result, no matter whether you follow a public, private, hybrid or multi-cloud approach, providing the wider organisation with a set of guardrails to guarantee a baseline of security and compliance is vital. Whether that is done by the IT department enforcing a set of rules before a cloud is launched by a team outside of their own, or by using an off-the-shelf solution that can take the sting out of the initial set-up and running, organisations must be clear on the must-haves they need to consider, and create a cloud that is right for them, right now.
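The idea of the IT department enforcing a set of rules before a cloud is launched can be made concrete. The sketch below is purely illustrative: the rule set, field names and regions are hypothetical, not drawn from any specific provider or from Brightsolid's practice, but it shows how a baseline of guardrails might be codified as a pre-approval check.

```python
# Illustrative guardrail check: the baseline an IT team might enforce
# before approving a cloud environment requested by another team.
# All rule names and fields below are hypothetical examples.

BASELINE_RULES = {
    "encryption_at_rest": True,                      # data must be encrypted
    "allowed_regions": {"eu-west-1", "eu-west-2"},   # data-residency rule
    "requires_owner_tag": True,                      # someone must own it
}

def check_guardrails(request: dict) -> list[str]:
    """Return a list of guardrail violations (empty list means approved)."""
    violations = []
    if BASELINE_RULES["encryption_at_rest"] and not request.get("encrypted"):
        violations.append("encryption at rest is not enabled")
    if request.get("region") not in BASELINE_RULES["allowed_regions"]:
        violations.append(f"region {request.get('region')!r} is not permitted")
    if BASELINE_RULES["requires_owner_tag"] and not request.get("owner"):
        violations.append("no owner tag set")
    return violations

# A marketing team's request for a new website environment:
print(check_guardrails({"encrypted": True, "region": "us-east-1",
                        "owner": "marketing"}))  # flags the disallowed region
```

Whether enforced as code like this or via an off-the-shelf policy tool, the point is the same: the baseline is agreed once, centrally, and applied to every team's cloud before launch.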

Rather than looking for a cloud nirvana that might not exist, it is far more important that organisations recognise that they and their infrastructures are unique

Should the former approach be taken – bearing in mind the limited experience that some may hold when launching a cloud to support their project – there must be clarity on which individuals have the capability to manage and adapt their cloud while adhering to said rules. For example, it may not make sense for the entire finance team to have access rights. Clarifying access management restrictions from the outset will in turn enhance security within the cloud. But it is by no means the only way to protect the cloud. Guaranteeing data security is more important than ever. Due to the cloud’s elastic nature, workloads can expand and contract on a regular basis. As such, a level of perimeter security is vital to protect the organisation and its workloads. There is a


lot of debate around whether perimeter security is relevant when it comes to the cloud, or whether a zero-trust model might be more suitable. However, when laying the foundation of a cloud that is right for an organisation, security procedures that adhere to governance and compliance needs are enough to get you started; anything beyond that can (and in some cases should) be considered on a case-by-case basis, depending on the sensitivity of the data. The final step in establishing a solid foundation for the right cloud is centred around compliance. Like the guardrails around access management, a similar approach must be taken to guaranteeing that how you use the cloud sits comfortably with your organisation’s compliance procedures. This clarity will accommodate the varying levels of compliance requirements that organisations have – whether you are in finance or part of the public sector network. When it comes to compliance, it is vital that organisations ensure they have the appropriate controls in place from the outset. Initially, it is likely that this needs to sit with the IT team, who can have complete oversight of all activity taking place within the cloud – but depending on the disparate nature of what’s being done within the cloud, it might make sense

to consider how this can be outsourced, so that it doesn’t become a full-time job for your internal team. By taking these very individual elements into consideration, organisations will be able to create and build a cloud environment that works for them. While this does involve upfront work, this clear and considered approach will allow organisations to see the benefits in the long run, as they will have created a cloud environment that is as unique as they are – one that doesn’t jump on the hottest cloud trend when there might have been a better option for them.

Providing the wider organisation with a set of guardrails to guarantee a baseline of security and compliance is vital



Industry Insight: Michael Adams, CEO & managing director, Datacentre Speak With over 40 years’ experience in the data centre industry and after a delightfully refreshing chat at last year’s DCD event in London, we just had to get Michael involved in our Industry Insight segment. In this Q&A, Michael discusses issues in the industry, the holy grail of sustainability, how we can encourage new talent, as well as sharing with us his biggest pet peeves.

What were you doing prior to Datacentre Speak, and how did you first get involved in the industry?
I started my career as an electronic design engineer for ICL Computers in the IT industry during the 1980s, when data centres were called the ‘Mainframe Computer Room’ or ‘Machine Room’. At that time, most people who bought mainframes also bought a room from the same manufacturer to house them. The market was dominated by a few companies: IBM – who set the standard – as well as ICL, Burroughs and Univac. There was no such thing as a small data centre; anything else was a telecoms closet. Even then, it was all about cooling – the large mainframes were heat sensitive, and overheating could shut them down just as easily as a power outage. Things like sustainability and energy efficiency were not on the agenda – all that really mattered was ‘uptime’.

What are the biggest changes you have seen within the data centre industry over the last few years?
The level of technology selection and engineering design has expanded exponentially. There really is so much choice these days, and so many different approaches to solving the same challenges, that it really makes you think: who do you believe? Who has the optimum solution? And whose value chain can be trusted to deliver on budget and on time? These have always been questions which needed answering, but today the volume of information is vast, and the data centre must be absolutely fit for purpose. Many private and institutional investors have entered the market. As a result of over-promises or over-optimism, they have not received the returns they were led to expect. My advice to all stakeholders is: do your homework. Before you invest in facilities, first invest in feasibility studies and market assessments for the specific country and exact location. Only then engage an M&E consultancy to give you concept designs.
This is a competitive mature market and you can make a good investment, but a little wisdom and the right advice will help you get the best return for your money.

I admire some of the more high-profile industry personalities that have engaged with the educational sector to bring more awareness to industry

Which major issues do you see dominating the data centre industry over the next 12 months?
Speed of deployment and total cost of ownership. No-one knows how fast the data centre market will grow, nor to what size. Current estimates are calculated on historic data, but the world is changing, and our dependence on communication may drive new ways of doing business and of living everyday life.



How would you encourage a school leaver to get involved in the industry? Do you feel there is a current skills gap?
I don’t think that the data centre industry has much of a public profile. The general public doesn’t know what we do, and the only time they get insight is when the media airs or publishes an exposé of the energy we use and our carbon footprint. This is a great pity, because we do our work with great efficiency and our product is highly relevant to the upcoming Gen X-ers, Y-ers and Millennials. I admire some of the more high-profile industry personalities who have engaged with the educational sector to bring more awareness to the industry, as well as to point younger people and those re-training towards the skills gaps and shortages we face. There are many organisations looking for talent, and this is a good time, as the market is growing.

What, in your opinion, is the most important aspect of a successful modern-day data centre?
Cost and management of operations, as well as sustainability, are key factors. How cost-effectively you can build a data centre is only part of the equation; you also have to consider the total cost of ownership (i.e. the opex as well as the capex) of any facility over its entire life-cycle, including scalability, efficiency and flexibility.

With regards to sustainability, with data centres using so much power, how important do you think it is for the industry to do its bit to reduce its impact on climate change? How can that be achieved?
First of all, I think that as an industry we’ve done incredible things to make ourselves efficient, reliable and effective. We’re only around 50 years old, and it’s my opinion that we have really taken charge of these requirements without the prompting of legislators. What would help us to be more sustainable would be, firstly, if governments put pressure on the utility suppliers to cease the use of fossil fuels in power generation; secondly, they should provide greater incentives to build and operate green.
It’s a little bit shameful that data centre purchasing decisions are made on cost rather than on an ethical basis. When data centre service providers invest to make their facilities as sustainable as possible, their reward should not be a loss in cost competitiveness because of the perceived premium. In any event, this is out of touch with the real world, where brands often get better traction with their customers because of their green credentials. The big challenge, to go to the next level, will be for corporations to support data centres with a sustainable approach and execution. The technology is available – BUT is business prepared to pay for it?

There is speculation that the physical data centre has had its day. What are your views on that?
For the foreseeable future, data centres are here to stay. You only have to look at the emerging market for edge computing to see that the number of facilities is set to rise dramatically. The predictions about the volume of data which is going to be generated outside of what we currently consider to be a data centre mean that we may need to redefine what we mean by the term ‘data centre’.

Who owns them, how they are operated, and their size and location may all change, but they are not going anywhere.

What part of your role do you find the most challenging?
The fact that the same mistakes are made time after time in data centre design and operations. I think it’s time we saw a more collaborative effort to share knowledge and best practices, as well as data about why things fail. It’s the very definition of madness to keep doing the same things and expect different results. We can learn lessons from the airline operators and automotive manufacturers.

What is the toughest lesson you’ve learned through your career, or the best piece of advice you’ve been given?
The best advice I’ve ever been given is that there is strength in unity; a cohesive team which is pulling in the same direction, with its eyes on the end-goal, can do great things. With the right incentives, of course.

It’s a little bit shameful that data centre purchasing decisions are made on cost rather than an ethical basis

Just for fun
What’s your biggest pet peeve?
I’m a ‘people person’ and I like to believe that all people are good people with good intentions. Being honest and being yourself is very important. I trust in people’s integrity, and am very disappointed when I’m proved wrong (especially when it’s already obvious to others).

What are your hobbies/interests outside work?
Music is a big part of my life, and I played in a rock band for 18 years from my teens. Today I still have a ‘jam’ on my guitar with my friends. I have also sung in various amateur choruses and operas in my time.

Where is your favourite holiday destination, and why?
I’m a nature boy – I’ve always enjoyed the African bush, and have spent many enjoyable days tracking and observing animals in their natural habitat – the wilderness.

If you could travel back to any time period in history, which would it be and why?
I have always been fascinated by Roman times, as they were so advanced in their technology and lifestyle. I believe we’re still discovering things that the Romans had already mastered nearly 2,000 years ago.

June 2020 www.datacentrereview.com 35


Centiel achieves ISO 14001:2015 Environmental Management System Accreditation


UPS manufacturer Centiel UK has announced it has been awarded ISO 14001:2015 Environmental Management System accreditation by BSI. ISO 14001 is the world's most widely recognised environmental management system standard, and Centiel UK now adds it to its BSI-accredited ISO 9001:2015 Quality Management System and OHSAS 18001:2007 Occupational Health & Safety Management System certifications.

David Bond, chairman at Centiel UK, commented, "We are now one of the very few companies in the UPS industry in the UK to hold all three of these key BSI accreditations, enabling us to deliver more than regulatory compliance and the ability to meet supplier requirements. BSI has received official UK accreditation status from UKAS, which means it has been assessed against internationally recognised standards. This status also means that the certificates issued by BSI are both credible and impartial."

Centiel is known for its fourth-generation true modular UPS, CumulusPower, which benefits from 'nine nines' (99.9999999%) system availability and 97% efficiency at low loads. Centiel's additional certifications include Safe Contractor status, the CIPS Sustainability Index and Constructionline gold level.

Centiel • 01420 82031 www.centiel.co.uk
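The 'nine nines' figure quoted above translates into a remarkably small downtime budget. As a rough back-of-envelope illustration (our arithmetic, not a Centiel calculation):

```python
# Rough downtime implied by an availability percentage, e.g. the
# '9 nines' (99.9999999%) system availability quoted above.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_per_year_ms(availability_percent: float) -> float:
    """Expected downtime per year, in milliseconds."""
    unavailability = 1 - availability_percent / 100
    return unavailability * SECONDS_PER_YEAR * 1000

# 'Nine nines' works out to roughly 32 ms of downtime per year,
# versus about 5.3 minutes for the more familiar 'five nines'.
print(round(downtime_per_year_ms(99.9999999), 1))            # ~31.6 ms
print(round(downtime_per_year_ms(99.999) / 1000 / 60, 1))    # ~5.3 minutes
```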

GS Yuasa launch new company catering to Nordic and Baltic markets


GS Yuasa Battery Sales UK Ltd has announced the establishment of a new sales company to serve its Nordic and Baltic export regions. GS Yuasa Battery Nordic began trading from a new distribution centre in Jönköping, Sweden, in December 2019.

The creation of a new sales company and distribution centre in Sweden represents a significant investment from the battery manufacturer as it aims to increase market share in the Nordic and Baltic regions. The move means customers in Sweden, Norway, Finland, Denmark, Iceland, Latvia, Lithuania and Estonia will be better served by the business. Benefits include shorter lead times, local support and reduced order quantities, meaning customers can order more regularly. The new distribution centre will stock and supply GS Yuasa's full product range, increasing year-round availability and improving on current delivery times. With a wide range of products for varied applications, Yuasa and GS batteries are the number one choice across Europe, trusted by consumers and professionals alike.

GS Yuasa • +44 (0) 1793 833555 www.gs-yuasa.eu

Schneider Electric and AVEVA extend partnership

Schneider Electric and AVEVA have announced an expansion of their partnership, with the two companies looking to simplify the operation of multi-site and hyperscale data centres. As hyperscale providers build out expanding fleets of data centres to meet worldwide demand, the complexity of operating and maintaining these facilities is creating an unprecedented set of challenges. Operating at this scale requires a different approach for the mission-critical facilities powering the globe's digital infrastructure.

Through a combination of AVEVA Unified Operations Centre and Schneider Electric's EcoStruxure for data centres, users get a homogenous view of engineering, operations and performance across a heterogeneous, legacy installed base. Hyperscale data centre providers will benefit from this partnership by connecting platforms and data sets that previously existed in disparate systems. They will also be able to scale regardless of the number of sites or their global location. According to Schneider Electric, data centre staff will be empowered to make faster, more informed decisions and optimise asset and operational efficiency throughout the data centre lifecycle. As a result, data centre providers can deliver a globally consistent experience to address the expanding digital infrastructure needs of their clients.

Schneider Electric • 0870 608 8608 www.schneider-electric.co.uk


Nvidia debuts first Ampere-based GPU, the A100 for data centres


Nvidia has announced that its A100 GPU, the first based on its Ampere architecture, has entered full production and begun shipping to customers worldwide. The A100 is aimed squarely at the data centre market, with the company promising the largest leap in performance to date across its eight generations of GPUs. The latest range has been designed to unify AI training and inference, with a reported performance boost of up to 20x over its predecessors. The A100 is also built for data analytics, scientific computing and cloud graphics.

"The powerful trends of cloud computing and AI are driving a tectonic shift in data centre designs, so that what was once a sea of CPU-only servers is now GPU-accelerated computing," said Jensen Huang, founder and CEO of Nvidia. "Nvidia A100 GPU is a 20x AI performance leap and an end-to-end machine learning accelerator — from data analytics to training to inference. For the first time, scale-up and scale-out workloads can be accelerated on one platform. Nvidia A100 will simultaneously boost throughput and drive down the cost of data centres."

Nvidia • 01189 184464 www.nvidia.com

EcoDataCenter first to deploy chassis-level immersion liquid cooling solution from Iceotope, Schneider Electric and Avnet


EcoDataCenter's new colocation facility in Falun, Sweden, has been billed as the world's first climate-positive data centre, and its chassis-level immersion liquid cooling solution is key to that claim. That's because it promises to reduce rack cooling energy by up to 90% and slash overall data centre energy usage by up to 14%.

Developed by Iceotope, the chassis-level immersion cooling solution enables 46kW per rack, with the core technology capable of scaling to future-proof power densities of 100kW plus. Liquid cooling improves chip and hard drive reliability by providing a lower, stable operating temperature, as well as increasing the available white space by eliminating the requirement for hot aisle/cold aisle layouts. The cooling arrangement also enables high-grade heat to be captured for reuse in a local renewable energy scheme.

"EcoDataCenter has embraced this innovative new technology as an early adopter, knowing that companies in the market will soon see the operational and environmental benefits and follow our lead," said Lars Schedin, CEO of EcoDataCenter.

EcoDataCenter • +46 703 884 452 www.ecodatacenter.uk
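The two headline figures can be sanity-checked against each other. As a hedged back-of-envelope sketch (our arithmetic, not EcoDataCenter's or Iceotope's), a 14% overall saving from a 90% cut in rack cooling energy implies cooling accounts for roughly 15–16% of total facility energy:

```python
# Back-of-envelope check on the quoted figures: if cooling is some
# fraction of total facility energy and cooling energy drops by 90%,
# what overall saving does that produce?

def overall_saving(cooling_fraction: float, cooling_reduction: float) -> float:
    """Facility-wide energy saving when cooling energy drops by
    `cooling_reduction`, given cooling is `cooling_fraction` of total."""
    return cooling_fraction * cooling_reduction

# Solving backwards from the article's figures: a 14% overall saving
# from a 90% cooling-energy cut implies cooling was roughly
# 14% / 90% ≈ 15.6% of total facility energy.
implied_cooling_share = 0.14 / 0.90
print(f"{implied_cooling_share:.1%}")                          # ~15.6%
print(f"{overall_saving(implied_cooling_share, 0.90):.0%}")    # 14%
```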

EdgeConneX's Warsaw data centre gains support for Megaport's SDN Cloud

EdgeConneX has announced that Megaport, a Network as a Service (NaaS) provider, has completed the deployment of its elastic cloud connectivity fabric at the EdgeConneX Warsaw, Poland, data centre. Megaport's software-defined cloud connectivity solutions are designed to help enterprises reduce operational costs and increase control and scale by leveraging multiple on-demand cloud services, including AWS, Microsoft Azure, Google Cloud, IBM Cloud and Oracle Cloud, among others.

So far, nine EdgeConneX Edge data centre markets worldwide have deployed Megaport's SDN, including Portland, Santa Clara, San Diego, Phoenix, Denver, Houston, Memphis, Munich and Warsaw. The EdgeConneX Warsaw data centre is a carrier-neutral, 15,070 sq ft facility with 2.5MW of N+1 capacity.

"Businesses in these markets demand the ability to choose different clouds on demand and want a truly local way of accessing leading cloud service providers and transmitting data between locations. Our partnership with Megaport certainly provides this. In effect, we are bringing a lower-cost, more secure cloud to customers in Warsaw and an open door to other edge markets worldwide," said Dick Theunissen, managing director EMEA at EdgeConneX.

EdgeConneX • +1 (703) 880-5404 www.edgeconnex.com



Welcome to the 2020s

The new decade is well and truly underway, but what does it mean for your data centre network? Steve Brar, senior director, Portfolio Marketing at Aruba, takes a look at what's to come.

We are now firmly into 2020, and between WW3 nearly kicking off in January and the current Covid-19 pandemic, it has been a fairly crazy way to start the decade. It has been a year of big changes for us all – now more than ever we're finding ourselves relying on tech for both work and leisure. For the tech industry, change is a constant, and one piece of tech facing major change is the data centre network. After two decades marked by the centralisation of computing and infrastructure, the pendulum is swinging back toward edge computing. According to Gartner, today 90% of data is created and processed inside centralised data centres or the cloud, but by 2025 about 75% of data will need to be processed, analysed and acted upon at the edge.


Networking in the edge-to-cloud era

Digital transformation, and the harnessing of data from connected devices to create real-time, connected experiences at the edge, have been driving this shift. Big changes to our networks will be required, especially as you seek to balance the new requirements of edge data centres with the ever-growing use of cloud services and any remaining on-premises footprint you have. With this balancing act in mind, here are a few considerations to remember.

1. Simplify using automation

As application teams continue to adopt DevOps and other agile methodologies to speed up software development, expect networking operations to become far more automated and streamlined than they are today. Any solutions adopted here will need to take into account both current and future operating models, as well as existing investments. For example, consider turnkey automation to simplify those common yet time-consuming configuration tasks. For teams with more seasoned DevOps practices, extending common automation platforms to network-related workflows will be an absolute must. As DevOps and agile practices are adopted more widely by application teams, expect them to begin to influence how other functions within IT operate.
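The turnkey automation idea can be illustrated with a minimal sketch: generating repetitive switch configuration from structured data rather than typing it by hand. The config syntax, VLAN plan and interface names below are illustrative only, not tied to Aruba or any particular vendor:

```python
# A minimal sketch of config-generation automation: render a config
# stanza per VLAN from a simple data model, so the repetitive typing
# (and the typos that come with it) disappears.
from string import Template

VLAN_TEMPLATE = Template(
    "vlan $vlan_id\n"
    "  name $name\n"
    "  interface $uplink\n"
    "    switchport trunk allowed vlan add $vlan_id"
)

def render_vlans(vlans: list) -> str:
    """Render one config stanza per VLAN record."""
    return "\n".join(VLAN_TEMPLATE.substitute(v) for v in vlans)

config = render_vlans([
    {"vlan_id": 100, "name": "prod-web", "uplink": "1/1/49"},
    {"vlan_id": 200, "name": "prod-db", "uplink": "1/1/49"},
])
print(config)
```

In practice this data model would live in version control and feed an automation platform, so network changes flow through the same review-and-deploy pipeline as application code.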


2. Analytics providing actionable insights

Troubleshooting is perhaps one of the biggest drains on network operations resources. Network visibility is therefore incredibly important to shortening mean time to repair (MTTR), improving IT service delivery and helping keep short-staffed teams focused on more strategic matters. Network-wide telemetry, captured and processed natively on each node, will be a massive leap forward. Analytics with built-in remediation will be critical to better network stability, helping troubleshooters proactively address, or even pre-empt, issues that affect users and the business. Predictive analytics will help anticipate issues before they arise, and will also help with capacity planning, especially during periods of high network usage, to ensure the network is the right size to deliver the experience users demand.

3. 24 hours of availability

Even setting aside the massive demands placed on networks by the Covid-19 pandemic, the need for resilient networks will only grow; in the digital era, even a minor issue can have huge ramifications for the business. While automating day-to-day operations will reduce the possibility of human error, networking teams will also need a far simpler, more reliable way of ensuring availability while delivering non-disruptive upgrades. A cloud-native, microservices-based operating system could be an ideal solution: it ensures resiliency at the software level, and the ability to perform live software upgrades in place of maintenance windows will also be key.

Conclusion

From automating tasks, to acting on predictive analytics, to offering true 24-hour availability: as we shift away from the centralised data centre model we know today and back towards the edge, these are just some of the areas at the forefront of change. It is very likely that you have already been taking steps towards this new era of data centre networking. However, we hope the considerations laid out above provide some useful ideas on how to approach the changes as they come.
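The analytics idea in consideration 2 can be sketched in miniature: flagging telemetry samples that sit well outside recent behaviour, so operators see problems before users do. The data, threshold and function names here are illustrative, not any vendor's implementation:

```python
# A simple anomaly flag over network telemetry: mark samples more than
# `threshold` standard deviations from the mean of the series.
from statistics import mean, stdev

def flag_anomalies(samples: list, threshold: float = 3.0) -> list:
    """Return the indices of outlier samples in the series."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(samples)
            if abs(s - mu) > threshold * sigma]

# Link utilisation (%) sampled every minute; the spike at index 5
# is the kind of event that would be surfaced for investigation.
utilisation = [42.0, 44.1, 43.5, 41.8, 42.9, 97.0, 43.2, 42.6]
print(flag_anomalies(utilisation, threshold=2.0))  # → [5]
```

Production analytics platforms go far beyond this, with per-node telemetry processing, baselining and automated remediation, but the core idea is the same: turn raw counters into a short list of things worth a human's attention.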

If you sell products or work on projects within Power, Lighting, Fire Safety & Security, Energy Efficiency and Data Centres, make sure you enter:

Visit awards.electricalreview.co.uk Your business and your team could be celebrating in the spotlight at the ER & DCR Excellence Awards Gala Dinner on 24 September 2020 at the breathtaking Christ Church, Spitalfields in London!


Entertainment Sponsors:
