Green IT & Sustainability
Storage, Servers & Hardware
When it comes to cooling your facility, could you be overspending?
The hidden costs of storage management.
Final Say: Facial recognition – Is the technology already out there fit for purpose?
News
04 • Editor’s Comment Get ready for DCR Live.
06 • News The latest stories from the sector.
Features
12 • Edge Computing Ali Fenn of ITRenew explores ways to optimise sustainability in edge data centres in order to satisfy the growing demand for green.
20 • Green IT & Sustainability When it comes to cooling your facility, could you be overspending? According to Anuraag Saxena of EkkoSense, you probably are.
24 • Colocation & Outsourcing With the colocation space becoming ever more saturated, David Fiore of Opengear discusses how colo providers can differentiate themselves in an already crowded marketplace.
30 • Storage, Servers & Hardware
Quantum’s Eric Bassier explores the hidden costs of storage management and how modern storage solutions can add business value through data insights and automation.
Regulars
34 • Industry Insight It’s a fact of life: accidents happen, and we humans are not infallible. Here Philip Bridge of Ontrack explores how to protect your data should the worst happen.
36 • Products
Innovations worth watching.
38 • Final Say Facial recognition technology has long been touted by law enforcement as helping to fight crime. But is strict regulation needed to prevent serious human rights violations?
Editor’s Comment

Greetings Reviewers and welcome to Q2 of Data Centre Review. By this point, I can almost smell the freedom and the peroxide, and damn it smells good.

As always here at DCR, we have been squirreling away to bring you guys some more new stuff, because to be honest, after the last year, I’m sick of doing the same sh*t every single day. It got really old for me about three months in, to be fair. In my defence, I started off strong: during lockdown round one, I was jogging all over the place, eating healthily and generally trying to stay virus free (well, that was a success – not). Fast forward to now, I’ve barely left the house and don’t think I will ever be able to wear ‘work’ clothes again.

Anyway, back to the new stuff. On June 29 and July 1, we shall be bringing you DCR Live 2021, a virtual conference spread across two (non-consecutive) days, hosted by yours truly – sorry about that. DCR Live will be packed full of expert speakers and key players from within the data centre industry, offering up their opinions on some of the most prevalent industry issues of the moment. Across this two-day event, we have an exciting agenda lined up, which, by the time this goes to press, should have been revealed. We will be covering topics such as sustainability, 5G, edge, closing the skills gap, life after Huawei, and the dreaded B word – no, not Boris, the other one (Brexit) – to name but a few. And yes, it’s a little while away yet, but it doesn’t hurt to be prepared.

Over the coming weeks, we will be announcing new speakers and sponsors, so keep an eye on our social media pages @elecreviewmag and @DCRmagazine on Twitter, as well as our weekly editor’s newsletters and LinkedIn accounts. So, get DCR Live in the diary, and we hope you enjoy the issue! Ciao for now.

Claire Fletcher, Editor
Claire Fletcher firstname.lastname@example.org
Jordan O’Brien email@example.com
DESIGN & PRODUCTION
Alex Gold firstname.lastname@example.org
GROUP ACCOUNT DIRECTOR
Sunny Nehru +44 (0) 207 062 2539 email@example.com
Kelly Baker +44 (0)207 0622534 firstname.lastname@example.org
Wayne Darroch
PRINTING BY Buxton
Paid subscription enquiries: email@example.com
SJP Business Media, 2nd Floor, 123 Cannon Street, London, EC4N 5AU
Subscription rates: UK £221 per year, Overseas £262
Electrical Review is a controlled circulation monthly magazine available free to selected personnel at the publisher’s discretion. If you wish to apply for regular free copies then please visit: www.electricalreview.co.uk/register
Electrical Review is published by SJP Business Media
2nd floor, 123 Cannon Street, London EC4N 5AU
0207 062 2526
Any article in this journal represents the opinions of the author. This does not necessarily reflect the views of Electrical Review or its publisher, SJP Business Media.
ISSN 0013-4384 – All editorial contents © SJP Business Media
Follow us on Twitter @DCRmagazine
Join us on LinkedIn
4 www.datacentrereview.com Q2 2021
AS IF DEMAND FOR DATA CENTRES IN EUROPE WASN’T HIGH ENOUGH, IT CONTINUES TO CLIMB
The latest highlights from all corners of the tech industry.
Techie looking for work? The skills currently on the most wanted list
UK tech job openings declined in 2020, but demand for certain skills is on the rise, most likely driven by the pandemic. The number of technology job listings in the UK declined by 57% during the past year, with fewer than 55,000 open roles advertised, according to the latest UK Tech Talent Tracker from Accenture. Despite this, demand for skills in cutting-edge technologies such as cloud, artificial intelligence (AI) and robotics saw a resurgence in many cities across the country. The tracker, which analysed LinkedIn’s Professional Network data, found that the overall decline was driven by a reduced number of job listings for data analytics and cybersecurity professionals, which fell 53% and 54%, respectively. With nearly 35,000 roles advertised, cloud computing was the most in-demand technology skill in the UK over the past year. In addition, job postings for AI skills have seen a resurgence, jumping 73% in six months to approximately 6,800. Robotics roles are also up by almost two-thirds, to more than 3,000, with demand for blockchain and quantum computing skills jumping 50% and 46%, respectively.
EcoAct, Interxion, Schneider Electric and Calanques National Park announce seagrass conservation programme to help combat climate change
EcoAct, an Atos group company specialising in climate consultancy and an active player on the ground, has partnered with Interxion, a Digital Realty company; Schneider Electric IT France; and Calanques National Park to launch the ‘Prométhée – Med’ research project, which aims to establish the first methodology for the certification of seagrass conservation and preservation measures.
Despite already huge demand for data centres in major European cities, the market is not looking to slow down any time soon and is predicted to climb further this year. JLL has forecast a 21% increase in new data centre capacity, with 438 MW to be added to established markets this year alone. JLL’s report reveals that increased cloud migration and technology adoption drove unprecedented demand and activity in the data centre industry in 2020. Take-up in Europe’s main data centre markets of Frankfurt, London, Amsterdam, Paris and Dublin (FLAP-D) increased 22% year-on-year, reaching 201.2 MW, with this pace of growth expected to continue for the rest of 2021. Despite a slow start to 2020, enterprise colocation demand picked up in the second half of the year across Europe. Take-up in London increased 72%, with 87 MW transacted throughout the year. Frankfurt also had a record year for absorption, with 69 MW of deals and 124 MW of headline signings. Significant growth in new supply placed Frankfurt as the largest mainland colocation market in Europe.
KAO DATA TEAMS UP WITH CAMBRIDGE SCIENCE CENTRE TO HELP ENCOURAGE MORE YOUNG PEOPLE INTO STEM
Kao Data has announced a three-year funding agreement with the Cambridge Science Centre, as well as becoming a member of its Executive Council. No need to brag, guys. The project aims to help bridge the digital divide in East Anglia and beyond, providing access to technology and resources that will encourage more seven-to-11-year-olds into science, technology, engineering and mathematics (STEM).
How carbon-free is your cloud? New Google data lets you know
Google has a new sustainability goal: running its business on carbon-free energy 24/7, everywhere, by 2030. Now, the tech giant is sharing data about how it is performing against that objective, so that customers can ultimately select Google Cloud regions based on the carbon-free energy supplying them. Completely decarbonising Google’s data centre electricity supply is the next critical step in realising a carbon-free future and supporting Google Cloud customers with the cleanest cloud in the industry. On the way to achieving this goal, each Google Cloud region will be supplied by a mix of more and more carbon-free energy and less and less fossil-based energy. Google measures its progress along this path with its Carbon Free Energy Percentage (CFE%). Now, Google is sharing the average hourly CFE% for the majority of its Google Cloud regions on GitHub and cloud.google.com.

Bordeaux is about to get its very first Equinix data centre
Equinix has announced its intention to open its first data centre in Bordeaux, France, in Q3 2021. The new facility – dubbed BX1 – will be the first carrier-neutral data centre in the New Aquitaine region, built to meet the needs of edge computing and growing digital ecosystems. With direct fibre links to Equinix’s International Business Exchange (IBX) sites in Paris, BX1 will provide global businesses and local authorities located in New Aquitaine with the ability to connect directly and securely to the world’s digital economy, via comprehensive digital ecosystems.
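The CFE% metric lends itself to simple arithmetic. As a rough illustration – the function and field layout below are hypothetical, not Google's published schema – an average hourly CFE% is just the per-hour share of carbon-free electricity, averaged over the period:

```python
# Hypothetical sketch of a Carbon Free Energy Percentage (CFE%) calculation:
# for each hour, the share of a region's electricity that came from
# carbon-free sources, averaged across all hours. Data shape is illustrative.

def average_hourly_cfe(hours):
    """hours: list of (carbon_free_kwh, total_kwh) tuples, one per hour."""
    shares = [cf / total for cf, total in hours if total > 0]
    return 100 * sum(shares) / len(shares)

# Two example hours: one 90% carbon-free, one 50% carbon-free
print(round(average_hourly_cfe([(90, 100), (50, 100)]), 1))  # 70.0
```

In practice the interesting part is the hourly granularity: an annual average can hide fossil-heavy nights, which is exactly what 24/7 matching is meant to expose.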
Durham’s public Wi-Fi roll out connects council with Covid recovery plan
A £1.3m framework between Durham County Council and technology integrator North is set to help the county with its Covid-19 recovery plan, while residents and businesses will benefit from free town centre Wi-Fi. The installation will allow the council to understand visitor trends, including new and repeat visitors, the length of time spent in specific areas and the routes taken through each, day and night-time traffic for economy reporting, and monitoring the effect of high street changes regarding planning and regeneration schemes. Due to the challenges posed by Covid-19, the project objectives have also been expanded to support the council’s recovery plan.
Modular UPS – Reimagined Socomec has democratised big data centre technology to redefine the modular UPS.
Today’s critical infrastructures need to be flexible in terms of both physical products and extended support services – with the ability to accommodate rapid deployment requirements or system upgrades, all whilst maintaining the system’s maximum availability. Some of the latest modular UPS systems can now solve a number of problems in parallel – but this hasn’t always been the case. Designers, installers, maintenance staff and operators have developed preconceived ideas when it comes to modular technology – rooted in sub-optimum experiences associated with legacy equipment. In the past, modular did not always mean real modularity – but integrated power specialist Socomec has embedded feedback and experience from customers and end users in its multi-disciplinary development process to bust the myths surrounding modular, and to reimagine next-level electrical infrastructure and operational performance.
Not all modular UPS systems are created equal
By understanding our customers’ specific needs and using a collaborative ‘test and learn’ approach – involving customers early on in the development process – Socomec has addressed the market’s greatest concerns when it comes to modularity, whilst also reducing the complexity of the technology to provide absolute confidence in the new release. Colin Dean, UK managing director at Socomec, explains, “By posing a series of practical questions, it’s possible to drill down into what really matters when it comes to modular added-value performance. For example, can you carry out a module hot swap in just five minutes? Are you certain that your UPS has no single point of failure? Can you perform truly risk-free online maintenance? Are you sure that your system won’t propagate faults? If the answer to any of those questions is no, then it’s time to re-think your modular system.”
Socomec has driven a stream of innovation to guarantee the performance of the new electrical ecosystem by developing a disruptive range of UPS solutions that make the latest advances in technology more accessible – and easier to deploy – than ever before.
Colin continues, “By removing most of the risk and uncertainty often associated with new developments, and starting with the intrinsic value of products within our current range – combined with insight and expertise from the market – it has been possible to deliver something exceptional.
“Because we have included our customers in the development process – at every step along the way – we have been able to take all the knowledge of our big data centre technology and democratise that technology – making it accessible and relevant for every application.
“Furthermore, no matter which of our systems is right for a particular application, we have made some key features available across the entire Ultimate Modular UPS range – because we felt they were too good not to share.”

New technology – De-risked
With more than 20 years of experience in developing and supplying modular solutions, Socomec’s Modulys solutions provide the ultimate availability, scalability and extended lifetime to critical applications in IT infrastructures. Based on proven technology – with several thousand modular systems in the field – the range is available from 2.5 kW to 4,800 kVA/kW and has been described as the gold standard in terms of power scalability and risk-free maintenance in a truly online modular format.
Safe and easy deployment
No matter whether an installation is focused and compact or seriously super-scaled, installations are guaranteed to be safe and uncomplicated – for everyone. The unique flexibility of the modular system de-risks late-stage specification changes or last-minute on-site adaptations, and standardised bricks mean that customisation is quick and easy. Pre-engineered asset installation means that cabling challenges are a non-issue, and the easy configuration display delivers exactly what it promises. Combined with guaranteed future hardware and firmware compatibility, installation has been carefully designed to be as easy as it gets.

The five-minute, zero-risk hot swap
True modularity means that a module can be added or removed while the load is fully protected and with zero risk of human error. The scalability is fast and foolproof – with no engineering skills required, no specific software tools or complex procedures. The simple plug-in process means that there are no requirements to place ‘hands inside’ – nor are there any complicated cabling reconfigurations to grapple with, therefore eliminating potential mistakes or hazards. Simplicity itself: additional modules self-test and auto-configure, and the hassle-free automatic connection/disconnection ensures operator safety at all times.

Risk-free maintenance
Modular solutions deliver much faster risk-free maintenance at critical and sensitive stages – all while protecting the load in online mode during operation, maintenance and upgrade. Self-diagnostics provide immediate fault detection – and isolation, when required. Local spare modules make for an easy swap and low MTTR – without intervention from specialist engineers. Furthermore, full module extraction means that maintenance can take place outside the critical system.

Superior resilience
By rightsizing through modularity and robust design, system reliability can be maximised for the best possible overall resilience at the entire modular UPS level. Designed and engineered with no single point of failure – and zero fault propagation – the power module delivers a certified 1,000,000-hour MTBF, with appropriate
granularity between intrinsic redundancy and MTBF impact at system level, giving the required superior resilience. Colin Dean continues, “Advanced mechanical and electrical segregation design eliminates fault propagation; there is no single point of failure, thanks to distributed control and peer-to-peer information sharing. “When thinking about the brain that powers the system, it’s important to remember that with a unique system control, a single point of failure is inevitable – which is why it’s so important to have shared control between different modules. “The Modulys range has been designed with distributed intelligence so that all modules operate intelligently on a peer-to-peer basis, ensuring load sharing, synchronisation and selective tripping capabilities, together with the coordination of static bypass control. What that means is that if one module is lost, the
others are still able to exchange information with each other and to run the system whilst maintaining power availability.”

Modular – Reimagined for the future
Socomec’s Modulys range has been developed to deliver the highest quality power – via the latest technologies – that is simple to deploy, whether for greenfield or priority upgrade projects. The flexibility of this next-generation modular architecture enables users to adapt – rapidly – to ever-changing requirements, and the hardware and firmware have been designed to provide a lasting solution with guaranteed future compatibility – across the entire system.

Contact Socomec: www.socomec.com
firstname.lastname@example.org | 0044 128 586 3300
Sustainability: Have you got the edge?
With green credentials now top of many customers’ agendas, as well as the need for data to be processed closer to the source, Ali Fenn, president at ITRenew, explores ways to optimise sustainability in edge data centres in order to satisfy these growing demands.
Edge infrastructure brings crucial new compute, storage and networking capabilities to businesses and communities across the globe. It also brings new challenges. Distributed data centres come in all shapes, sizes and form factors, developed to address varied business needs, workload priorities, footprints, environmental conditions and regional constraints (both geographic and regulatory). This means operational efficiencies, common in purpose-built core data centres, are not available at the edge. And it means more non-traditional data centre operators will be deploying solutions in the market. The imperative for sustainability will also play out differently at the edge. It will need to encompass new models for efficiency, a holistic look at the carbon impact of IT infrastructure, and both consumed and avoided emissions. Circular economic models, with their financial and environmental benefits, will continue to help guide the data centre industry to meet the crisis and leverage the opportunities at the edge. Rapidly expanding 5G and IoT services and applications are having a significant impact on every aspect of our lives. Not coincidentally, they are changing how data centre operators and a plethora of new infrastructure deployment constituents deploy their capacity. Companies are building facilities at the edge, much closer to their operations and the people they serve, which helps meet bandwidth and latency requirements. They are also leveraging a multitude of existing spaces, from public to private to outdoor. These proximal edge deployments provide local processing and storage, but will inevitably look and feel quite different from traditional core data centres. By operating close to where the world lives and works, edge data centres and compute infrastructure are often located in structures that are not purpose-built for housing IT hardware.
New formats are emerging, such as modular data centres in shipping containers and micro data centres integrated into cell towers. Some are rack-based, others are small form factors, sometimes further integrated into ruggedised chassis. With more diverse locations and formats than the core, edge data centres will require their own strategies for sustainability. New opportunities for reducing total carbon impact come into play based on the facilities’ local regions and cultures.

Lower PUE and renewable energy are the first step
A decade ago, data centre sustainability was foremost a matter of reducing electricity usage. Led by the world’s largest cloud services companies, also known as hyperscalers, operators developed highly efficient technologies such as DC power distribution and cooling strategies to drive down power usage effectiveness (PUE) and thereby minimise associated Scope 1 and 2 footprints. However, such specialised techniques are difficult to replicate across a large number of smaller, distributed edge data centres, so PUE is not the driving criterion for achieving sustainability at the edge.
Having driven down PUE, the data centre industry moved to become more carbon neutral by buying renewable energy credits (RECs) and employing other financial mechanisms to offset Scope 1 and 2 emissions. In the last few years, many operators have actually committed to running their facilities on renewable power. Edge data centres can and should use renewable power wherever possible, but contracting green energy for widely distributed facilities is significantly more complex than doing so for one large core data centre.
The industry is committing to these measures through efforts such as the Climate Neutral Data Centre Pact, an industry association of European cloud and data centre operators including AWS, Google and Equinix, as well as smaller and national providers, backed by 17 industry bodies. These operators are dedicated to becoming climate neutral by 2030 by various means, such as a target PUE of 1.3 in cool climates and matching electricity demand with 100% renewable energy. As they diversify their operations to include more edge data centres, their practices will help to make edge data centres more efficient as well.
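For readers unfamiliar with the metric, PUE is simply the ratio of total facility energy to the energy that actually reaches the IT equipment. A minimal sketch, with illustrative numbers:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment
# energy. A PUE of 1.0 would mean every watt goes to IT; the pact target
# cited above is 1.3 for cool climates. Numbers here are illustrative.

def pue(total_facility_kwh, it_equipment_kwh):
    return total_facility_kwh / it_equipment_kwh

print(round(pue(1300, 1000), 2))  # 1.3 -- meets the pact's cool-climate target
print(round(pue(2000, 1000), 2))  # 2.0 -- as much energy on overhead as on IT
```

The overhead captured by the ratio is mostly cooling and power distribution, which is why the efficiency techniques described above attack exactly those two systems.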
Moving past Scope 2 to Scope 3 emissions
More importantly, the members of the pact are committed to innovating their operations for a circular economy model. To that end, they will assess 100% of their used server equipment for reuse, repair or recycling by 2030. This is the next frontier in sustainability. The sourcing, manufacturing and transportation of IT equipment is a significantly greater contributor to net carbon emissions (as much as 75% of embodied carbon) than the ongoing data centre operation, so reuse of existing data centre hardware has a direct and substantial impact on sustainability. For example, extending the lifespan of IT equipment from the typical three years in a hyperscale facility to nine years through downstream reuse can result in a net CO2e saving of 24%. Edge data centres are a great market for this hardware, where performance per dollar is the critical dimension of hardware selection. Moreover, hyperscalers get the newest technology from vendors much earlier than the broader market, so their ‘end of life’ technology may have been available on the open market for as few as 18 months.
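The intuition behind that lifespan figure can be sketched with a toy amortisation model: embodied (manufacturing and transport) carbon is paid once and spread over the years of service, while operational carbon recurs annually. The numbers below are invented for illustration and this is not ITRenew's actual methodology; with real embodied-to-operational ratios the net saving lands nearer the article's 24%.

```python
# Toy model: annual carbon footprint of a server, amortising one-off
# embodied CO2e over its service life and adding recurring operational
# CO2e. All figures are made up for illustration.

def annual_footprint(embodied_co2e, operational_co2e_per_year, lifespan_years):
    return embodied_co2e / lifespan_years + operational_co2e_per_year

three_year = annual_footprint(embodied_co2e=900, operational_co2e_per_year=100, lifespan_years=3)
nine_year = annual_footprint(embodied_co2e=900, operational_co2e_per_year=100, lifespan_years=9)
print(f"saving: {100 * (1 - nine_year / three_year):.0f}%")  # saving: 50%
```

The key design point is that the embodied term shrinks with lifespan while the operational term does not, which is why second-life reuse pays off even when older hardware is slightly less power-efficient.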
Open hardware standards, such as from the Open Compute Project, now make it possible for solution providers to tailor and certify the decommissioned hyperscale IT hardware for use in other data centres. This ‘second life’ places IT hardware into the circular economy. By definition, open hardware has the advantages of also letting the solution provider install and update firmware, reconfigure racks, certify the gear for specific software, integrate and test it all, and provide full warranty and support services, rendering the second-life equipment performance-equivalent to new, proprietary equipment.
By lowering hardware costs, these edge architecture innovations make it easier for data centre operators to expand their operations to wherever they are needed. IT hardware commonly represents about 75% of the TCO of running traditional data centres. Using recertified hyperscale technology, the data centre can provide a 50% TCO advantage, and in some edge environments with smaller unique capacities, operators could save even more.
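The arithmetic behind those TCO figures is worth making explicit: if hardware is ~75% of TCO and only the hardware line item changes, a 50% total saving implies recertified gear costing roughly a third of new. A hedged sketch (the function and factors are illustrative, not a vendor's pricing model):

```python
# Rough arithmetic behind the TCO claim above. Shares and cost factors are
# the article's ballpark figures, not quoted vendor pricing.

def tco_saving(hardware_share, hardware_cost_factor):
    """Fraction of total TCO saved when the hardware line item is scaled by
    hardware_cost_factor and all other costs are unchanged."""
    new_total = (1 - hardware_share) + hardware_share * hardware_cost_factor
    return 1 - new_total

# Hardware at 75% of TCO, recertified gear at ~1/3 the cost of new
print(f"{tco_saving(0.75, 1/3):.0%}")  # 50%
```

The same formula also shows why the saving is smaller in deployments where hardware is a lower share of TCO, such as sites dominated by power or connectivity costs.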
A path to carbon neutral and carbon negative As data centre operators adopt a circular economy model for their edge facilities, they are in a good position to pursue carbon neutral goals – and even strive to go carbon negative. For example, some innovative firms are implementing systems that generate new value from the heat generated by IT equipment. Typically, waste heat is just that, waste. But in fact, heat can be leveraged to do productive work and create and capture value from downstream usage. Edge data centres are located near other sectors of the economy that can use that heat productively. This opens up the emerging category of Scope 4 emissions – emissions that can be avoided by the operation of an edge data centre.
Circular economy model improves edge TCO too The combination of green energy (where possible), a circular economy model and innovative interaction with businesses or residences, provides substantial sustainability opportunities for operators of edge compute and storage. Fortunately, the combination can drive business efficiencies as well. Circularity models maximise the lifetime of equipment, at the core and at the edge. Edge data centre operators can optimise the upstream supply chain and then continue with the aggressive lifecycle strategies, maximising IT equipment as an asset class as opposed to a waste stream. Leveraging the open hardware platform of OCP creates a backbone of open hardware and software and their cost efficiencies, which also simplify operations management.
The leading edge of data centre sustainability
The combined scale of edge computing equipment is forecast to be as much as four times larger than core data centre compute. This is the new frontier of demands for sustainability of infrastructure, and it is our collective opportunity to get it right from the start, rather than replicate dated inefficiencies. We must absolutely demand and proliferate renewable energy as broadly as possible. But this alone is insufficient to curb the carbon cost and trajectory of the IT infrastructure we depend on globally. Recertification and reuse of high-performance, proven hyperscale technology not only carries this technical innovation to broader markets, but also stands to accelerate the sustainability of the global data centre sector. We now have the tools and business models to deliver on this potential, and with their proximity to the very society that sustainability seeks to benefit, edge data centres may start to lead the way.
Edging away from downtime
Marc Garner, VP of the secure power division at Schneider Electric UK&I, explores how to minimise downtime in industrial edge computing environments.
Today, increased levels of automation, advanced robotics, AI and machine learning are driving unprecedented change inside factory environments. With growing levels of complexity, these applications demand secure, on-site computing systems that offer the user high levels of security, ultra-fast connectivity and, above all, great resilience. With today’s short lead times, emphasis on fast deliveries and tight profit margins, keeping downtime to a minimum is a key concern for manufacturers. As such, smart manufacturing is driving a new wave of IT technologies into industrial spaces, which requires edge computing systems that ensure privacy and data security, while guaranteeing uptime and addressing bandwidth requirements that have become crucial to operations.
Identifying the edge application
For Industry 4.0, edge computing bridges the gap between cloud and on-premise infrastructure. The traditional drawback of the cloud has been high latency, or slow response times, caused by the distance between the data centre supporting an application and the location of that application. Edge computing offers the best of both worlds, placing physical infrastructure and business-critical IT closer to the point of use, enabling the user to combine the benefits of cloud computing with the ultra-fast response times required by on-site equipment. Applications that benefit from edge computing can, in general, be subdivided into three categories, each with their own specific designs and benefits: IT facilities; commercial and regional offices; and industrial or harsh environments. The latter often comprises ruggedised micro data centres deployed in indoor or outdoor locations, where ambient environmental conditions are difficult to control. Challenges include a wide range of temperature or humidity conditions, water hazards, the presence of dust or other contaminants, and the need to protect computer systems from collisions and vibrations, as well as the obvious need for physical security to guard against unauthorised access.

Defining the industrial edge
For industrial operators to capture the benefits of increased automation, they cannot rely on cloud technology alone. According to McKinsey, Industry 4.0 refers to the increased digitisation of the manufacturing sector, driven by “the rise in data volumes, computational power and connectivity; analytics and business intelligence capabilities, new forms of human-machine interfaces [including] augmented reality systems; and improvements in advanced robotics and 3-D printing.” Industrial edge data centres are IT infrastructure systems containing integrated racks, power and cooling, distributed across a number of geographical locations to enable endpoints on the network. When deployed within industrial manufacturing plants or distribution centres, the application is referred to as the ‘industrial edge’. Given the increasing importance of computing in factory and industrial automation systems, it is inevitable that greater numbers of edge computing systems will be installed in these harsh, industrialised environments. To achieve the shortest possible ROI and gain both the resilience and speed demanded by AI, robotics and other Industry 4.0 technologies, manufacturers must properly measure asset performance, rapidly identify any problem areas, and make crucial changes in real-time that will drastically improve their operations.
This is also where on-premise IT becomes critical, and where the majority of the data capture occurs. Industry 4.0 requires that computing systems are tightly integrated into the manufacturing process, but it also means that resilience and high availability become key design concerns for the accompanying edge infrastructure.

Building a resilient industrial edge
Downtime is the curse of any manufacturing operation, and any integrated IT systems cannot afford to add to the risk of lost production. A 2016 study by Aberdeen Group found that 82% of companies had experienced unplanned downtime in the previous three years, which could cost an average of $260,000 per hour. Industrial edge systems, therefore, must be built to the highest standards of availability – if necessary, to Tier III, which promises an uptime of 99.982%, or an average of 1.58 hours of downtime per year. Tier I data centres, with 99.671% uptime, by contrast, can be down for 28.82 hours per year. At the hourly cost above, such a difference in downtime could cost upwards of $7 million per year. Clearly, an investment in improved uptime delivers clear benefits to the bottom line. Given the industrialised environments in which manufacturing operations take place, and the high level of potential contaminants, attention must be paid to the enclosures, which must remain robust to protect the IT from downtime. Space is likely to be at a premium too, so care must be taken to ensure that the system can be deployed in spaces that weren’t designed for IT.
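Those availability percentages translate directly into hours and dollars. A quick sketch of the arithmetic, using the Tier uptime figures and the Aberdeen Group hourly cost cited in the text:

```python
# Annual downtime implied by an availability percentage, and the cost gap
# between tiers at $260,000 per hour of unplanned downtime (Aberdeen Group).

HOURS_PER_YEAR = 8760  # 365 days

def downtime_hours(availability_pct):
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

tier_iii = downtime_hours(99.982)  # ~1.58 h/year
tier_i = downtime_hours(99.671)    # ~28.82 h/year
gap_cost = 260_000 * (tier_i - tier_iii)
print(f"extra cost at Tier I: ${gap_cost:,.0f}/year")  # roughly $7 million
```

Note how unforgiving the scale is: each extra "nine" of availability removes an order of magnitude of downtime, which is why the gap between Tier I and Tier III is worth millions despite both sounding close to 100%.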
Ruggedised IT enclosures can provide optimum performance and reliable operation in harsh environments. Some come in wall-mounted designs to make the best use of space, leaving the factory floor clear for manufacturing equipment. Careful consideration of the uninterruptible power supply (UPS) will safeguard against disruptions to power, while lithium-ion (Li-ion) batteries can provide an energy-efficient backup source whose small size frees up physical space, while also offering a longer lifecycle. Li-ion UPS systems can operate over a broader temperature range and are easy to monitor thanks to intelligent sensors that help reduce operating costs while increasing reliability, thereby improving the ability of an industrial plant to withstand power disruptions. Cooling is also essential for reliability in any IT environment, and in industrial spaces, self-contained air conditioners can be fitted to ruggedised enclosures to regulate internal temperature and humidity without incurring the risk of environmental contamination. Yet no matter how reliable the hardware is, the key to minimising downtime is real-time monitoring and management, ensuring faults can be proactively anticipated and repaired, and downtime mitigated.
For Industry 4.0, edge computing bridges the gap between cloud and on-premise infrastructure.

Software drives uptime

To ensure high levels of resilience, software and security are crucial. The latter can take many forms, including physical security to protect against unauthorised on-site access, as well as next-generation software systems offering advanced protection from cyber attack. For many operators, a platform that brings together disparate systems, including edge, building control and industrial process systems, offers many benefits, including end-to-end visibility. At the edge, next-generation Data Centre Infrastructure Management (DCIM) software leverages AI, data analytics, cloud and secure mobile applications to monitor IT systems in real-time. Should downtime occur, the user can quickly dispatch service personnel to respond. The beauty of such management software is that service organisations can use it to provide support where dedicated technical personnel aren't located on site, thereby offering increased levels of resilience in smart manufacturing. Today the growth of IT in industrial automation is driving innovation that allows manufacturers to introduce new products and services far faster and with greater reliability. This enables industrial organisations to execute their business strategies more successfully, drive productivity and deliver improved experiences to customers. Vendors, likewise, are innovating edge computing solutions and services to minimise the risk of downtime in industrial environments, and as smart manufacturing increases via highly automated and advanced robotic systems, there is undoubtedly a need for a resilient industrial edge.
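The core of the real-time monitoring idea can be sketched in a few lines. This is a minimal, illustrative rule of the kind a DCIM platform might evaluate — the sensor names, thresholds and alert wording here are hypothetical, not any vendor's API; the 18–27°C band is ASHRAE's recommended inlet range:

```python
# Illustrative DCIM-style alerting: compare live sensor readings against
# thresholds and flag any breach for service dispatch. Readings and
# thresholds are hypothetical stand-ins for a real monitoring feed.
from dataclasses import dataclass

@dataclass
class Reading:
    rack: str
    sensor: str   # e.g. "inlet_temp_c", "humidity_pct"
    value: float

THRESHOLDS = {
    "inlet_temp_c": (18.0, 27.0),  # ASHRAE recommended inlet range
    "humidity_pct": (20.0, 80.0),
}

def breaches(readings: list[Reading]) -> list[str]:
    """Return human-readable alerts for any out-of-range readings."""
    alerts = []
    for r in readings:
        low, high = THRESHOLDS[r.sensor]
        if not (low <= r.value <= high):
            alerts.append(f"{r.rack}: {r.sensor}={r.value} outside "
                          f"[{low}, {high}] - dispatch service")
    return alerts

print(breaches([Reading("rack-01", "inlet_temp_c", 31.5),
                Reading("rack-02", "humidity_pct", 45.0)]))
```

In a real deployment the readings would stream from enclosure sensors and the alert would raise a ticket or page an engineer, rather than print to a console.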
Q2 2021 www.datacentrereview.com 17
Don't risk the protection and security of your IT equipment

As the world becomes more interconnected by technology, the demand for IT infrastructure is only set to rise. From personal interactions such as checking bank balances via mobile banking apps, to managing a business's ERP and MRP systems, all are only possible with the seamless integration of IT. What if these IT systems are not physically protected?
Sure, downtime can be annoying in our personal lives, but for businesses it can have much greater consequences. Often, it's the IT manager's role to mitigate the risk of downtime and ensure the IT systems continue to perform without hiccups. Almost every department within a company depends on IT systems to some extent, meaning any IT failure can result in a hefty financial loss. For larger companies, this cost could run to six figures an hour, so it's easy to imagine the financial damage. So while energy and expense are usually devoted to cybersecurity, the physical security of the IT infrastructure must also be taken into careful
consideration to avoid any downtime. Let's take a look at four key areas:

1. Physical protection

The IT rack is often overlooked as an important part of the infrastructure. But a rack isn't just a rack: it sits at the core, with the fundamental purpose of protecting the delicate components it houses. It must be robustly constructed whilst still being able to adapt and respond to changes in its surroundings. Before purchasing a rack, consider the specific features it must have to keep working in the environment it is placed in.

2. Efficient cooling

Cooling is one of the most important aspects of
IT infrastructure. It is a necessity to ensure the components are always kept at an acceptable temperature so the equipment can operate to its full potential. The required level of cooling differs for each situation, as it is based on the heat loss of the installed active components and the ambient temperatures within the location. A Rittal expert can advise you through the whole project to ensure you maximise your efficiency, payback and equipment protection.

3. Reliable and secure IT power supplies

A reliable and constant electrical supply is critical for the operation of servers, processors and the cooling of electronic components. Even the smallest power outages or disturbances can lead to downtime in processing and lost revenue. Therefore, solutions
to ensure the electrical security of the IT infrastructure should be carefully considered.

4. Real-time monitoring

For peace of mind that the IT infrastructure is secure and still performing to its full potential, real-time monitoring through systems such as Rittal's CMC (Computer Multi Control) system is highly recommended. Customised solutions control and monitor all physical parameters of the environment, such as temperature, humidity, smoke and rack access. The data collected from the sensors can be reviewed online via a web interface, or monitored globally via a DCIM such as Rittal's RiZone, meaning you know exactly when and where a parameter has been compromised.

Rittal's new whitepaper

Rittal has launched its new 'Safeguarding IT Infrastructure' whitepaper, where the areas above are explored further, to help you reduce the risk of potential downtime whilst still meeting the IT world's future demands. Download the free 'Safeguarding IT Infrastructure' whitepaper to discover exactly how Rittal can assist in strengthening your IT infrastructure: info.rittal.co.uk/safeguarding-it-infrastructure

Contact Rittal | www.rittal.co.uk or via LinkedIn @rittal-ltd-uk
GREEN IT & SUSTAINABILITY
Why are data centres getting their cooling utilisation so wrong? When it comes to cooling your facility, could you be overspending? According to Anuraag Saxena, optimisation manager at EkkoSense, you probably are.
When we recently surveyed some 133 data centre halls, our granular thermal analysis found that the average data centre cooling utilisation level was just 40%. This result was only slightly better than the 34% figure we revealed in our data centre cooling research back in 2017. With the renewed focus on initiatives to address the climate crisis, notably the USA's recent decision to rejoin the Paris climate accord, there's increased pressure on organisations and their high energy users to make serious carbon reductions. It's essential that IT operations do everything they can to deliver immediate carbon reductions to help
organisations deliver on public net zero commitments. Given this agenda, why are so many organisations still so inefficient in their data centre cooling? With cooling utilisation at 40%, it's clear that facilities are overspending on their cooling energy costs. We estimate that the industry could make cumulative savings of over £1.2 billion by optimising data centre cooling performance. That's a potential worldwide emissions reduction of over 3.3 million tonnes of CO2 equivalent per annum – equivalent to the energy needed to power around one million UK homes for a year. We know operations teams do an amazing job at keeping their
facilities running, particularly given huge increases in compute demand, so why has the issue with poor cooling utilisation developed? From an optimisation perspective, there are three factors at play here:

1. An over-reliance on often outdated design specifications
2. A determination to deliver against data centre SLAs that aren't necessarily applicable today
3. An absence of true granular visibility into data centre performance.
Inflexibility of many data centre SLAs

Too many data centres are still locked into rigid SLAs around uptime, which means their focus and priorities remain heavily centred on risk avoidance. Moreover, a significant number of data centres govern these SLAs against only a few sensors, which may themselves be in incorrect locations that are not in line with the ASHRAE standard.
Too much adherence to historic design specs?

Often the problem goes back to the initial data centre design specification – which could be anything up to 10 or 15 years old. Perhaps the original specification was for a maximum 350kW capacity, and that has always been the cooling capacity applied. However, these legacy decisions aren't always communicated directly to the facilities team, who are busy dealing with the data centre every day. And of course, things change – compute loads, data centre management teams and facilities engineers all evolve – with the result that the gap between the original design and today's reality can quickly expand. For example, we've seen sites that were cooling for their original design capacity but running at just a quarter of that load. And while this is an obvious example of over-cooling, nobody ever really saw it as a problem, as the data centre (operating at a significantly low average IT rack inlet temperature) was never going to breach critical thermal SLAs.
Until now this has removed any real incentive to optimise data centre cooling energy consumption. While uptime is obviously the prime driver for facilities teams, SLAs that were defined at the data centre design stage become less and less relevant as time moves on. For example, today's data centre infrastructure is much more efficient and can run at higher temperatures than equipment specified five or 10 years ago. At the same time, many data centres run on much tighter margins now than those enjoyed by previous operations teams. Can today's facilities teams really afford to keep adding expensive cooling hardware, especially if they're going to have to keep paying for it over the next five-plus years?

Lack of granular insight into real-time data centre performance

Perhaps the biggest barrier to effective data centre cooling utilisation, however, is poor visibility – actually being able to see what's going on across your site or estate in real-time. Unfortunately, the reality for many operators is that they don't have access to the tools that can help them make smart data centre performance choices. For example, it may be that you could run your cooling system more efficiently with a different control setting – but how would you know? That's why it's important for data centre operations teams to be able to gather and visualise thermal, power/energy and space data at a much more granular level – ideally down to individual IT racks and data hall CRAC/CRAH units at a minimum.
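Once you have that rack-level data, cooling utilisation itself is a simple ratio: the IT heat load actually being rejected divided by the cooling capacity installed and running. A minimal sketch, with illustrative figures chosen to land on the survey's 40% average:

```python
# Cooling utilisation = IT heat load / running cooling capacity.
# Illustrative figures: a hall with four CRAC units of 90 kW each,
# serving racks drawing a combined 144 kW of IT load.
rack_loads_kw = [12, 18, 9, 15, 20, 14, 16, 11, 13, 16]  # per-rack IT load
crac_capacity_kw = 4 * 90  # installed cooling, all units running

it_load_kw = sum(rack_loads_kw)           # 144 kW
utilisation = it_load_kw / crac_capacity_kw

print(f"IT load: {it_load_kw} kW")
print(f"Cooling utilisation: {utilisation:.0%}")  # 40% - the survey average
```

A hall like this is paying to run 360 kW of cooling against 144 kW of heat; shutting down or turning down surplus units (while watching rack inlet temperatures) is where the savings come from.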
We’ve already seen how adopting this kind of software optimisation approach can help organisations to reduce their cooling energy usage by 30% – while still ensuring that their risk is reduced by optimising sites to 100% rack-level ASHRAE-recommended thermal compliance.
UPS: A catalyst for sustainability Arturo di Filippi, global offering manager for large power systems at Vertiv, explores how your UPS may have more green potential than you think.
Whilst the pandemic put sustainability in the back seat in 2020, there's no doubt it is resurfacing as a top business priority this year. The Climate Neutral Data Centre Pact, signed by major European cloud and data centre operators, is the latest effort to better utilise energy as data centres and energy companies look to become more efficient. Yet the transition to a more sustainable future requires a big change in infrastructure, as mixing old and new energy sources in a legacy grid is challenging. That's why businesses are turning to the uninterruptible power supply (UPS) to better manage energy demands and help sustainability efforts.

A business stabiliser

The UPS acts as a piece of business insurance. It provides clean back-up power to IT and critical infrastructure networks so that, in the case of an outage, these systems have the power to safely shut down workstations and allow back-up generators to kick in. They can also provide up to several hours of power for certain types of equipment. Ultimately, a UPS can mean the difference between business as usual and the loss of data and hours of productivity. Figures from the Uptime Institute highlight the huge cost implications of a power failure, with 48% of outages now costing firms between $100,000 and $1 million (compared to just 28% in 2019). The investment in a UPS is therefore a small price to pay for a guaranteed power lifeline.

More than a back-up

The UPS will always play a role in mitigating the risk of a major power failure. But as the UK's national grid network becomes more reliable,
with more than £378m worth of contracts awarded to ensure consistent power, the UPS can do more than cover potential outages. UPS batteries can store energy; if the power is not needed at that moment, it can be held until there is an increase in demand. The UPS, therefore, can play a crucial role in sustainable energy usage. Furthermore, companies can come off the grid at peak times and use UPS devices to become more self-sufficient. The potential benefits from energy storage services can be huge; a 10MW facility, for example, could expect to generate revenue of more than €1 million per annum.

Renewable support

In addition to peak shaving and demand management supporting the energy grid, UPS batteries can also help in the shift to renewable energy. When natural energy sources such as solar and wind cannot meet demand, utilities often fall back on carbon-based sources, but the UPS can help maintain a consistent and reliable stream of power. For example, in Ireland, SSE Renewables and Irish-owned Echelon have agreed to develop a joint 520MW offshore wind farm to meet the power needs of Echelon's data centres. These data centres will be able to use a UPS to store excess energy for periods when there is not enough wind energy to meet demand. Using the UPS in this way builds assurance amongst businesses that renewables can be used without risking power interruption. This approach could be replicated in the UK, enabling data centres to contribute to the UK's efforts to embrace green sources. This will be important moving forward, as the National Infrastructure Commission (NIC) has urged the UK to increase its renewable electricity target from 50% to 65% by 2030.

Unlock UPS potential

Sustainability is here to stay, and industries must take control of their contribution to greener practices. Data centre managers are in a prime position to initiate better energy efficiency by using UPS batteries proactively, not just reactively.
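The 10MW example above implies a grid-services rate of roughly €100 per kW per year (€1 million / 10,000 kW). That rate is inferred from the article's figures, not quoted; actual rates vary widely by market and by the service offered (frequency response, capacity payments, peak shaving). A sketch of the scaling:

```python
# Grid-services revenue estimate, scaled from the article's example of
# a 10 MW facility earning ~EUR 1m/yr. The per-kW rate is inferred from
# that example and is illustrative only.
RATE_EUR_PER_KW_YEAR = 100  # inferred: EUR 1m / 10,000 kW

def storage_revenue_eur(ups_capacity_mw: float) -> float:
    """Rough annual revenue from offering UPS capacity as grid services."""
    return ups_capacity_mw * 1000 * RATE_EUR_PER_KW_YEAR

print(f"10 MW facility:  ~EUR {storage_revenue_eur(10):,.0f}/yr")
print(f"2.5 MW facility: ~EUR {storage_revenue_eur(2.5):,.0f}/yr")
```

Even a modest facility can see six-figure annual returns under this assumption, which is why the article frames the UPS as a revenue asset rather than pure insurance.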
This approach best supports both the grid and the shift to renewables. It's time to unlock the full potential of the UPS and recognise it not merely as a backup, but as a critical component of a sustainable future.
Explore Data Centre Review

The leading publication and website focusing on the data centre industry. DCR provides data centre, energy, facilities and building service managers and directors with expert information to enable them to keep data centre sites running effectively while ensuring availability. The print publication is produced quarterly alongside its sister title Electrical Review and covers:

• UPS & Standby Power
• Cooling
• Colocation & Outsourcing
• Virtualisation & Cloud Computing
• DCIM
• Security
• Edge Technology
• Storage, Servers & Hardware
• AI & Automation
• Green IT & Sustainability
• Data Centre Design & Build
• Big Data & IoT
Find out more at:
If you have any content you would like to share with our editorial team, please contact Claire Fletcher, Editor at email@example.com
For commercial and advertising enquires, please contact: Sunny Nehru Group Account Director +44 (0)207 933 8974 firstname.lastname@example.org
Kelly Baker Account Manager +44 (0)20 7933 8970 email@example.com
COLOCATION & OUTSOURCING
Standing out from the crowd With the colocation space becoming ever more saturated, David Fiore, senior software product manager at Opengear, discusses how colo providers can differentiate themselves in an already crowded marketplace, and explores how to find a value-added model that works for you.
A recent study conducted by Grand View Research, Inc. found that the global data centre colocation market is expected to reach $104.77 billion by 2027, expanding at a compound annual growth rate (CAGR) of 12.9% over the period 2020-2027. Over the coming years, the success achieved by colocation service providers will be key to this ongoing growth. These providers are increasingly looking to add value to their service offering for potential tenants, to differentiate themselves from their competitors and create an edge in this space. Delivering units and rack space at a competitive price will not be sufficient in itself. Providers will be looking to add new strings to their bow. One of the key enhancements they can deliver is the ability to efficiently monitor and manage the systems they are looking after on behalf of their tenants. In doing this, colocation providers will above all be looking for a single pane of control that enables them to remotely access, provision, troubleshoot and reconfigure. This can all be easily scaled for their customers, delivering even greater value. Underneath this, they will need to be working with a fully functional network management capability. Providing customers with tools for out-of-band (OOB) management, which allow them direct access into their network to manage devices like switches, routers or firewalls without having to purchase and maintain the equipment, will be a key part of this. Delivering an always-on independent management plane, OOB gives users reliable access to monitor and manage their IT infrastructure. That capability can be combined with NetOps automation tools, which allow engineers to automate and orchestrate key functionality and maintain business continuity. This approach is ideal for organisations renting space within a colocation facility. It also benefits facility providers, who can use it to offer a remote hands service to their tenants, applying relevant access controls and permissions and then segmenting out capability to them from an out-of-band device. Having the ability as an administrator to simply get onto a laptop and access the services that sit inside the colocation facility, without the need to do any complex networking, will also be a key benefit for colocation
providers, and will help them deliver reassurance and added value for their tenants. Further opportunities will be opened up by the emerging capability to segment customer data, traffic and even permission levels across these tools. Yet all this relies on having implemented a centralised and streamlined network monitoring and management infrastructure. Colocation service providers won't want to bring up one portal to access their servers, another to manage their network equipment and yet another to handle security authentication. They will want a single place where they can control all this functionality.
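Conceptually, that "single pane" is just a thin aggregation layer over otherwise separate management endpoints. This sketch is illustrative only — the device functions and their status fields are hypothetical stand-ins for real vendor APIs, not any actual product:

```python
# Illustrative "single pane of glass": one consolidated view built from
# per-device status endpoints that would otherwise need separate portals.
# The endpoints and their fields are hypothetical.
def server_status():   return {"type": "server",   "power_w": 410, "state": "up"}
def switch_status():   return {"type": "switch",   "power_w": 95,  "state": "up"}
def firewall_status(): return {"type": "firewall", "power_w": 60,  "state": "degraded"}

ENDPOINTS = [server_status, switch_status, firewall_status]

def single_pane() -> dict:
    """Collect every endpoint into one consolidated view."""
    devices = [fn() for fn in ENDPOINTS]
    return {
        "devices": devices,
        "total_power_w": sum(d["power_w"] for d in devices),
        "alerts": [d["type"] for d in devices if d["state"] != "up"],
    }

view = single_pane()
print(view["total_power_w"])  # 565
print(view["alerts"])         # ['firewall']
```

The value is in the aggregation: power consumption and health for every device class land in one view, which is exactly the centrally accessible metrics picture described next.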
Added to this, they are likely to want to see all of their power and data consumption in a centrally accessible location, rather than having to spend time and effort digging into processes to attain key metrics or understand key trends. All this makes the service provider's operations more efficient, and that by extension brings complementary benefits to their tenants. Yet all this capability will not necessarily be easy to achieve. Any provider will be faced with complex decisions about the solutions they should buy. There is a diverse range of networking products out there in the marketplace, each with its own consoles and configurations. When you pick a single provider there may be concessions to make in areas that provider does not cover, or where their solutions are more expensive than a best-of-breed solution elsewhere.
That's a challenge for a colocation service provider. They will either have to pick a networking platform that doesn't do everything and stick with it in the hope that it covers the majority of what they need, or they will have to use multiple platforms – something that comes with its own challenges of multiple management consoles and multiple ways to facilitate changes.

Scoping the benefits

If the latter is the case, they will certainly benefit from an overarching solution that can manage all these networking devices from one single pane of control, linking to networking devices and providing all the data they need in one place, rather than having multiple endpoints to visit for data and changes. They no longer have to remember, for example, where to go to control the firewall, or where to go to push updated policies to the intrusion prevention system. Delivering these kinds of capabilities will help colocation service providers stand out from the crowd, and will also give a boost to the whole sector.

Finding a way forward

Today, as we have already seen, the colocation market is showing steady and robust growth. In some parts of the world, larger providers are buying up smaller providers because they are confident they can build systems that allow them to deliver services across the board. In other regions, smaller colocation facilities have sprung up to fill the voids that the big players don't cover. Often these smaller players can offer greater granularity or flexibility than their larger peers, and are also better able to customise their offerings to fit the needs of tenants. Both large and small service providers have the potential to flourish in the colocation market of the future. Yet, despite their many differences in approach, they have one thing in common.
Both will be reliant for their success on the quality of networking systems and solutions they are able to access and their ability to make use of a centralised management system to drive faster time to insight and operational efficiencies.
Colo trends worth watching Abhijit Sunil, analyst at Forrester, explores some of the most significant trends he has observed through his research. These trends highlight the most important messages that customers will hear from the major colocation players throughout 2021. They will also be the major differentiators that leading colocation players will promote.
The data centre market is rife with not just intense competition but also innovation. Indeed, data is the backbone of every industry today, with data centres making that data accessible. This is why there is increased scrutiny of how data centres operate and the overall impact they have on the environment. Shaped by evolving consumer behaviour, enterprise needs and the Covid-19 pandemic, the data centre and colocation market faces unique challenges and opportunities in 2021.
Data gravity heavily impacts decision making

Data gravity has introduced a chicken-or-egg problem in data centre market expansion. The concept states that, as data grows at a specific location, additional services and applications will inevitably be attracted to that data due to latency and throughput requirements. This, in effect, grows the mass of data at the original location. Data gravity is more important than ever because (a) we're creating unimaginable amounts of new data and (b) with the Internet of Things (IoT), artificial intelligence (AI) and machine learning (ML), as well as the advent of edge computing, we keep inventing new ways to produce and consume piles of data. With the surge of the IoT and edge computing, more data will be produced in a decentralised manner. The World Economic Forum estimates that 463 exabytes of data will be created each day across the globe. Data gravity is central to most decisions made by IT leaders when it comes to the geographic alignment of a data centre. It is also critical to how colocation vendors define markets.
Sustainability in the data centre

Scrutiny of the environmental impact of data centres is growing alongside their expansion in the world's major economic hubs. The 2020 Covid-19 pandemic boosted remote working models, surging global internet traffic by 40% between February 1 and mid-April 2020. Data centre players have told Forrester that they increasingly have to address the environmental impact a data centre will have on its immediate community, right from the planning and building stage. At the same time, data centres themselves are becoming highly efficient, and this efficiency offsets increasing energy demands. Colocation providers are among the top users of green energy and have been entering into long-term contracts with utility companies. This leads to price certainty and cost benefits that they can pass on to customers. According to US Environmental Protection Agency data from 2019, colocation providers are partnering with green energy producers, as well as developing their own green energy-generation infrastructure.
Specialised workloads and services are increasing

Data analytics is on the rise in every industry, leading to demand for unique workload environments from IT customers. The financial dynamics wrought by the Covid-19 pandemic don't appear to have affected high-performance computing (HPC) services on the public cloud.
HPC use cases are on the rise and include AI workloads, petrochemical applications such as seismic processing and depth imaging, financial analysis, and healthcare data mining. Services to support these special needs are also rising. According to Forrester analytics infrastructure surveys in 2019 and 2020, 32% of global infrastructure decision-makers whose firms have implemented public cloud said they run HPC on their public cloud platform or plan to do so. Running HPC on-premises often requires high upfront CAPEX and heavy ongoing OPEX to cater for the power requirements. More colocation players now offer the infrastructure and services necessary to run specialised workloads, including support for AI-based deep learning applications, and thus offset the upfront hardware and talent investments needed.

Data centre interconnection is central to colocation benefits and a key strategy

Data centre interconnection – reliable, redundant network connectivity between sites – is among the primary advantages of the carrier-neutral colocation model. All the major colocation players have robust interconnection roadmaps. Leading colocation players started their interconnection strategies as neutral locations to exchange network traffic, acting as the connectivity option between Tier 1 communication service providers. This model remains prominent, but the colocation providers themselves now also compete with their telco partners.
Massive fibre buildouts gained impetus from growing remote-work applications in Tier II and rural markets. This can also feed into enterprise customers' edge and cloud transformation strategies. For the variety of customers in a colocation facility, interconnection can address different strategic needs.

M&A activities are consolidating the market

The data centre industry has been moving towards larger facilities that optimise scale, sustainability goals, price guarantees, larger digital ecosystems and, most importantly, interconnection. Vendor consolidation yields this scale. Mergers and acquisitions have been a constant in the colocation market, where fortune has favoured the brave. While a few of the larger players have followed an organic growth approach, many others have actively pursued acquisitions, especially to boost their edge capabilities.
Get the facts! Ken Marshall, content manager at Lightyear, debunks the common misconceptions and misunderstandings that surround colocation.
There are several common misconceptions and misunderstandings when it comes to colocation. Perceived barriers to entry include everything from the idea that colocation is expensive or inflexible, to the notion that it is simply not as secure as on-site hosting. In truth, colocation data centres often have much better connectivity, infrastructure and security than many small and medium-sized enterprises (SMEs) can afford with their own on-site hosting. This article will debunk many of the myths and misconceptions surrounding colocation and reveal why you should consider it for your business.

Colocation is the same as traditional web hosting

Colocation is not the same as traditional web hosting. Traditional web hosting involves sending your files to another server owned or managed
by your web host. Depending on whether you have a private or shared server, you may be sharing that space (and bandwidth) with the web host's other clients. With colocation, you physically own the servers that are colocated in another data centre. Although a server physically resides in another location, it is still yours, and you don't compete with others for server resources or bandwidth.

The customer has little or no control over infrastructure

All you rent at a colocation centre is space and connectivity; the servers and the software running on them belong to you, and your staff can perform most maintenance and configuration tasks remotely. Additionally, there is fierce competition among colocation providers, so many
data centres will accommodate additional business needs. If you find one that isn't flexible, shop around until you find a more flexible provider that better suits your needs.

There are no availability guarantees in a colocation arrangement

Colocation centres often have staff on-site 24/7 to deal with any issues that arise. They also have multiple failover procedures in the event of a power failure, from battery backups to on-site diesel generators, and even direct connections to local utility companies. Colocation centres also typically run at around 80% capacity or less, so they have room to be flexible in the event of spikes in traffic.

Colocation extracts a performance penalty because the equipment is offsite

A greater physical distance between client and server means data has to travel further, and therefore takes longer to transfer. However, this performance penalty is often overstated, as colocation providers implement solutions that mitigate it. Colocation data centres are usually a lot more efficient than traditional on-site servers. Many of these data centres have direct connections to local utility companies, and some also use what's known as 'dark fibre'. Traditionally, fibre optic connections are shared among many different companies and customers; by contrast, a data centre owner using dark fibre has a dedicated connection, so there is no competition for resources. This allows colocation centres to offer impressive performance despite the physical distance.

Colocation is expensive

Compared to traditional web hosting, colocation is more expensive. The problem with this comparison, however, is that the two services are fundamentally different. With colocation, you essentially get all the advantages of having your own data centre – from superior internet connectivity to 24/7 on-site management. A better comparison to make would be the cost of staffing and maintaining your own data centre.
Naturally, compared to this, colocation is typically much cheaper.

Colocation is inconvenient
Not having physical access to your servers might be considered an inconvenience to some, but a lot of colocation providers offer managed service contracts. This support can include physical security, network security, disaster recovery, support with cooling and power issues, and more. Some providers can even help fine-tune and upgrade your server hardware and software. The amount of support you get depends on how much you want — whether you prefer a fully-managed service or not.

Colocation limits scalability
Colocation data centres are incredibly flexible. Since they handle many different clients, they have the resources to scale to meet virtually any demand upon request. These resources can be raised or reduced at any time, often much more quickly than would be possible with traditional on-site hosting.
Colocation is less secure than on-site hosting
Security is a priority for many colocation data centres, and many include several layers of physical and network security. Equipment for different clients is physically separated and locked behind cages, in addition to 24/7 surveillance, on-site security guards, biometric scanners and more. Many colocation providers are also happy to offer clients tours to verify the security procedures in place. Few SMEs can afford this level of protection with on-site hosting.

Uptime could be an issue
Colocation data centres are typically staffed 24/7 and include several failovers and backup devices for power. There is a lot of redundancy built into the services provided by colocation data centres, making them considerably more reliable than other hosting solutions. Some data centres even offer 100% uptime guarantees. This level of reliability is considered a competitive advantage by many colocation services.

Cloud will eliminate the need for colocation in the future
Cloud hosting offers a lot of the same benefits as colocation, but the big difference is vendor lock-in. Creating a genuinely platform-agnostic service in the cloud is extremely difficult. Large players such as Amazon Web Services, Google Cloud and Microsoft Azure offer similar functionality, but with slightly different ways of doing things. Businesses with large-scale services that are fully integrated into one particular cloud platform know how difficult it can be to migrate to another. Since you own the servers when using a colocation provider, you have complete control over how your product and data are structured.
Q2 2021 www.datacentrereview.com 29
STORAGE, SERVERS & HARDWARE
The hidden costs of storage management
Quantum’s Eric Bassier explores the hidden costs of storage management and how modern storage solutions can add business value through data insights and automation.
The data landscape has evolved considerably in the last decade, leading to more data creation, retention and analysis to gain insights and make data-driven decisions. A challenge facing many organisations is not only how best to manage, store, protect and share data, but also how to calculate the true cost of storage management.

Current data growth trends have amplified the challenges organisations face in storing and managing large amounts of data. In particular, the growth of unstructured data – from video, audio and image files to geospatial, genomic and sensor data – continues to rise, with estimates that it will represent 80% of all the data on the planet by 2025. The World Economic Forum estimated that over 44ZB of data was collected in 2020, while IDC predicts a five-year compound annual growth rate (CAGR) of 26% through 2024. In this data-driven world, making data work for the business requires a new level of insight and automation.

Huge unstructured data growth has exacerbated the challenges organisations already face in storing large volumes of data using pre-cloud, legacy architectures. To help alleviate this burden, organisations are seeking solutions that can provide insights into their ever-growing data. However, calculating the true cost of storage management isn’t as straightforward as one might think. One problem is that traditional storage TCO calculators don’t include the cost of managing data over its lifecycle. Everyone from the CIO to the storage admin wants a solution that doesn’t just store data, but can also help unlock tangible business value from it through insights such as who owns it, where it should live (on-prem or in the cloud, which tier and when), how it should be protected and when it should be deleted.
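Those headline figures boil down to a simple compound-growth calculation. As a rough illustration (the 44ZB baseline and 26% CAGR are the numbers quoted above; everything else is plain arithmetic, not a forecast):

```python
# Rough projection only: compound the ~44 ZB (2020) figure at a 26% CAGR.
# Real forecasts such as IDC's are modelled far more carefully.

def project_volume(base_zb: float, cagr: float, years: int) -> float:
    """Compound a starting data volume forward by `years` at `cagr`."""
    return base_zb * (1 + cagr) ** years

for year in range(2020, 2025):
    print(f"{year}: {project_volume(44.0, 0.26, year - 2020):.1f} ZB")
```

At 26% a year, the 2020 figure roughly two-and-a-half-times by 2024, which is the scale of growth behind the storage challenge the article describes.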
Another problem with current TCO calculations is the false assumption that the value of data decreases over time. They also fail to account for other business variables such as the need for data viability, resiliency, security and mobility. Organisations want to automate the use of different classes of storage based on policies defined by business requirements. Without automation, they are forced to move data manually or with custom scripts that are error-prone and require upkeep. Both methods consume additional resources that aren’t accounted for in today’s TCO calculations.
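To see why those omissions matter, here is a deliberately simplified sketch. Every figure in it is hypothetical, and a real TCO model would track many more inputs; the point is only that lifecycle-management costs change the answer:

```python
# All figures hypothetical. A traditional calculator counts hardware and
# power; the extended version adds the lifecycle-management costs the
# article says are usually left out (manual moves, script upkeep).

def naive_tco(hardware: float, power_per_year: float, years: int) -> float:
    """Hardware plus power, nothing else."""
    return hardware + power_per_year * years

def lifecycle_tco(hardware: float, power_per_year: float, years: int,
                  admin_hours_per_year: float, hourly_rate: float,
                  script_upkeep_per_year: float) -> float:
    """Add manual data-movement and script-maintenance costs."""
    management = (admin_hours_per_year * hourly_rate
                  + script_upkeep_per_year) * years
    return naive_tco(hardware, power_per_year, years) + management

naive = naive_tco(100_000, 8_000, 5)
true_cost = lifecycle_tco(100_000, 8_000, 5,
                          admin_hours_per_year=200, hourly_rate=45,
                          script_upkeep_per_year=2_000)
print(f"naive: {naive:,.0f}  lifecycle-aware: {true_cost:,.0f}")
```

With these made-up inputs the management overhead adds roughly 40% to the naive figure over five years, which is exactly the kind of gap the article argues traditional calculators hide.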
Adding value beyond traditional storage TCO
True TCO must incorporate the value of the data to the business, along with the cost and opportunity loss of managing the lifecycle of the data. Being storage efficient means achieving maximum productivity with minimum wasted effort or expense. The next generation of methodologies affecting system efficiency must be based on data insights and policy-based automation.

Through data insights, organisations can eliminate the need for scripts that crawl their entire storage system trying to gather single point-in-time statistics at the expense of application I/O and human resources. Capturing business intelligence gives organisations granular and relevant insights that enable timely access to data and highlight anomalies in data-centric behaviour.

With policy-based automation, organisations can control the data lifecycle by proactively moving data based on application requirements at that point in time – for example, whether it needs a performance tier, or the cloud to leverage elastic compute. It can also be used to protect and secure data based on compliance needs, or to delete data after its defined useful life. By automatically and purposefully placing data where it will be most effective, this kind of automation can enable organisations to improve process completion times, increase the number of projects per resource and reduce wasted effort.

Hidden cloud costs
The growth of unstructured data has also led many organisations to rely on private or public cloud to address short- and long-term storage needs. However, the cost of storing data in the cloud can quickly spiral out of control due to ‘hidden’ costs that can be unpredictable and change with usage. For example, putting data in the public cloud is free, but egress can
result in unintended costs at multiple dollars per terabyte retrieved. If data is cold and stored only as an insurance policy, egress costs may not be noticeable. If data needs to be retrieved periodically, identifying and pulling only the data that’s needed can minimise egress costs. Every piece of data retrieved that’s not relevant is money wasted. Organisations need the ability to categorise and organise data to ensure that only relevant data consumes storage and compute resources.

In addition, each public cloud has its own interface, which makes it hard to move data from cloud to cloud. Organisations need to be able to move data across clouds and premises based on metadata and tags, without having to interface with each cloud separately.

Once data is in the cloud, it’s hard to track. What data is being stored, and why, can be difficult questions to answer without knowing who created the data, who owns it, and what value it represents to the organisation. Every time a search is performed, there’s an associated cost in time and money. Organisations need a metadata and tag repository so that they can perform real-time searches without having to access the data or put a strain on computing, human or budget resources.

Once a file is moved to the cloud, it must be manually retrieved whenever it needs to be accessed. Each operation may carry a cost, which can add up to a significant expense over many operations. Data needs to be tagged based on how applications use it, so that it can be automatically copied or moved to wherever it is most efficient to keep it. In some cases, a copy of the data may remain on premises while another copy is stored in the cloud, and data is delivered to the application from wherever is most efficient. Machine learning and read-ahead techniques can be used to minimise access times and data movement.
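The tag-driven placement described above can be sketched as a toy policy function. The tag names, thresholds and tier labels here are invented for illustration and are not drawn from any particular product, which would expose a far richer policy language:

```python
# A toy placement policy: decide a storage tier from tags and access age.
from dataclasses import dataclass, field

@dataclass
class FileRecord:
    name: str
    days_since_access: int
    tags: set = field(default_factory=set)

def place(record: FileRecord) -> str:
    """Pick a storage tier from tags and access recency."""
    if "compliance-hold" in record.tags:
        return "immutable-archive"       # retention overrides everything
    if record.days_since_access < 30:
        return "performance-tier"        # hot data stays close to compute
    if record.days_since_access < 365:
        return "cloud-object-storage"    # warm data moves to cheaper tiers
    return "cold-archive"                # cold data parks in deep archive

print(place(FileRecord("scan.mp4", days_since_access=12)))  # performance-tier
```

Run against a metadata repository rather than the files themselves, a rule set like this is what lets placement happen without the crawling scripts criticised earlier.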
Calculating the true cost of storage management is clearly more complex than it seems. True TCO should account for increases in productivity, enabling organisations to take on a greater number of projects, improve customer satisfaction by decreasing time to market, and achieve greater output and higher revenue. Using data insights and automation, organisations can achieve operational efficiency and effectiveness while managing their data’s lifecycle, putting them in the best possible position operationally, financially and competitively.
The circle of life
Michael Rostad, global information technology director at Sims Lifecycle Services (SLS), explores the circularity of data centre hardware.
The world as it is today has become heavily reliant on technology, especially since the pandemic forced many to shift from in-person to virtual. Currently, about 2.5 quintillion bytes of data are produced by humans every day, and that figure is expected to rise to 463 exabytes per day by 2025. All this data must be processed and stored somewhere.

Along with the increase in online data use and consumption comes growth in demand on data centre infrastructure and internet bandwidth. This increasing reliance may help explain why the worldwide server market showed 19.8% year-over-year growth in Q2 of 2020. And it does not stop there. The global data centre server market is currently projected to grow at a CAGR of 14% between 2020 and 2024, according to the latest market research report by Technavio.

Currently, data centres utilise an estimated six million tonnes of rack and server material. Considering the lifecycle of data centre servers averages between three and five years, an estimated two million tonnes of equipment is expected to become available for decommissioning each year. In a world where more servers are in use, more data is being processed and ever more laws and regulations regarding data are put in place, there are many reasons why it is important to manage these devices through all stages of their lifecycle.
Reuse and recycling of storage, servers and hardware: Why is it important?

Ensuring data destruction
You are accountable for the data you store. When a user provides a company with any type of data, they expect their information to be adequately protected. As the person responsible for this role in your company, your job is on the line. When it comes to the security of IT assets, managing their retirement is often the weakest point in an organisation’s data security strategy. Many IT security strategies focus more on the deployment and replacement of assets than on their disposition, allowing for gaps in security which can occur due to:
• Inadequate data erasure: Unsuccessful data destruction can make previously stored data once again retrievable.
• Mismanagement of assets: There have been circumstances where disposition partners have violated their contract and resold devices instead of shredding them.
• Poor asset tracking: You cannot be confident that data destruction has been performed successfully if assets are not tracked and recorded properly. Your ITAD vendor should be able to locate assets at any time throughout the disposition process and verify methods of data destruction. This is usually made visible using an online ITAD portal.

Extending the life of devices
Get the most out of your equipment. One of the best things you can do
for the environment is to extend the life of your device. In high-demand settings, equipment is replaced to maintain a certain level of speed and functionality. Replaced equipment is typically still in great working condition and could be useful in a different setting or for a different purpose. A professional IT asset disposition (ITAD) vendor will help you develop a unique asset recovery strategy. This will help you discover how best to recover value from your equipment, usually through one or more of the following options:
• Refurbishment and resale of devices through wholesale and retail channels
• Refurbishment and redeployment within your business
• Dismantling of devices and recovery of components for reuse
• Responsible recycling of equipment that no longer has a useful life.
When fulfilling your asset recovery strategy, make sure you discuss with your ITAD vendor whether there are additional opportunities to maximise your return on investment.

Compliance with global data privacy laws
128 countries around the world have enacted legislation to secure the protection of data and privacy. Worldwide, data centres store about 1,327 exabytes of data that must be securely managed until it is either removed or destroyed. While there are various data privacy laws around the world, some of the countries considered to have the strictest data protection laws include: Austria, Australia, Belgium, Canada, France, Germany, Hong Kong, Ireland, Italy, Netherlands, Norway, Poland, Portugal, South Korea, Spain, Sweden, Switzerland, the United Kingdom and the United States.
Your ITAD company should be able to offer expertise on which regulations and laws pertain to you, depending on where you are located and on the facility nearest you that will process your material.

Preserving our environment
The future is circular. A circular economy is one in which resources are kept in use for as long as possible, maximum value is extracted while they are in use, and materials are recovered and regenerated at the end of their useful life. Many companies today have begun to embrace the principles of circularity and are setting goals to act on it. Some examples include the following:
• Apple is currently carbon neutral and aims to have all its products carbon neutral by 2030.
• Best Buy has committed to science-based targets to reduce its carbon emissions by 75% by 2030.
• GM announced it will make only electric vehicles by 2035 and will be carbon neutral by 2040.
• Microsoft announced a plan to be carbon negative by 2030.
From an environmental perspective, efforts to reduce carbon emissions will support the preservation of our climate, and the reuse and recycling of electronics contributes to achieving these goals. A large amount of carbon is released when manufacturing new electronics. In comparison, when electronics are properly reused and recycled, only around 10% of those greenhouse gas emissions are released.

What happens if this equipment is not managed efficiently?
When data centre hardware is replaced without a good policy or programme in place, items tend to build up within a company. Data storage devices sitting in a room can remain there until someone finally has a need to dispose of them. This has, in some cases, caused these devices to end up in a dumpster bin. All organisations have a reputation to uphold, and managing these devices efficiently is important to prevent the following from happening:
• Negative long-term revenue impacts: If not managed efficiently, data centre hardware carrying your asset tags (or your client’s) could end up in the wrong hands, making you directly liable for its mismanagement.
• Insufficient data protection: Many data centre clients are demanding higher levels of security and auditing companies to ensure their data is protected.
• Waste of residual materials: From an environmental perspective, careful management of these materials will help eliminate the waste of the residual materials within this redundant equipment.
What legal/environmental repercussions are there?
The damage that can be done to a business will vary based on several different factors. The primary risk of legal repercussions comes from liability for the information lost. It is important to be familiar with all local and regional regulations, but some may affect you no matter where you, or your business, are located. Not complying will generally result in hefty fines and reputational damage, and environmentally you may be liable for clean-up costs.

What changes are on the horizon that may impact how to dispose of e-waste?
There are two big changes on the horizon: stricter privacy rules and sustainable design. As data privacy requirements mature, you should be prepared for audits, with appropriate documentation to prove all assets have been managed securely. You must also focus not just on the data but on the asset itself. Keeping a strong IT asset inventory list will help you when the time comes for disposal. From a sustainability perspective, as manufacturers work to support the circular economy, sustainable design is making progress, with the goal of eventually ‘designing out waste’. There is evidence of progress as ITAD vendors continue to see decreased use of hazardous materials, and sustainability is being discussed more as part of the design process.

The reuse and recycling of data centre hardware and IT equipment must be managed carefully. When data centres partner with the right decommissioning vendor, they can reduce their administrative overhead, simplify their vendor relations and improve their overall accountability. Any professional data centre decommissioning vendor should be able to provide guidance to ensure you receive a comprehensive solution covering all these needs.
Accidents happen
It’s a fact of life: accidents happen, and we humans are not infallible. Here, Philip Bridge, president at Ontrack, explores how to protect your data should the worst happen.
Today’s virtual IT environments are highly complex and have unprecedented levels of data streaming through them. They require diligent IT administration 24/7. Unfortunately, humans make mistakes. Teams are one accidental deletion or failed backup away from losing access to – or losing entirely – their data. The results of human error are wide and varied, yet they are all bad. Intellectual property can fall into the wrong hands, and the organisation can suffer a data breach or face a crippling regulatory fine. It is, therefore, imperative that organisations invest in robust technology risk management policies.

The definition of a breach
The Information Commissioner’s Office (ICO) defines a data breach as ‘any event that results in the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data.’ The keyword here is ‘accidental’. Accidents leading to a data disaster are more prevalent than many would care to admit. One survey found that the accidental deletion of information was the leading cause of data loss, driving 41% of cases – far above malicious hacking. Even if an attacker from outside the organisation is behind a breach, human errors that have resulted in failed data backups could leave the company without the vital event log information that would show where the attack originated.
Most common accidents
So, what are the most common accidents that lead to data loss and security vulnerability?

A failure to document
Whether a test server moves into production without informing IT that the data is not being backed up, or teams decommission a Storage Area Network (SAN) that is still in production, a failure to document and execute established IT, retention and backup procedures is one we see time and time again.

Accidental deletion
The number of times the delete key is mistakenly pressed is astonishing. It is important that organisations do their due diligence and ensure the data they delete is truly no longer of value.

Failure to install patches
Days are busy and resources are stretched. However, failing to apply security patches can leave systems open to evolving security threats.
Failure to back up effectively
In a survey, we found that while three in five (60%) businesses had a backup in place at the time of loss, it was not working as they thought. Unfortunately, the failure to establish and follow backup procedures, or to test and verify backup integrity, is a guaranteed recipe for data loss.

Being lax with credentials
It is important to restrict IT administrator passwords to required users only, and to change them when an IT administrator leaves the company. Don’t take chances. Some of the worst data loss cases we see result from a disgruntled employee with a live password intentionally deleting large amounts of critical company data.
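Testing and verifying backup integrity can start with something as simple as comparing checksums of source files against their copies. The following is a minimal sketch of that idea, not a substitute for a backup product’s own verification, and the directory layout it assumes (a backup tree mirroring the source tree) is an example:

```python
# Minimal integrity check: flag files that are missing from the backup
# tree or whose contents differ from the source.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list:
    """Return relative paths that are missing or differ in the backup."""
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        copy = backup / rel
        if not copy.exists() or sha256(src) != sha256(copy):
            problems.append(rel)
    return problems
```

Run on a schedule, a check like this catches the “backup in place but not working” failure mode the survey describes, before a loss event does.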
Data loss best practice
What should IT departments do when the unfortunate happens to ensure the best chance of an effective resolution?

Avoid panicking
If data loss happens, it is important that companies don’t restore data to the source volume from backup, because this is where the data loss occurred in the first place. They should also not create new data on the source volume, as it could be corrupted or damaged.
Trust your team
Be confident in the skills and knowledge you have on your team. IT staff must educate the C-suite to avoid them making decisions that could do more harm than good.

Have a plan
Staff should follow established processes and ensure data centre documentation is complete and frequently revisited to keep it up to date. IT staff should not run volume utilities or update firmware during a data loss event.

Know your environment
IT staff must understand what their storage environment can handle and how quickly it can recover. It is important to know what data is critical or irreplaceable, whether it can be re-entered or replaced, and the cost of getting that data back up and running to a satisfactory point.

A compelling story
Not enough organisations invest sufficient resources in developing bespoke, risk-based policies. Mix that with the fact that accidents happen, and you have a compelling explanation for the prevalence of data loss today. Prioritising hardware upgrades, rigorously testing and validating IT network processes, investing in skilled and experienced professionals, and enlisting a data recovery expert are fundamental precautions every business decision-maker must consider. The complexity of managing today’s IT environments, made even more dispersed by the global pandemic, combined with the growing amount of data streaming through them, requires more diligent IT administration than ever. Unfortunately, humans are not infallible; in many ways, it is what makes us human. It is therefore time to acknowledge that accidents happen. It is how you deal with them that separates success from failure.
Riello UPS Sentryum series gets an upgrade with two new models
Uninterruptible power supply manufacturer Riello UPS has expanded its transformerless Sentryum series with new 30 and 40 kVA versions. The two additions complement the 10, 15 and 20 kVA models already available in the Sentryum range,
which is the company’s third generation of transformer-free solutions. Delivering full-rated unity power (power factor 1) and up to 96.5% online efficiency, the Sentryum is designed to meet the needs of small and medium-sized data centres, as well as similarly mission-critical applications in the IT, telecoms, transport, and medical sectors. The two new 30-40 kVA models come with a choice of cabinet sizes to maximise battery autonomy and optimise floor space. The Active (ACT) chassis can house up to two battery strings in a footprint of just 0.35 m2, while the Xtend (XTD) option needs just 0.4 m2 space
and holds up to three battery strings. As well as its efficiency and compactness, the Sentryum features a control system that helps minimise harmonic voltage distortion, and offers high overload and short-circuit capacity. This enables the UPS to deal with sudden peak loads without having to transfer to bypass. Up to eight Sentryum UPSs can be paralleled together to increase capacity or redundancy. Riello • 01978 729 297 www.riello-ups.co.uk
Centiel CumulusPower – Safe-Hot-Swap capability
Safe-Hot-Swap is not just the ability to exchange UPS modules on a live system, it is the facility for this to be completed safely and to mitigate any human error. CumulusPower’s Distributed Active Redundant Architecture (DARA) ensures any module being added to a system can be fully isolated and tested within a running frame before it accepts any load. In a system without safe-hot-swap, any issue with a module going into a live system could have catastrophic consequences and the load could be lost. CumulusPower also reduces TCO
through high double conversion efficiency of >97.1% and offers 99.9999999% (nine nines) availability, reducing downtime to around 31.5 milliseconds per year. There is a significant difference between this and the most commonly used architecture of 99.9999% (six nines), which allows for 31.5 seconds of downtime per year. This makes CumulusPower one of the safest and most reliable UPSs available for power protection. Centiel • 01420 82031 www.centiel.co.uk
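Availability percentages convert to downtime with simple arithmetic, which is worth sanity-checking for any such claim: expected downtime per year is just (1 − availability) × seconds in a year. A quick sketch:

```python
# Convert an availability percentage into expected downtime per year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

def downtime_per_year(availability_pct: float) -> float:
    """Seconds of downtime per year implied by an availability figure."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

print(f"six nines:  {downtime_per_year(99.9999):.1f} s")          # ~31.5 s
print(f"nine nines: {downtime_per_year(99.9999999) * 1000:.1f} ms")  # ~31.5 ms
```

Each extra nine cuts expected downtime by a factor of ten, which is why the jump from six to nine nines moves the figure from seconds to milliseconds per year.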
Amito announces Harwood as minority investor in £38.7 million deal
Amito Ltd has secured significant minority investment, including follow-on funding from Harwood Private Equity, to support rapid growth. The data centre owner and operator, known for its award-winning Tier III facility in Reading, is looking to build on its continuing success with ambitious plans for expansion. Harwood is supporting these plans through an initial investment and through further committed capital. Amito CEO Ed Butler commented, “We are delighted to announce the investment partnership with Harwood. We believe we have secured a partner who shares our vision
for the business, has extensive experience in our industry, and is committed to supporting our goals with their expertise. We are proud
of Amito’s success so far and are keen to realise the next phase of our growth strategy alongside Harwood.” Jeremy Brade, Partner at Harwood Private Equity added, “Amito has proven itself to be an exceptional regional data centre, delivering consistently distinguished service to its growing customer base. We have been impressed with the Amito management team, its achievements to date and its powerful commitment to accelerate growth and expand. We look forward to working alongside the team to achieve these goals.” Amito • 0118 380 0599 www.amito.com
Schneider extends 3-Phase Easy UPS 3L from 250 kVA to 600 kVA
Schneider Electric has extended its Easy UPS 3L from 250 kVA to 600 kVA (400V) with the addition of 250, 300, and 400 kVA 3-phase Uninterruptible Power Supplies (UPSs) for external batteries. The Easy UPS 3L simplifies and streamlines configuration and service, delivering high availability and predictability to medium and large commercial buildings and light industrial UPS applications. With its compact footprint, highly available parallel and redundant design, and robust electrical specifications, Easy UPS 3L protects critical equipment in a wide range of environments from damage due to power outages, surges, and spikes. It is up to 96% efficient, bringing predictability to utility costs. Easy UPS 3L includes a wide battery voltage window and accommodates a variety of battery configurations. It comes with a full range of options and accessories, making it easy to integrate into different environments. Customers benefit from Schneider’s global service setup, with strong local networks of service specialists providing a complete range of services throughout the entire Easy UPS 3L lifecycle.
Schneider Electric • 0870 608 8608 www.se.com
How Xilinx is revolutionising the modern data centre
Xilinx Inc. has announced a range of new data centre products and solutions, including a new family of Alveo SmartNICs, smart world AI video analytics applications, an accelerated algorithmic trading reference design for sub-microsecond trading, and the Xilinx App Store. Today’s most demanding and complex applications, from networking and AI analytics to financial trading, require low-latency and real-time performance. Achieving this level of performance has been limited to expensive and lengthy hardware development.
With these new products and solutions, Xilinx is eliminating the barriers for software developers to quickly create and deploy software-defined, hardware accelerated applications on Alveo accelerator cards. “Data centres are transforming to increase networking bandwidth and optimise for workloads like artificial intelligence and real-time analytics,” said Salil Raje, executive vice president and general manager, Data Centre Group at Xilinx. “These complex, compute-intensive and constantly-evolving workloads are pushing existing infrastructure to its limits and driving the need
for fully composable, software-defined hardware accelerators that provide the adaptability to optimise today’s most demanding applications as well as the flexibility to quickly take on new workloads and protocols, and accelerate them at line rate.” Xilinx • +1 408 559 7778 www.xilinx.com
Lenovo assists in introducing industry’s first commercially available HPC micro data centre solution
Avnet, Schneider Electric, and Iceotope have been joined by the experts at Lenovo to bring us the industry’s first commercially available HPC micro data centre solution, cooled by integrated chassis-level immersion. Lenovo is to deploy its Lenovo ThinkSystem SR670 servers in a highly scalable, GPU-rich, liquid-cooled micro data centre solution. Sealed at the chassis level, the new solution enables artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC) workloads to be deployed
in close proximity to the location of data generation and use regardless of how harsh the environment is. Avnet is providing the integration services
for the solution on behalf of Schneider Electric and Iceotope to help them deploy globally to a wide range of customers. Avnet will convert the Lenovo ThinkSystem SR670 to liquid cooling and integrate Schneider Electric’s APC NetShelter liquid-cooled enclosure system. A full portfolio of lifecycle services such as warranty support, field installation and maintenance, advance exchange, repair/refurbishment, and IT asset disposition and value recovery will also be available. Lenovo • 020 3014 0095 www.lenovo.com
Q2 2021 www.datacentrereview.com 37
Facial recognition: Is the technology already out there fit for purpose?
For a while now, facial recognition technology has been touted by law enforcement as helping to fight crime. But considering the only stock image I could find for this article was of a white male, is strict regulation needed to prevent human rights violations?
The idea behind biometric technology, such as facial recognition, is to improve the ‘quality and efficiency’ of policing, whilst reducing costs. Sadly, whenever I hear the words ‘to reduce costs’, all I think is ‘to cheap out’. However, that doesn’t seem to have been the case here, as back in 2019, the Home Office planned to invest £97 million in a wider biometric strategy to help keep us safe. But when this technology is only helping some of us, is that still beneficial, or just a disaster waiting to happen?

The Council of Europe has called for strict rules to avoid the significant risks to privacy and data protection posed by the increasing use of facial recognition technologies. Furthermore, it has suggested certain applications of facial recognition should be banned altogether to avoid discrimination. In a new set of guidelines addressed to governments, legislators and businesses, the 47-state human rights organisation proposes that the
use of facial recognition for the sole purpose of determining a person’s skin colour, religious or other belief, sex, racial or ethnic origin, age, health or social status should be prohibited, and rightly so.

This ban should also be applied to ‘affect recognition’ technologies. This is an utterly terrifying concept, wherein the technology can identify emotions and be used to detect personality traits, inner feelings, mental health conditions or workers’ level of engagement. Since these uses still unfortunately pose significant risks in fields such as employment, access to insurance and education, ‘affect recognition’ should certainly get the boot.

“At its best, facial recognition can be convenient, helping us to navigate obstacles in our everyday lives. At its worst, it threatens our essential human rights, including privacy, equal treatment and non-discrimination, empowering state authorities and others to monitor and control important aspects of our lives – often without our knowledge or consent,” said Council of Europe secretary general, Marija Pejčinović Burić. “But this can be stopped. These guidelines ensure the protection of people’s personal dignity, human rights and fundamental freedoms, including the security of their personal data.”

The guidelines state that a democratic debate is needed on the use of live facial recognition in public places and schools, in light of its intrusiveness, and possibly also on the need for a moratorium pending further analysis. The use of covert live facial recognition technologies by law enforcement would only be acceptable if strictly necessary and proportionate to prevent imminent and substantial risks to public security that are documented in advance. Private companies should not be allowed to use facial recognition in uncontrolled environments, such as shopping centres, for marketing or private security purposes.
The guidelines were developed by the Consultative Committee of the Council of Europe ‘Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data’, which brings together experts representing the 55 state parties to the Convention, as well as 20 observer countries. The Convention, the first ever binding international treaty addressing the need to protect personal data, was opened for signature in Strasbourg 40 years ago, on January 28, 1981.

But despite that groundwork being laid back in the 80s, there is overwhelming evidence to suggest that the facial recognition technology in use today is not fit for purpose. In fact, facial recognition as it stands is actually more likely to misidentify darker-skinned faces. A growing body of research has exposed divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, black and 18-30 years old. If this is still the case, it really does beg the question: why was this technology rolled out in the first place?
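The divergent error rates that researchers report are typically found by breaking an evaluation down by demographic group rather than quoting a single headline accuracy figure. As a minimal illustrative sketch (the function name and all the data below are hypothetical, not drawn from any real evaluation), the idea looks like this:

```python
# Hypothetical sketch: measuring divergent error rates across demographic
# groups in a face-matching evaluation. All records below are invented
# purely for illustration.
from collections import defaultdict

def per_group_error_rates(results):
    """results: iterable of (group, predicted_match, true_match) tuples.
    Returns {group: error_rate}, where an error is any misclassification."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented evaluation records: (group label, system said match, ground truth)
sample = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", False, True), ("group_a", True, True),
    ("group_b", False, True), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, True),
]

rates = per_group_error_rates(sample)
# group_a: 1 error in 4 trials (0.25); group_b: 3 errors in 4 trials (0.75)
```

A single aggregate accuracy over this sample would hide the fact that one group sees three times the error rate of the other, which is precisely the pattern the research cited above keeps finding.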