

data centre news

February 2018



D342 21-22 March 2018 ExCeL London

inside... Special feature Infrastructure Management

Centre of Attention

Monica Brink of iland explains why preparing for Brexit and GDPR could be easier in the cloud.

Meet Me Room

Mat Clothier of Cloudhouse discusses his motivation for what he does and… editing the Teletubbies.


in this issue…




04 Welcome

07 Industry News
Cloudscene reveals the world’s leading data centre operators for 2018.

14 Centre of Attention
Monica Brink of iland explains why preparing for Brexit and GDPR could be easier in the cloud.

17 Meet Me Room
Mat Clothier of Cloudhouse discusses how to make it into the industry, his motivation for what he does and… editing the Teletubbies.

20 Case Study
Amdocs and Eaton work together to build and open a new data centre site in just 90 days.

22 Product Launch
Find out how your business could save with Vertiv’s new energy management initiative.

SPECIAL FEATURE: Infrastructure Management

26 Is East to West your data centre’s danger zone?
Nathaniel Wallis of Axial Systems discusses how to keep lateral data traffic protected.

28 Faisal H. Usmani, of Cyient Europe, explains how SDN can help keep pace with ever changing network demands.

30 Mark Gaydos of Nlyte Software explains why a multi-faceted approach is the key to security success.

32 Darren Watkins of Virtus Data Centres discusses why a lack of knowledge when it comes to DCIM could be preventing your business from reaping the rewards.

34 Cloud
Andrew Brinded of Nutanix discusses the pros, cons and considerations when it comes to choosing public or private cloud.

36 Disaster Recovery
Optimise your organisation. Christophe Bertrand of Arcserve discusses the threat of ransomware and the importance of disaster recovery.

38 IoT
Nick Sacke of Comms365 tells us why he believes shrink-wrapped services are the key to realising the vision of IoT at scale.

40 G.Fast Technology
Phillip Havens of Littelfuse explains how technology is speeding up access to the cloud.

42 Projects and Agreements
Spectra Logic chosen to preserve ITV’s digital content.

48 Company Showcase
The new Intel Xeon D-2100 processor enabling new capabilities for cloud, network and service providers.

50 Final Thought
Chris Wellfair of Secure I.T. Environments looks at a way to expand your data centre that you may have considered out of reach.

February 2018 | 3


Editor Claire Fletcher

Sales Director Ian Kitchener – 01634 673163

Studio Manager Ben Bristow – 01634 673163

Designer Jon Appleton

Business Support Administrator Carol Gylby – 01634 673163

Managing Director David Kitchener – 01634 673163

Accounts 01634 673163

Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173

The editor and publishers do not necessarily agree with the views expressed by contributors nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2017. All rights reserved.



With less than 100 days to go until the EU GDPR hits, apparently most of us are ready – 72% of us globally in fact, according to a recent study by EfficientIP. Here in the UK, despite the (forever) ongoing Brexit negotiations and general uncertainty over the enforcement and effectiveness of GDPR on local businesses, we are the most confident nation in Europe, with nearly two thirds of UK businesses saying they will be ready by the deadline. Well done us. Although, perhaps if ‘Brexit negotiations’ weren’t taking so long, we wouldn’t have needed to comply in the first place. Does anyone even know what is happening with that? Anyway, I digress.

Elsewhere in the world, North America is the most confident region globally, while German organisations seem the least confident. Come on Germany, we have faith in you!

In last month’s issue, our special feature was about all things GDPR, so this month we’ve shifted the focus to infrastructure management, which, ironically, has become more prevalent than ever with GDPR on the way.

Claire Fletcher, editor

For an organisation’s IT, infrastructure management spans a plethora of different areas; essentially it is the management of business components such as policies, processes, equipment, data, human resources and external contacts, to name but a few. The idea is that these are monitored in a way that ensures the overall effectiveness of your business.

For organisations caught between managing IT infrastructure and the business pressure to go digital, the solution lies in optimising said IT infrastructure and the applications related to day-to-day business operations. Optimising your infrastructure will ultimately result in cost savings and increased operational efficiencies, meaning the business becomes more agile, enabling you to keep pace with the rapid changes that are constantly occurring in the digital world.

For anyone who still fancies their GDPR fix, in this month’s Centre of Attention, Monica Brink of iland explains how the cloud is our friend when it comes to getting prepared.

Should you have any questions or opinions on the topics discussed please write to: Claire.Fletcher@






21-22 March 2018 ExCeL London

at the VIP Lounge

The Global Leader in Technical Education for the Digital Infrastructure Industry

Designing and delivering technical education programs for the data centre and network infrastructure sectors across the world


300 education programs delivered each year

Associate College education available in countries across the globe

city locations each year

UK Headquarters: +44 (0)1284 767100

1 - 2 May 2018

Emirates Old Trafford, Manchester





Recognising the need – Reflecting the market

This year we have had announcements of new builds in Birmingham, Hull, Manchester, Leeds and Liverpool… and this growth is set to continue. DataCentres North is the largest and most complete event outside London addressing the needs of all those involved in the ownership, design, build, management, operation and infrastructure needed to deliver effective datacentres, server and comms rooms.

The Exhibition

Featuring the country’s leading suppliers, this is your opportunity to see, discuss and source the latest products, services and solutions that can benefit your business and assist you to achieve your goals.

The Conference - Content is Key

DataCentres North Conference addresses both the strategic and operational issues affecting the region and its growth, as well as the challenges facing those involved in operating effective, efficient, secure and resilient datacentres, server and comms rooms.

The programme will address: • Datacentre Design • Energy & Sustainability • Direct Liquid Cooling • DCIM • Legislation • SLAs and Governance • Financing • Regional Developments • Standards • Power • Connectivity

Sample of Speakers
Mark Acton - Head of Datacentre Technical Consulting, CBRE - DCS & BCS - DCSG
Professor Ian Bitterlin - MD, Critical Facilities Consulting & BCS - DCSG Council Member
Fergus Innes - MD, Ireland France Subsea Cable
Steve Bowes-Phipps - Senior Data Centre Consultant, PTS Consulting
Emma Fryer - Associate Director, tech

Social Networking Dinner

DataCentres North will once again hold a social networking dinner on the first evening of the event. All tickets include a drink on arrival, a three-course meal and half a bottle of wine. Tickets are priced at £55 + VAT per person. A table of 10 is priced at £495 + VAT (which includes a 10% discount).

For the latest information visit or contact the DataCentres Team: 01892 518877 or email:

Register online now:

Supported By:

Organised in Association with:

industry news

Frequency and complexity of DDoS attacks on the rise

Netscout has released its 13th Annual Arbor Worldwide Infrastructure Security Report (WISR), which focuses on the operational challenges network operators face daily from cyber threats and the strategies adopted to address and mitigate them. “Attackers focused on complexity this year, leveraging weaponisation of IoT devices while shifting away from reliance on massive attack volume to achieve their goals. They have been effective, and the proportion of enterprises experiencing revenue loss due to DDoS nearly doubled this year, emphasising the significance of the DDoS threat,” said Darren Anstee, chief technology officer, Netscout Arbor. The exploitation of IoT devices and innovation from DDoS attack services are leading to more frequent and complex attacks: 57% of enterprises and 45% of data centre operators saw their internet bandwidth saturated due to DDoS attacks. Successful DDoS attacks are having greater operational and financial impact, with 57% citing reputation/brand damage as the main business impact and operational expenses second. 56% experienced a financial impact between $10,000 and $100,000, almost double the proportion from 2016. 88% of service providers utilise Intelligent DDoS Mitigation Solutions and 36% are utilising technology that automates DDoS mitigation. Increased investment in specialised tools and automation is driven by the sheer number of attacks faced in service provider networks. Netscout,

76% of CIOs say it could become impossible to manage digital performance as IT complexity soars

Dynatrace has announced the findings of an independent global survey of 800 CIOs, which reveals that 76% of organisations think IT complexity could soon make it impossible to manage digital performance efficiently. The study further highlights that IT complexity is growing exponentially; a single web or mobile transaction now crosses an average of 35 different technology systems or components, compared to 22 just five years ago. This growth has been driven by the rapid adoption of new technologies in recent years. However, the upward trend is set to accelerate, with 53% of CIOs planning to deploy even more technologies in the next 12 months. The research revealed that the key technologies CIOs will have adopted within the next 12 months include multi-cloud (95%), microservices (88%) and containers (86%). As a result of this mounting complexity, IT teams now spend an average of 29% of their time dealing with digital performance problems, costing their employers $2.5 million annually. As they search for a solution to these challenges, four in five (81%) CIOs said they think artificial intelligence (AI) will be critical to IT’s ability to master increasing IT complexity, with 83% having already deployed AI or planning to do so in the next 12 months. Dynatrace,

Research finds digital transformation drivers and KPIs don’t stack up

Ensono and the Cloud Industry Forum (CIF) have released results from the latest digital transformation study, revealing that while cost saving was the most common driver for digital transformation, success metrics do not match. The research asked 250 business and IT decision makers in enterprise organisations across multiple vertical sectors about their digital transformation experiences. When asked how they are measuring success, only 51% said they were measuring cost savings, despite 70% saying this was a key driver. The most cited KPI was customer satisfaction, with 52% saying they were measuring this versus just 40% saying it was a driver. The top three drivers for digital transformation were cost savings (70%), increasing productivity (59%) and increasing profitability (58%). Drivers lower down the list included competing with industry disruptors (35%), differentiation (35%), speeding up time to market (33%) and customer experience (40%). Almost all (99%) are intending to measure success in some way, and the majority said that the current value achieved was either in line with expectations or, in almost half of cases (48%), higher than expected. “For digital transformation strategies to succeed, the IT department, the business and the board need to have a clear and shared vision, and that vision needs to focus on people first, with the right technology facilitating,” said Simon Ratcliffe, principal consultant at Ensono. Ensono, Cloud Industry Forum,



Gartner says global IT spending to reach $3.7 trillion in 2018

Worldwide IT spending is projected to total $3.7 trillion in 2018, an increase of 4.5% from 2017, according to the latest forecast by Gartner. “Global IT spending growth began to turn around in 2017, with continued growth expected over the next few years. However, uncertainty looms as organisations consider the potential impacts of Brexit, currency fluctuations, and a possible global recession,” said John-David Lovelock, research vice president at Gartner. “Despite this uncertainty, businesses will continue to invest in IT as they anticipate revenue growth, but their spending patterns will shift. Projects in digital business, blockchain, Internet of Things, and progression from big data to algorithms to machine learning to artificial intelligence (AI) will continue to be main drivers of growth.” Enterprise software continues to exhibit strong growth, with worldwide software spending projected to grow 9.5% in 2018, and it will grow another 8.4% in 2019 to total $421 billion (see Table 1). Organisations are expected to increase spending on enterprise application software in 2018, with more of the budget shifting to software as a service (SaaS). The growing availability of SaaS-based solutions is encouraging new adoption and spending across many subcategories, such as financial management systems and human capital management. Gartner,

Table 1. Worldwide IT Spending Forecast (Billions of US Dollars): columns for 2017, 2018 and 2019 Spending and Growth (%); rows include Enterprise Software, IT Services, Data Centre Systems, Communications Services and Overall IT.

Cloudscene reveals the world’s leading data centre operators and the Fast 50 Markets to Colocate in 2018

Cloudscene has released the results of the Q4 2017 Data Centre Operator Leader Board, and revealed the Fast 50 Markets to Colocate in 2018. For the fourth consecutive quarter, Equinix has dominated all leader board regions, retaining the number one position across the North America, EMEA, Oceania and Asia markets. The colocation giant demonstrated its strength throughout 2017, with Equinix’s overall leader board score growing 9.04% from Q1 to Q4 2017. Second-placed Interxion also grew significantly, with a 24.67% increase in their EMEA leader board score since Q1 2017. The final leader board for 2017 resulted in considerable movement due to the introduction of new eligibility criteria, which requires all leader board entrants to be carrier-neutral data centre operators. In addition to the neutrality criteria, data centre operators are ranked according to data centre density (total facilities in the region) and connectivity (total service providers across all data centres in the region). This quarter, Cloudscene has also released the Fast 50 Markets to Colocate in 2018. Based on total data centres, service providers and network fabrics, Europe dominated the top three positions, with London, Frankfurt and Amsterdam ahead of Washington DC and Hong Kong, which appeared in fourth and fifth place respectively. Cloudscene,


Information Age has given way to Interconnected Era, finds Equinix survey

The Interconnected Era has replaced the Information Age, according to the results of an independent survey commissioned by Equinix. Just over half of those surveyed (51%) state that the Interconnected Era has taken over from the Information Age. The survey of senior IT decision-makers also found that 80% of those who use the public internet to exchange data with employees, partners, customers and others outside of their core infrastructure are concerned about the security of their data. The survey makes it clear that, as more enterprises migrate data and services to the cloud, interconnection with leading global digital service providers is a key business enabler. Four out of five (79%) of those surveyed said that they consider it important for their business to have direct interconnection on the same network with service providers such as Google, Amazon and Microsoft. As well as this, 89% of respondents agreed that it is important for their business to connect with external digital ecosystems in order to scale. Just 11% said that interconnection was not important to scaling their business. Maurice Mortell, Equinix managing director for Ireland and emerging markets said, “To maintain a competitive advantage, organisations are embracing digital transformation and interconnecting with one another, key service providers and customers.”

Analyst report warns users of hybrid cloud performance issues

A new white paper by Quocirca on behalf of data centre operator Next Generation Data (NGD) alerts user organisations to the performance risks of using the public internet for providing wide area network (WAN) connectivity between their private, public and legacy environments. The report also indicates the alternative of using dedicated WAN links will prove cost prohibitive for many. These concerns are highlighted by market analyst firm Quocirca in the white paper, which discusses the benefits and pitfalls of hybrid cloud implementations and the increasing importance of colocation data centres for delivering optimum hybrid performance. Taking the recently launched Microsoft Azure Stack hybrid cloud solution as an example, it advises using a colocation facility rather than an on-premises deployment in order to minimise latency when connecting to Azure public cloud services. As part of this, it emphasises the need for colocation data centres with sufficiently dense power availability and direct gateway connections into Microsoft’s dedicated ExpressRoute WAN. However, it warns there are currently very few data centres with such capabilities, potentially exposing organisations to facilities that are purely points of presence (PoP) along the ExpressRoute. Clive Longbottom, Quocirca’s principal research analyst said, “The obvious solution is to find a colocation provider who is also an ExpressRoute termination point. Here, the end customer takes space within the colocation facility and places their Azure Stack equipment within it. Using intra-facility connectivity speeds, they then connect through to the ExpressRoute environment, giving an Azure Stack/Azure Public Cloud experience that is essentially one consistently performing platform.”



Cloud growth rate increases; Amazon, Microsoft & Google all gain market share

New Q4 data from Synergy Research Group shows that spend on cloud infrastructure services jumped 46% from the final quarter of 2016, comfortably beating the growth rates achieved in the previous three quarters. A large part of the expansion was driven by aggressive growth at Amazon (AWS), Microsoft, Google and Alibaba, who all increased their share of the worldwide market at the expense of smaller cloud providers. AWS maintained its dominant position with revenues that exceeded the next four closest competitors combined, despite huge strides being made by Microsoft. A notable change in Q4 was that a doubling of cloud revenues at Alibaba enabled it to join the ranks of the top five operators for the first time. Meanwhile, IBM maintains its position as the third largest cloud provider in the market, thanks primarily to its strong leadership in hosted private cloud services. With most of the major cloud providers having now released their earnings data for Q4, Synergy estimates that quarterly cloud infrastructure service revenues (including IaaS, PaaS and hosted private cloud services) have now reached well over $13 billion, with full-year 2017 revenues having grown 44% from the previous year. Public IaaS and PaaS services account for the bulk of the market and grew by 50% in Q4. In public cloud, the dominance of the top five providers is even more pronounced, as they control almost three quarters of the market. Synergy Research Group,

451 Research: 60% of enterprises shift to off-premises IT by 2019

In its inaugural Voice of the Enterprise (VotE) Digital Pulse survey, 451 Research finds that IT leaders are embracing a new model of off-premises, service-oriented IT solutions and will be looking to harness data in new ways to differentiate themselves in 2018. Respondents revealed that the top three IT initiatives for 2018 are all data-centric: business intelligence, machine learning/artificial intelligence and big data. The survey finds that IT organisations’ ability to exploit digital transformation is uneven, with over 60% of organisations having no formal transformation strategy in place and many admitting they face challenges in achieving optimal business-IT alignment. 60% of enterprises surveyed for Digital Pulse say they will run the majority of their IT outside the confines of enterprise data centres by the end of 2019, chiefly using off-premises service provider environments such as public cloud infrastructure and SaaS. Accordingly, the largest spending increase in 2018 is for IT delivered ‘as a service’, at the expense of the traditional on-premises model. 451 Research finds information security is also high on the IT agenda, with 16% of organisations saying that area is getting the largest budget increase. Providers such as Microsoft and Amazon Web Services (AWS) are emerging as enterprises’ most strategic technology suppliers; 35% of organisations say Microsoft will be their most strategic partner by the end of 2019, compared to 33% today, while 17% say AWS will hold that position two years from now compared to 7% today. 451 Research,


POWER DISTRIBUTION UNITS
• Vertical/Horizontal Mounting
• Combination Units
• Power Monitoring
• Remote Monitoring
• Rated at 13A, 16A and 32A
• Bespoke Units
• Robust Metal Construction
• Availability From Stock
• Next Day Delivery

WE’LL BE THERE! D342

21-22 March 2018 ExCeL London

Designed and Manufactured in the UK

+44 (0)20 8905 7273

on the cover

Industry Insight

A Q&A with Marissa Maxwell, business development manager at Olson, highlighting industry challenges, changes and what she thinks the future holds for the data centre.

What have been the most significant changes to the industry over the past five years?

In the past five years, the industry has changed dramatically in terms of computer and security networks. I notice more and more people are coming off the typical Cat5e, 6 and 6a, moving towards fibre and trying to future-proof the network wherever possible. This makes perfect sense, as it eliminates any time wasted on continuous upgrades on

a daily or weekly basis as we move forward. There is no way people can stay in the dark ages anymore; after all, technology is only ever evolving, and will continue to do so.

What have been/are the main challenges for your sector?

Our challenges are mainly to do with the type of PDUs now available. We are talking about plastic, with low-budget components inside, unlike the robust steel construction with high-quality components we are used to. Power is the most critical part of a network, why would you cut corners on the quality of the source you need most? These are our questions, and time and time again we get customers coming back to us after buying a lower-cost PDU, reporting that it has failed and asking how quickly we can turn it around to ship a brand-new Olson PDU out to them, which can sometimes be challenging.


Where do you see the data centre industry heading in the future?

In terms of PDUs and the changes there, I am noticing the demand for intelligence more and more. I think that in the future there will be a lot more intelligent plug-and-play solutions than there are currently, purely because they are so much quicker and easier to install. Various copper and fibre trunks in this style are often used, saving lots of time. However, this time-saving technology does come at quite a price. Technology and solutions for data cabling infrastructure will keep getting more expensive as we move along and networks are upgraded. A PDU is only one small part of the network infrastructure, but as the industry as a whole is ever-growing, so is the PDU market alongside it.

Another challenge for the data centre is the cost of energy. If you imagine a data centre with banks and banks of cabinets, where every cabinet has power and various other equipment connected to that power, and this power is switched on all day and all night, you’re looking at some hefty energy costs, with power being used constantly and new cabinets continuously being put inside the data centres.

What will be the market drivers for the future?

I think one of the biggest market drivers will be versatility with regards to products. As opposed to having so many different products (PDUs), perhaps the way forward would be one product that is entirely interchangeable with different add-ons/modules. This is not only cost effective for manufacturers like ourselves, but also for the customer. If they had one product capable of eight different configurations rather than eight separate products with different functions, this would be a smart solution to offer and a real value-adder for Olson’s portfolio.

“Power is the most critical part of a network, why would you cut corners on the quality of the source you need most?”

How is Olson keeping abreast of current and future industry developments, and what lies ahead for the company?

We get a lot of enquiries for three-phase PDUs and intelligent PDUs, which is something we did not have in the past and are proud to say is coming soon. If we are going to stay ahead of the game, we must have the latest technology in our portfolio too. Whilst we offer hundreds of different variations of PDU, intelligence is something missing and a critical part for a lot of our customers, especially within the data centre industry. The more intelligence the PDU has, naturally, the less people need to do to run networks successfully. The intelligent PDU range from Olson will launch later in the year, bringing with it a wider offering to our data centre customers. Our non-DC customers are also giving us a chance to show off just what else can be done with our PDUs.

We are very proud to be a British manufacturer and we are only ever evolving, expanding the business and our portfolio. Olson customer service is second to none and has been for the past 50-plus years, and that is one thing about Olson that will never change. The fact that our lead times are so short and we can often manufacture within a few hours, allowing for next-day delivery, is a huge strength of Olson. Our can-do attitude and speed, combined with our attentive nature and expertise, is what keeps our customers returning again and again, and we wouldn’t have it any other way.


centre of attention

Up in the Cloud

The double whammy of GDPR and Brexit has sparked much confusion over what effect they will have on businesses. However, don’t despair: Monica Brink, director of EMEA marketing at iland, is here to explain how preparing for Brexit could be a lot easier in the cloud.


There is much uncertainty around how the new EU General Data Protection Regulation (GDPR), which comes into effect in May 2018, will work alongside Brexit. Both UK and EU based companies must anticipate tremendous changes to policy and politics. Many questions are being put on the table for debate. Will GDPR be as easily enforceable in the UK after Brexit? Will the terms of leaving the EU have an impact on GDPR?

On the whole, there is natural concern that the combined impact of Brexit and GDPR will put a strain on UK businesses. There is also the question of how the new regulations will apply to international companies setting up offices in the EU. Gartner research indicates that Brexit will have a massive impact on how IT budgets will be spent in 2018. Organisations will want greater flexibility and as such will be spending more on software and cloud-based services rather than making long-term, upfront investments. Brexit will accelerate the adoption of cloud-based services as companies prepare for an uncertain economic environment. Cloud services have been proven to help control IT spend by moving IT CAPEX to OPEX, which will be increasingly important.

Image courtesy of layerv


“Brexit will accelerate the adoption of cloud-based services as companies prepare for an uncertain economic environment.”

In short, businesses will understandably be more cautious with IT spend, and this will drive them towards cloud services. In pursuit of greater flexibility and controlled IT spending in the pre-Brexit environment, companies will want to avoid lock-in with any one cloud provider. When choosing cloud providers, flexible pricing structures will be a high priority, as will the need to avoid paying for unused cloud capacity.

Operating in a hybrid cloud model, with some workloads in an on-premises data centre configured in a private cloud and others deployed to the public cloud, offers many advantages. The hybrid cloud model benefits organisations that have different workloads with different characteristics and requirements. For example, applications that have predictable capacity demands and regular, ongoing usage are often kept on-premises. Meanwhile, apps that have seasonal variability and can clearly benefit from highly scalable, on-demand capacity are deployed to the cloud. Having this hybrid cloud model will give companies the flexible, scalable options they need to adapt to uncertain business environments.

In addition, in the uncertain regulatory environment, companies will look for a cloud provider that can not only offer cloud services, but also support across the full lifecycle of a service, from pre-sales to onboarding to ongoing management. Many organisations will need support from their cloud providers to navigate changing compliance requirements. Providers will need to outline crystal clear paths to cloud compliance and data privacy regulations, including on-demand security and compliance reports that give customers full visibility and control of their cloud environments and can help them pass data privacy and other compliance audits.

Brexit further complicates cloud compliance and data privacy concerns, as data privacy rules for the EU and the UK could change in the future. This makes it even more imperative that UK organisations remain up to date with data privacy and cloud security requirements. Larger companies often require a Model Contract Clause as part of their cloud agreements, to ensure

the liability of a breach is shared between the cloud provider and the customer.

Yet it is not only stormy skies ahead. Brexit will also create new opportunities for businesses, by allowing for the emergence of new markets, new trade partners, evolving domestic demand, and so forth. As 2018 progresses, cloud services will help companies be agile enough to take advantage of these opportunities without exhausting their IT budgets. They will be able to quickly spin up new workloads, test new apps and add more IT capacity and scalability in the cloud.

Recent Forrester research tells us that cloud providers are continuously expanding their offerings in the European market, and that on-premises private cloud adoption in the UK has jumped from around a third in 2016 to more than 60% in 2017. Forrester speculates that this ‘sudden enthusiasm’ for private cloud could illustrate how Brexit is prompting businesses to adopt cloud more quickly than they otherwise would, as they prepare for every possible Brexit outcome.

To avoid high upfront investments during this preparation period, businesses can take advantage of services such as Infrastructure-as-a-Service (IaaS) and Disaster-Recovery-as-a-Service (DRaaS). IaaS allows for the hosting of development, testing and production workloads in the cloud. It also allows businesses to scale up and down on demand and increase capacity without investing in on-premises IT infrastructure. DRaaS gives the option of replicating IT apps and data to the cloud for failover in the event of a disaster to ensure business continuity. In this way, businesses can avoid investing in a second data centre location for DR.

If you want to learn more, join iland at Cloud Expo Europe on March 21 and 22, at ExCeL London.

Silicon Protection

Fuse 881

TVS Diode

PolySwitch PTC

PROTECTING THE HIGHWAY TO THE CLOUD

Littelfuse Datacenter & Cloud Infrastructure Solutions

Today's datacenters must handle the vastly higher volumes of data being produced by millions of mobile devices and the objects that make up the Internet of Things (IoT). To keep these critical facilities running properly, datacenter operators need advanced circuit protection, sensing, switching and power management components. Littelfuse is committed to supplying datacenter developers and managers with the fuses, PTCs, GDTs, varistors, sensors, thyristors, TVS diodes and arrays, IGBTs and all the other components they need to keep these facilities operating efficiently and reliably and to protect their assets. For product and application guidance information please visit:

meet me room

Mat Clothier - Cloudhouse
Man of many talents

Mat Clothier, CEO, CTO and founder of Cloudhouse, is here to discuss how to make it into the industry, his motivation for what he does and… editing the Teletubbies - yes, you heard right.

What were you doing before you joined Cloudhouse, and how did you first get involved in the industry?

I've been in the IT industry for my entire working life, and my interest in IT started when I was very young - in fact, I used to build computers as a kid! As a teenager I worked in a computer shop, before going to university to study computing. I then started an IT business, so it's all I've ever done. Prior to Cloudhouse, I ran an IT services business, in which we ran people's infrastructure on-premises and built data centre infrastructures for them. I ran a small colocation data centre facility for some time, and then I sold that to a software vendor in the US. Finally I moved into building infrastructure software, which is what we're doing now.

Can you tell us about any projects you are currently working on?

We're working with a number of large enterprise organisations at the moment, and in particular some central government organisations, which have had a sprawl of different data centres, where everything is running across different providers and different outsourcers, and we're helping to consolidate those. One of the things that drives this diversity is the fact that organisations are having to run legacy systems, and when they want to invest in the latest Windows system, they're discovering that they can't bring everything with them, because it's tied to the legacy version. They're locked in wherever it's running today. Our aim is to consolidate everything into one modern system.

Mat loves motorsports and even has a large motor manufacturer as a client

What is the main motivation in the work that you do?

Well, there's the normal commercial motivation, of course. But the thing I find most interesting is that I get frustrated by the illogical inefficiencies people live with. If there's a better way to deliver services, you feel like you can help others in some way - it drives me mad when people don't update their tech for illogical reasons. I find it very satisfying when we are able

February 2018 | 17

meet me room

Image courtesy of BBC

to improve efficiency with what we do. It's frustrating when businesses continue to work in their old ways - you can see why it seems obvious for them to do that, but if only they knew how to fix the problems. We know how to do that. That's what drives me to create software.

How would you encourage a school leaver to get involved in your industry? What are their options?

We've recently started hiring apprentices, and the government has been encouraging businesses to offer apprenticeships, which is a really good thing. Microsoft, Rackspace and many other big organisations are investing in apprentices; it's a great way to bring young talent into the industry. One of the nice things about it is that, generally, people as a whole are valuing capability and intellect over and above qualifications. Some of our star employees started out as young people walking in with low qualifications, but they've been able to prove themselves to us. The industry is becoming more accepting of talent above everything else. For school leavers, I'd say as much as anything you need to have the tenacity to do the leg work and get your foot in the door. People are generally quite accepting, and I'm personally very supportive of these schemes.

What part of your job do you find the most challenging?

I think what I find most challenging is also what I find most enjoyable about my job.

Mat actually edited the end sequence of Teletubbies as part of his school work experience.

“For school leavers, I’d say as much as anything you need to have the tenacity to do the leg work and get your foot in the door.”

I'm essentially wearing two hats: as CEO and CTO, I'm ultimately responsible for running the business, but I also have a very strong focus on the technology leadership of what we do in this company. Juggling all of the conflicting priorities on a daily basis is what's challenging, and making sure that I'm covering everything in a wide range of roles. But then it also makes it exciting - I have no idea what I'm going to be doing each day, and that sense of enjoyment really makes it worthwhile.

What are your company's aims for the next 12 months?

We grew 350% last year; we're growing dramatically as a business and we plan to continue. We're working very closely with major global partners, such as Microsoft, Citrix, Computacenter, Fujitsu and CSC. There's a whole section of the industry that is stuck running legacy systems, and through working with our partners we want to help as much of the industry as we can to resolve this as quickly and efficiently as possible. We want to remove the burden of the legacy systems they're stuck with.

What are your hobbies/interests outside work?

When I'm not working, my priority is spending time with my family; that's very important to me. In terms of personal interests, I absolutely love motorsport - I'm a real petrol head! Cars, motor racing, Formula 1; anything related to that. I try to find any opportunity I can to involve cars in my work. I particularly love the fact that we have a large motor manufacturer as a client - we get to go and see their factories and how they build their cars. Even Formula 1 teams run legacy systems, and we can help them. I try to cross over my hobbies and my work whenever I can!

If you could travel back to any time period in history, which would it be and why?

It would have to be the dawn of the computing era, so being born about 20-30 years earlier. I wish that I had been there at the beginning, when computing really started to go mainstream. If I had been, I hope that I would have ended up in the same industry, because there was so much opportunity to do new things in the pioneering days. Although we're still in early days now and there are new things all the time, it would have been exciting to be there right at the beginning, where it all started.

Can you remember what you wanted to be as a child?

I wanted to be in video production at one point when I was at school; that's what I ended up doing for work experience. I actually got to edit the closing sequence of the Teletubbies, where they all jump into the hill. I suppose you could say it was that that led me into IT!

If you could invite three guests, dead or alive, from history to dinner, who would they be and why?

I'd love to have dinner with Bill Gates. He's certainly someone who's made a tremendous impact on our industry, so I think it would be amazing to meet him in person. I'm a great fan of Elon Musk as well; I think he'd be great to have an interesting dinner conversation with. As for a third, I'm not sure. Maybe Alan Turing - he would definitely be an interesting guest! He was there where it all began, really, so that would be incredible.

case study

The 90-Day data centre challenge

Amdocs and Eaton work together to build and open a new site in less than three months.


For data centre operators, speed isn't just a server issue; often it's a vital factor in getting new sites operational. That was the case for Amdocs - a global software and services provider that operates data centres in 85 countries - which had less than 90 days to close an existing site and move itself, and all its customers, into a new purpose-built facility. Amdocs is a global organisation with over 25,000 employees, whose customers comprise some of the world's most successful wireless, wireline, broadband, cable and satellite, mobile and financial services providers. It provides a range of technologies that help drive its customers' digital and network transformation, such as cloud, IT modernisation and DevOps,


and its data centres are a critical component of its service. What all its customers have in common is their high expectation for the integrity and availability of its data centres. Amdocs had had a base in the heart of London's Square Mile, close to major corporations and financial institutions, but took the decision to consolidate its satellite offices into one purpose-built, modern building. Doing so meant Amdocs could reduce the overheads that come with being in central London and running multiple sites. More than this, it gave the company the opportunity to build a site that would address all current needs as well as being future-proofed, and it meant it could design a more efficient data centre and pass those cost savings on to its customers.

A looming deadline

Timing was the crunch issue. Amdocs had acquired a new site and all the design and planning work was complete, but the company had less than 90 days to move out of its current site and into the new facility, which was as yet unbuilt, but which had to be fully operational. No ifs, no buts, no half measures - the 90-day date was fixed and immovable. For Amdocs, the pressure was to identify partners that could be trusted to hit every deadline throughout the move. Mark Goulding, the customer manager for Amdocs at Eaton, said, "This project was all about trust - Amdocs needed to know it was in safe hands and the project would be delivered on time." Eaton responded to Amdocs' tender with its own trusted

case study

partner, Carter Sullivan, which specialises in the design, manufacture and installation of complex UPS systems, high-end data centre solutions and bespoke switchgear. Working together, the two companies put forward a solution using Eaton switchgear and UPS back-up. At the same time, Carter Sullivan coordinated the fit out of the air conditioning system throughout the data centre and a separate tender for the office itself. Mark continued, “We’ve had a long, successful relationship with Eaton, so we knew they understood our capabilities. Likewise, we’ve worked with Carter Sullivan on many builds over the years and knew they’d be up to the task. It made for a very positive and effective working relationship on what was a challenging, short timeframe for a data centre build.”  

A "bang on" solution

In the first phase of the installation, Eaton and Carter Sullivan implemented a complete power security solution, based on Amdocs' design and built around Eaton's Uninterruptible Power Supply (UPS), switchgear, transformers and cold aisle containment. The solution also factored in isolation transformers to ensure electrical isolation for the system and prevent downstream issues from the building's 4-pole main circuit breaker. For the UPS, Eaton first researched Amdocs' predicted data centre loads and proposed its 93PM UPS range. According to Mark Goulding at Eaton, the 120kW unit is "bang on their requirement", and the battery system is sized accordingly. The 93PM UPS guarantees power supply for a defined period to critical loads in the event of a power failure. The unit also protects critical loads by filtering out power fluctuations, such as voltage spikes, and

providing continuously high-quality power. The design of the 93PM is based on online double conversion technology, which is widely recognised as giving the best possible protection, because it guarantees a clean and reliable power supply regardless of the quality of the grid. Energy loss with the 93PM is minimal compared to traditional double conversion UPS devices, and it can achieve an efficiency rating of 96.7%. Once the power solutions had been installed, Eaton and Carter Sullivan worked with Amdocs to design a system that would make the data centre as energy efficient as possible. This is based around Eaton's Variable Module Management System (VMMS), which increases efficiency without compromising reliability. On the physical design, the two companies came up with a concept of two rows of thirty 42U server cabinets sat in an open area, coupled with cold aisle containment to provide more effective airflow. This was a completely bespoke design that mixed racks from various manufacturers into one solution and comprised a set of aisle doors with roof panels. In the second phase of the installation, Eaton designed and built a future-proof electrical panel that incorporates all of Amdocs' requirements for the building infrastructure, IT, generator, and

bypass facilities. It is designed so that Amdocs can avoid interrupting the data centre’s operation during any future expansion. The panel is based on the Eaton xEnergy LV Switch system, which is designed to meet constantly increasing requirements, providing optimum conditions for building infrastructure up to 5,000A. As well as providing the hardware and software for the power security solutions, Eaton also provides Amdocs with ongoing remote monitoring and maintenance services. In the event that there’s ever a power issue, Eaton would have service personnel onsite in under four hours.
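To put the quoted UPS efficiency figure into perspective, a rough calculation shows what the difference between a 96.7% efficient unit and a more typical 94% double conversion UPS means over a year. The 94% baseline, the constant 120kW load, the running hours and the electricity tariff below are illustrative assumptions for the sake of the example, not figures from the Amdocs project:

```python
# Hypothetical comparison of annual UPS losses at a steady 120 kW load.
# The 96.7% figure is quoted for the 93PM; the 94% baseline, tariff
# and constant-load assumption are invented for illustration only.
LOAD_KW = 120.0
HOURS_PER_YEAR = 24 * 365
TARIFF_GBP_PER_KWH = 0.12  # assumed electricity price

def annual_loss_kwh(load_kw: float, efficiency: float) -> float:
    """Energy drawn from the grid minus energy delivered, over a year."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * HOURS_PER_YEAR

loss_93pm = annual_loss_kwh(LOAD_KW, 0.967)
loss_legacy = annual_loss_kwh(LOAD_KW, 0.94)
saving_gbp = (loss_legacy - loss_93pm) * TARIFF_GBP_PER_KWH

print(f"93PM losses:   {loss_93pm:,.0f} kWh/year")
print(f"Legacy losses: {loss_legacy:,.0f} kWh/year")
print(f"Indicative saving: £{saving_gbp:,.0f}/year")
```

On these assumptions the higher-efficiency unit wastes roughly half as much energy, which is where the ongoing cost savings passed on to customers come from.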

Around the world in 90 days

“Timing was the crunch issue. Amdocs had acquired a new site and had less than 90 days to move out of its current site and into the new facility.”

The move was completed within the 90-day timeframe and the modern facility is now operating successfully using the new power management concept. What's more, the concept is spreading to other Amdocs sites; in fact, the company plans to use the data centre design and infrastructure as a model for the development of its other data centre projects around the world. Ronnen Perry, global technical manager and regional facility manager at Amdocs, said, "I can honestly say that my experience with this project ranks with the best that I've worked with anywhere in the world. The outstanding efforts of Eaton and Carter Sullivan made it possible to complete the Amdocs data centre on schedule and on budget." The data centre is delivering immediate results and cost savings; in fact, it has the lowest PUE of any Amdocs data centre worldwide. Looking ahead, Amdocs has recently installed a second UPS for the B-side of the data centre and plans to roll out this approach in its other data centres around the world.

product launch

You've got the power!

Vertiv has launched cost-saving and revenue-generating new energy control options for data centres. DCN met up with Emiliano Cevenini, vice president of sales and business development for the AC Power Division of Vertiv, to find out more about this innovative new energy management initiative.


The meeting took place at the recent Datacloud UK conference in London, where many of the top names in the industry had gathered to hear presentations about what the future holds. Emiliano was at the event to deliver details of this exciting and innovative new service being offered by the company, which looks to deliver cost savings and new forms of revenue via optimisation of 'behind the meter' energy assets of data centres.


Emiliano explains, “The idea of homes gaining energy efficiency savings and even making money through intelligent use of power has seen its profile rise in recent years. Our new service moves the concept into the commercial world of data centres and allows operators to use their UPS systems to not only save money, but make money too.” Emiliano continues, “For many in the Datacloud UK audience my ideas may have come as a bit of a surprise. I proposed that ancillary

grid services can be ‘unlocked’ with existing battery storage in data centres. In our industry, this is something often thought of as a necessary evil in the event of power failure, or more of a safety measure. What people are less aware of are the potential benefits to energy cost optimisation and how this can create additional revenue. “Despite this, I think the audience felt this was an interesting proposal, albeit one that would take time to implement. Interestingly, some in the audience

product launch

saw this point as an opportunity to support the business case for investing in data centres." Using new products and services from Vertiv, operators can harness their backup power assets to save money by making sure peak savings are delivered - for example, by only charging at optimal times. However, Vertiv's system goes further and allows data centres to use the same assets to deliver power back to the grid and generate extra income for the company. Emiliano argues that few data centres are run at 100% capacity much of the time; this aspect of the industry has understandably grown up around the need to be super reliable and efficient. However, Vertiv argues that with improvements in technology and the addition of its new services, the business advantages of energy efficiency and revenue generation can be achieved without ever compromising the performance of the data centre. The service works on a strict priority system where performance and reliability always come first, but when the cost savings and energy push-back can safely occur, they do. "The incremental business case is really strong," says Emiliano. "Data centres already have the equipment to make this work, with a few simple-to-fit additions from us and the software we provide - why not make the best of every asset you have?" The services the company plans to offer come in a whole range of flavours, says Emiliano. Companies can have complete control of their system and negotiate with the grid on the return for their power creation. Or, right at the other end of the scale, Vertiv will come and fit the system, operate the software, deal with the energy company, and simply deliver the energy savings aspect and a cut of any income from power generation for the grid.
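The priority system described above - reliability first, savings and export second - can be sketched as a toy decision rule. All thresholds, prices and the reserve level here are invented for illustration; Vertiv's actual control software is not public, and a real system would be far more sophisticated:

```python
# Toy decision logic for a 'behind the meter' UPS battery: charge when
# grid prices are low, export when prices are high, but never let the
# reserve fall below the level held back to protect the data centre.
# All thresholds and prices are invented for illustration.
MIN_RESERVE = 0.80   # fraction of battery always reserved for backup
CHEAP = 0.05         # £/kWh - charge at or below this price
EXPENSIVE = 0.20     # £/kWh - export at or above this price

def battery_action(price_gbp_kwh: float, charge_level: float) -> str:
    """Decide what the battery does this interval; reliability wins."""
    if charge_level < MIN_RESERVE:
        return "charge"            # restore the backup reserve first
    if price_gbp_kwh <= CHEAP and charge_level < 1.0:
        return "charge"            # top up while energy is cheap
    if price_gbp_kwh >= EXPENSIVE and charge_level > MIN_RESERVE:
        return "export"            # sell the surplus back to the grid
    return "hold"

print(battery_action(0.04, 0.90))  # charge
print(battery_action(0.25, 0.95))  # export
print(battery_action(0.25, 0.75))  # charge - reserve comes first
```

The point of the sketch is the ordering of the rules: the backup reserve check comes before any commercial decision, mirroring the "performance and reliability always come first" principle.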

Vertiv’s predictions of what customers can earn each year over a fixed term contract of five years

“People aren’t aware of the potential benefits to energy cost optimisation and how this can create additional revenue.”

The company's software partner for these services is Upside Energy, an aggregator with the experience to deliver the visibility and control Vertiv expects customers will need and want. Emiliano says, "This is definitely not a one-size-fits-all offering. Each system will be created specifically for the company and data centre or centres involved." Vertiv also says that its system is set up to deliver energy speedily to the grid when it needs it most, which delivers the highest returns, as speed is what the grid values and rewards the most. The company also argues that speed features in another aspect of the service, in the form of a fast return on investment. Check out the graph above to see what Vertiv predicts customers can earn each year over a fixed-term contract of five years. Although primarily aimed at data centres in its launch phase, Emiliano says that the same benefits can be delivered in many sectors that have similar assets and are looking to tap into the same benefits. He also underlines that

the new system works well for all sorts of data centre set-ups, including those with multiple sites and colocation operations too. Emiliano adds, "The system is based on the latest machine learning and software, so it's low intensity not only in terms of investment, but manpower as well. No large team of technicians is needed to run the system. Each customer also gets a personalised dashboard so they can see exactly what is going on and what the system is doing any time they want. The system can produce daily, weekly or monthly reports - whatever the customer needs." The first systems will be coming online in Q1 of this year and, as well as delivering all the benefits to its first customers, Vertiv will use the projects as case studies to show what can be done and the advantages offered. Ultimately, Vertiv believes the technology and culture now exist to create a strong demand for this type of service, and is encouraging potential customers to get in touch.





The Event Of The Year. May 9th - 10th 2018

Ready in good stead for 2018, EI Live! is back and ready to blow your smart home socks off. There'll be a brand new format for exhibitors, a delicious networking dinner where the winners of the Smart Building Awards will be announced, and a jam-packed learning zone with a super schedule of spectacular speakers.

Book your stand now, your competitor has!

twitter.com/eiliveshow


Product Categories



Gala Awards Dinner - May 9
- Receive your award
- Network with industry leaders
- Live entertainment
- Three course meal


- Projector screen of the year
- Projector of the year
- Bracket/rack or mounting product of the year
- Loudspeaker of the year
- Cable solution of the year
- Matrix/signal distribution product of the year
- Multi-room/zone music solution of the year
- Dedicated touchscreen of the year
- TV/Display of the year
- Integrated TV/Display of the year

Project Categories
- Best cinema project
- Best commercial integration project
- Best whole house project
- Multi-dwelling unit (MDU) project of the year

Company Categories
- Distributor of the year
- Manufacturer of the year
- Training initiative of the year

People Categories
- Sales person of the year
- Special recognition award

twitter.com/eiliveshow


Infrastructure Management

East to West

Is East to West your data centre's danger zone? Nathaniel Wallis, security specialist at Axial Systems, discusses how to keep lateral data traffic protected.


Networking and security were once separate IT methodologies. However, as operational technology evolves, the two are beginning to overlap - and it is becoming increasingly necessary to think about the network as a security enforcement platform. Traditionally, networks were constructed on standard building blocks such as switches and routers, and security solutions - perimeter firewalls or intrusion prevention

26 | February 2018

systems - were applied afterwards. IT security departments typically focused on the delivery of time-honoured threat detection methods and perimeter-based security defence mechanisms, as well as incident response and remediation, while networking teams spent time on issues around latency, reliability and bandwidth. Yet today, the move to hybrid networks means traditional approaches cannot cope with the scale, automation requirements or

the rate of change. Most modern networks now combine the use of physical data centres with the virtualised, such as cloud platforms and containerised environments. All of them need at least the same level of security. By basing an approach on security functions like policy enforcement or micro-segmentation rather than point products, and ensuring all the functions integrate into a framework, providers and their customers can deliver a holistic

approach to security that ensures the whole is greater than the sum of the parts, irrespective of where the data or application resides. East-west traffic, which is effectively how applications and systems within the data centre 'talk' to each other, is a neglected security area within many organisations. North-south traffic - the 'in' and the 'out' - is protected; but it is often what is happening internally that poses the biggest threat. More often than not, networks within a data centre have been built 'on the fly'. Consequently, the resulting design is complicated and poorly understood, and massive trust relationships have been built up within it. If someone manages to get into a data centre, or legitimate traffic becomes malicious within the centre, they will use these trust relationships to get to wherever they want. Switches will have administrative credentials which allow them to move onto other routers, which then move onto servers and applications, and the malware spirals out of control. It's important to have an up-to-date map of infrastructure so the network can become better understood and, ultimately, made more secure. Micro-segmentation is then critical to properly control traffic flows within applications and reduce the attack footprint, by ensuring only compliant flows are allowed - and to contain threats in case of a breach. If a network is segmented down to process level and server A can talk to server B but no other, network operators can see that anything else is a violation. The next step is to apply a visual security delivery layer on top of these micro-segments and across the network. This will give all inline tools the ability to feed packets of data in real time, to be stored for replay later or to be fed into analytics

“East-west traffic within the data centre is a neglected security area within many organisations.”

engines. This gives security operations centres (SOCs) a better idea of how security is performing. Additional layers can be applied on top to regulate access, with privileged access management baked into all the end sites. This gives operators far more control over what can talk to what when data is travelling laterally. It also enables the business to detect active breaches within the network, confine them to a secure location, and then easily prioritise and investigate them in order to pinpoint compromised assets for accelerated mitigation and remediation. Businesses must be able to effectively protect their data centres and servers at the periphery, but it's equally critical that if an issue does occur in the

core, they are made aware of the potential dangers across the whole of the estate. The latest solutions give operators this information via a 'single pane of glass'. In other words, the security operations centre can see in real time exactly what is happening, down to process level. By abstracting security policy creation to a centralised point and automating it, businesses can utilise network devices as dynamic security policy enforcers, right down to the point of connection. Embedding security into the network reduces operational overhead, increases visibility and helps generate meaningful intelligence. By standardising security policy, there are fewer errors and less time spent troubleshooting.
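The "server A can talk to server B, and anything else is a violation" model described above amounts to a default-deny allow-list over east-west flows. The hostnames, ports and policy below are invented for illustration - real micro-segmentation is enforced in the hypervisor, switch fabric or host firewall, not in application code - but the logic is the same:

```python
# Illustrative micro-segmentation policy: an explicit allow-list of
# permitted east-west flows; anything not listed is a violation.
# Hostnames, ports and tiers are hypothetical examples.
ALLOWED_FLOWS = {
    ("web-01", "app-01", 8443),   # web tier -> app tier
    ("app-01", "db-01", 5432),    # app tier -> database
}

def check_flow(src: str, dst: str, port: int) -> str:
    """Default-deny: only explicitly allowed lateral flows pass."""
    if (src, dst, port) in ALLOWED_FLOWS:
        return "allow"
    return "violation"  # surfaced to the SOC for investigation

print(check_flow("web-01", "app-01", 8443))  # allow
print(check_flow("web-01", "db-01", 5432))   # violation: web tier
                                             # talking straight to the DB
```

The design choice worth noting is the default: in a micro-segmented network the absence of a rule means deny, which is what turns any unexpected lateral movement into a visible, investigable event.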

Infrastructure Management

Keep the Pace

Faisal H. Usmani, business development and strategy lead, communications, at Cyient Europe, explains how SDN can help keep pace with ever-changing network demands.


Service providers are under mounting pressure to constantly enhance networks and keep pace with persistent growth in capacity demands. To do so, many regularly go out to market seeking new advancements in networking technology and associated next-generation management tools, assessing what they can apply to improve their networks. At a basic level, service providers can opt to meet elevated capacity demands through the addition of cabling and active equipment. However, consumers are increasingly mobile, demanding 'always-on' connectivity and greater speeds. The number of connected services in the home continues to grow; individuals can now be linked to utility providers, energy giants, insurance companies and goods manufacturers through their network. While this dramatically overhauls the consumer experience in the home, it places huge pressure on the reliability of the infrastructure behind these services.


Coping with growing capacity demands

Service providers face a dilemma: either add capacity to all impacted nodes to meet anticipated network demand, or 'move' bandwidth around the network. The former pushes them to create 'over capacity', where capacity at each node is more than necessary, resulting in surplus capacity across the network. The latter is a more complex process, requiring a comprehensive understanding of a network's recent capacity trends. More issues may arise with network-intensive businesses and mature domestic consumers, both of whom are moving their data and IT services into the cloud. This requires service providers to manage data more efficiently to ensure their networking priorities meet agreed business Service Level Agreements (SLAs), and that they provide high customer satisfaction to their demanding user community. One mooted solution, the 'self-optimisation' of networks, enables the management of network capacity, but tends to be

reactive rather than proactive. It can create an environment within a complex network where capacity is in a constant state of flux, as it attempts to manage real-time demands by interacting with other active network equipment. By recognising the shifting demands placed on network capacity by timeframe and location, providers can dramatically improve their ability to efficiently manage capacity, expanding it to match actual demands rather than more generic needs. But this requires a detailed routing framework, known as the Software Defined Networking (SDN) methodology. Through SDN, the network can be dynamically and automatically configured to respond to changing conditions and demands across the network.

Why we need SDN

Successful deployments of SDN have occurred in data centres and for service providers' business routing. It revolves around the concept of a routing table where

specific services can be routed based on pre-defined parameters. For instance, an organisation's cloud-based Customer Relationship Management (CRM) service could be routed over a faster network route during normal office hours (a period of higher demand), and then revert to a standard network after this time. The organisation is assured its mission-critical systems can operate at high speeds at peak times, and the service provider can offer a premium service at a higher cost. Here, the service provider presents one service with two or more possible routing table entries, and the option to switch the entries between one another based on the relevant time parameter (i.e. between peak and off-peak). In the initial stages, both situations are easy to manage, as there is a limited and readily controlled set of network parameters. The concept becomes more complex, however, when switching routes for several services within a larger and more complex network topology.
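The peak/off-peak switching of routing table entries described above can be sketched in a few lines. The service name, route labels and peak window here are invented for illustration; in a real deployment the equivalent entries would be pushed to switches by an SDN controller rather than selected in application code:

```python
# Illustrative time-based route selection: the same service has two
# routing-table entries, and the active one is chosen by time of day.
# Service names, route labels and the peak window are hypothetical.
ROUTES = {
    "crm-service": {
        "peak": "fast-path-via-core",   # premium route, office hours
        "off_peak": "standard-path",
    }
}

def select_route(service: str, hour: int) -> str:
    """Pick the routing entry for a service based on the hour (0-23)."""
    entry = ROUTES[service]
    if 9 <= hour < 17:                  # assumed peak window
        return entry["peak"]
    return entry["off_peak"]

print(select_route("crm-service", 11))  # fast-path-via-core
print(select_route("crm-service", 22))  # standard-path
```

As the article notes, this is trivial for one service and two entries; the management burden comes from doing it for many services across a large topology, which is exactly what a centralised SDN controller is for.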

Integrating analytics

Many service providers now use network analytics to model and accurately contrast the capacity of their network with the amount used at node level within specific timeframes. The resultant analytics models are used as the framework for any SDN implementation, because they give service providers and app developers the insight and ability to develop alternative routing models which match ever-changing network demands. In future, there is undoubtedly a role for the integration of real-time analytics and SDN, allowing the majority of network optimisation to be performed in real time. Of course, this would require a pre-determined set of boundaries for such optimisation

to prevent over-compensation – for example, in the event of an outage or a network burst.

Greater capacity and more control

The introduction of SDN into service providers' networks offers new levels of flexible capacity, provided they model network capacity and develop appropriate SDN scenarios to ensure the ongoing integrity of capacity. They can then refine their networks and facilitate the implementation of network scenarios in anticipation of both scheduled events (such as major sporting or entertainment events) and unscheduled events, and put in place specific network configurations to respond to these situations.

Reducing network complexity
SDN also reduces network complexity by simplifying network management. By abstracting the complex array of intelligence that resides in the network and consolidating it into an SDN controller, you are left with one centralised application that acts as the strategic point for controlling the switches and routers in a network. From there, it is possible to directly control multiple nodes from one source, which in turn makes it easier to route traffic around the network, thanks to a reduction in the number of elements in the network. This means more embedded intelligence can be taken out of the network, analysed, and applied to facilitate more efficient use of capacity on a broader scale.

Giving growth to the IoT
In the Internet of Things (IoT) ecosystem, it's critical that service providers can manage connectivity dynamically. SDN helps by making the introduction of different components (and their services) into the network simple through a series of controllers. Providers can quickly virtualise and set up a service at any point in the network as an IoT service, rather than setting it up in the traditional isolated manner, quickly integrating it with overall network operations. As IoT services evolve and demands for connectivity grow, SDN will continue to act as an enabler behind the scenes. The effectiveness of M2M/IoT services will improve within the core network, simplifying connectivity. Overall, SDN is dramatically changing the way we manage our networks, especially as they become more complex, and SDN deployments will continue to evolve in accordance with the development of their underlying networks. Moving forward, it will be used in ways we can't even imagine yet, as the tools mature and organisations, especially those in the cloud, adapt applications to take advantage of SDN and the opportunities it brings.


Successful Security
Mark Gaydos, chief marketing officer at Nlyte Software, breaks down what it takes to keep your data secure and explains why a multi-faceted approach is the key to security success.


The data centre runs the show. No data centre equals no data or IT-dependent services. Whilst there are a host of vitally important issues that all impact on uptime and the quality of service delivered, security is one in which other departments often take a keen interest. Larger enterprises might have a security operations centre or dedicated team, or the IT team might take the lead over the data centre team or other line-of-business departments within the organisation.


Yet, often security is one of a number of competing priorities – any of which could be considered top, since just one failure can render an organisation unable to deliver. Demonstrating the range of priorities at play, Computer Economics' IT Spending and Staffing Benchmarks study for 2017/2018 describes how 70% of IT organisations reported increased spending on security and privacy. Yet, at the same time, 67% of respondents reported increased spending on cloud applications, and 52% and 51% reported increases in spending on cloud infrastructure and on business intelligence, Big Data and data warehousing respectively. These are all big bets, and each could cause either success or failure if not done right. So, when the data centre manager talks to the board, what should they stress as the most important element of keeping data and data operations secure? All of them. Let's break it down.

Challenges
What do we mean by security? That your data and the services running it are within your organisation's control and no one else's. But also, that your data services are maintained and operational under your control too. Overall, cybersecurity, power, asset management and capacity planning all interrelate and impact on the same fundamental end-results: Is the data centre delivering as expected? Yes or no. On or off. High quality or poor quality?
• Power: With no power your data might be totally secure from outside interference or theft, but you don't have the use of it. If you can't use your own assets because they are not under your control, that's not 'secure' – that's unusable.
• Cooling: Overheating components are components depreciating in quality and lifespan. Ensure that the equipment that forms the jewel in the crown of the data centre assets is well cared for by providing a secure environment for its entire lifespan.
• Rack space: Getting capacity right in a world without a level of intelligence to aid in fine-tuning IT asset optimisation is a losing game. If capacity gets overloaded, the service suffers. Overprovisioning can be very wasteful in terms of costly, unused space, plus the power and cooling. If capacity isn't utilised properly then the business suffers, and in the long term the service may come to suffer too.

Meaningful security can only come from a high level of intelligence, and the appropriate levels of insights from that intelligence. In fact, the data centre is constantly providing reams of intelligence – but sorting, understanding, and using it are a whole new ball game.

The right tools for the job
Today's information technology business leaders require command-and-control insight on all operations that support the data centres housing their critical business infrastructure. They need to monitor, coordinate and optimise multiple interconnected systems to ensure that their data centre operations are running at appropriate levels to prevent failures from adverse external events. The solution is the deployment of sophisticated data centre management systems that address the myriad of issues associated with data centre operations.

"The data centre is constantly providing reams of intelligence – but sorting, understanding, and using it are a whole new ball game."

These data centre information management systems monitor power, cooling, computing resources, security and environmental variables, enabling personnel to efficiently maintain the high performance required for all subsystems in the data centre to work together seamlessly. These systems have come to market at a time when the rapid growth of large commercial data centres has made it imperative to adopt more efficient management techniques. Traditional approaches to data centre management and facility monitoring required manual intervention and collaboration between technical teams. These were inefficient, leading to poor utilisation of resources and, eventually, inefficient data centre operations.

Modern data centre infrastructure management solutions have evolved to automate many of the tasks important to data centre operations. The software manages and shows all physical assets in one place, providing the capability to automate commissioning tasks, capacity planning and various formerly manual actions. This level of software intelligence enables data centres to operate at exceptionally high levels of efficiency.

The Holy Grail for secure service operations could be summed up as monitoring, performance management and real-time reporting. Monitoring enables an effective call to action: where there is a pending issue, due warning can be raised, allowing proactive steps to be taken. Additionally, continuous logging of alerts allows operators to review and improve planning around infrastructure, and improve disaster recovery. Security is an all-encompassing concept when thought of in terms of security of service. If anyone ever says that security is not their job, please correct them. The data centre and the entire business depend on a securely delivered service.
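The monitor-warn-log loop described above can be sketched in a few lines. The metric names, thresholds and readings are illustrative assumptions, not any vendor's actual product logic:

```python
# Minimal sketch of a monitoring loop: compare sensor readings against
# thresholds, raise a warning before failure, and keep a continuous log of
# alerts for later capacity and disaster-recovery planning.
THRESHOLDS = {"inlet_temp_c": 27.0, "rack_power_kw": 8.0, "humidity_pct": 60.0}

alert_log = []  # continuous record, reviewed to improve planning and DR

def check(reading: dict) -> list:
    """Return (and log) alerts for any metric exceeding its threshold."""
    alerts = [f"{metric} at {value} exceeds {THRESHOLDS[metric]}"
              for metric, value in reading.items()
              if metric in THRESHOLDS and value > THRESHOLDS[metric]]
    alert_log.extend(alerts)
    return alerts

print(check({"inlet_temp_c": 29.5, "rack_power_kw": 6.2}))
# inlet temperature raises a warning; power is still within bounds
```

The point of the log is the "call to action" in the text: a pending issue surfaces before it becomes a failure, and the history feeds planning.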


Knowledge is power
Darren Watkins, managing director at Virtus Data Centres, discusses why a lack of understanding when it comes to DCIM could be preventing your business from reaping the rewards.


Data Centre Infrastructure Management (DCIM), which brings facilities and IT management software together within the data centre, is no longer a new concept. Cloud computing, heat densities, consolidation of data centres and power consumption are all driving the global market for DCIM. Add to this an increasing focus on the optimisation of operational costs, enterprise migration into private clouds and data centre virtualisation, and it's perhaps no surprise that the need for DCIM is escalating quickly. But, while we've seen demand for DCIM increasing steadily, there are still millions of racks that have not yet used the technology, meaning that it's simply not reaching its potential. So, why has adoption been slow? And what can the industry do to ensure businesses are benefiting from the rich analytics that DCIM offers?

What's the problem?
Extolling the virtues of DCIM in theory and getting customers to use it in practice are two very different beasts. Although it's a fairly straightforward technology, there is still an awareness problem amongst IT departments. Simply put, if customers don't know what the software is and how it can save on their power usage and overall costs, there's a risk it will be overlooked and under-utilised, as is evident at the moment. Even if the benefits of DCIM are known, barriers to adoption still remain. This is particularly true for older-generation data centres, not originally designed for DCIM, which struggle to install it. The ability to flex power and usage requirements up and down comes into direct conflict with many commercial data centre models, which rely on long-term, costly and inflexible contracts to safeguard their operations. This means that DCIM under these conditions becomes ineffective unless customers have the power to amend their contracted usage commitments to reflect actual real-time usage.


Issues over cost are further complicated by the fact that DCIM spans both IT and facilities – two areas which don't normally overlap. This has been known to create disagreements, for example over whose budget should be used to pay for DCIM, with neither side stepping up to take responsibility.

So, what’s the answer? There are two issues for the industry to solve here. The first is a fundamental one - understanding what DCIM is (not a single piece of software, but a software category consisting of two core building blocks: DCIM monitoring, and IT Asset Management (ITAM)) and determining, as a business, who takes responsibility for the tools. The second is more complex and requires the IT industry to communicate better with its customers. We need to ensure businesses fully understand the benefits of the technology, and why it’s imperative to their data centre strategy. DCIM software monitors all the critical systems in a data centre in real time so users know how to optimise the use of space, power, cooling, and network capacity. What’s more, DCIM monitoring generates an alarm when something is headed for disaster before the catastrophe happens, so changes can be made to reverse the risk. While it’s important for data centre operators to gain access to what’s happening in their facility in real-time, DCIM can also help with future planning. When data centre managers know what equipment they currently have, how much power it’s drawing, and where that power is coming from, amongst other vital information, they’ll be able to determine how much more equipment their facility can handle. And by optimising capacity, they can delay, or altogether eliminate, the need for constructing a new facility.
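The future-planning question above, how much more equipment a facility can handle given what is already installed and drawing power, can be illustrated with a toy headroom calculation. The rack budget and per-device draws are invented figures:

```python
# Illustrative sketch: given each rack's power budget and the devices already
# installed, how much spare capacity remains before new kit (or a new
# facility) is needed?
RACK_BUDGET_KW = 10.0  # assumed per-rack power budget

installed = {"rack-1": [3.2, 2.8, 1.5], "rack-2": [4.0, 4.5]}  # kW per device

def headroom_kw(rack: str) -> float:
    """Spare power available in a rack before it hits its budget."""
    return round(RACK_BUDGET_KW - sum(installed[rack]), 2)

for rack in installed:
    print(rack, headroom_kw(rack), "kW free")
```

This is the sense in which optimising capacity can delay, or eliminate, the need to construct a new facility: headroom that is measured can be used.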

The collaboration story

“We need to ensure businesses fully understand the benefits of the technology, and why it’s imperative to their data centre strategy.”

So, DCIM is important. But its benefits don't just apply to one organisation – they apply to entire industries. In a digital economy, we know that businesses are increasingly relying on their data centres, and so the space, power and cooling demands placed on them have also increased exponentially. To tackle this global problem, data centre providers and customers need to collaborate and examine how they can work together to increase efficiencies – reducing spiralling energy consumption and cost. DCIM is an important way of facilitating this collaboration. Traditionally, data centre providers have been quick to highlight when customers need to buy more capacity, but not as quick to advise when to scale down requirements. DCIM creates this visibility and puts the control firmly back in the hands of the customer – giving them greater visibility into their daily usage and allowing them to manage their capacity in real-time and act on the analytics. Of course, this is only useful if they have the flexibility to scale down their contracts in the event that they're not using the initially agreed amounts. Several innovative providers are already offering this capability and actually using DCIM to create new commercial models, which makes choosing the right intelligent data centre provider more critical than ever.

However, if these hurdles are overcome, we believe that DCIM will cement its place as an essential part of the intelligent data centre. With more customers come more demands, and they will expect their providers to offer the most advanced solutions available.

The value for providers
But then where's the value for providers? If providers are offering an effective DCIM solution, and not charging for it, then not only will this attract CIOs, but also CFOs – both of whom need to be onside when making important IT decisions. CIOs have the expertise, but they need CFOs to believe in and finance their decisions, making the CFO just as vital in some cases. Supplying visibility also creates value for providers – it creates trust and strengthens the customer relationship. Rather than the perceived fear that DCIM reduces the control of operators, it is simply a realigning of focus – giving the customer access to what is rightfully theirs. It's their data and their power consumption; why shouldn't they be allowed to both monitor and control it? Through the use of DCIM, operators will be able to ensure that they only use the amount of power absolutely required by their customers, and the customer will be able to scrutinise this usage. In order to fully benefit from the use of DCIM, data centre providers will need to allow their customers to act on the results and scale their usage up or down accordingly.

So, whilst we believe that there are still hurdles to adoption, the future is bright for DCIM. It offers opportunities for both data centre providers and their customers to get closer to their goals of increased efficiencies and decreased costs. The industry has a lot to gain by embracing DCIM – but it needs to act and collaborate with customers to achieve this.

cloud strategy

Clash of the Clouds
Andrew Brinded, general manager & senior regional director of Western Europe & Sub-Saharan Africa at Nutanix, discusses the pros, cons and considerations when it comes to choosing public or private cloud.


As organisations look to develop and define their cloud strategies, two predominant models have come to the fore: the public or shared cloud, and the private cloud. Despite all the attention that cloud infrastructure has garnered, the definitions of these models remain quite fluid. However, the staunch supporters of each model remain resolute in their backing of one or the other. The resulting 'clash of the clouds' as to which model will reign supreme can be intimidating for enterprise IT professionals seeking to develop the right cloud strategy for their business. While the arguments rage on, it is essential that IT professionals do not lose sight of the bigger issue here. The basic model of the cloud – delivering IT as a service – is going nowhere. Cloud is here to stay, but we shouldn't view the cloud as a destination; it should instead be considered an ongoing journey, rightfully focused on achieving improved business results, thanks to the ease of use, self-service, scalability and agility that cloud affords. Such a shift in mind-set will helpfully move IT's focus towards applications, not simply the underlying infrastructure.

Backing the best of both worlds
For organisations looking to evaluate the competing cloud models, there is a third way. There are indeed unique benefits – and drawbacks – to both cloud models, so why not choose both? Consider building an on-premise or hosted private cloud infrastructure, while also readying your organisation to use public cloud services. But do take note: if you aim to build a true private cloud, you'll need to move away from relying on legacy data centre infrastructure. One common misconception is that private cloud infrastructure simply means virtualisation; while support for virtualisation is an important aspect of a private cloud, it is only one aspect. Unlike traditional architecture, a true private cloud is one that utilises software-defined architectures to integrate compute, storage, networking, security, management resources and virtualisation into a highly scalable system that automates provisioning while delivering user self-service. Such clouds distribute everything, are resilient and, importantly, enable significant automation, which in turn allows scalability. A private cloud should look and act just like the public cloud, and while traditional infrastructure cannot deliver this level of functionality, software-defined infrastructure can.

To buy or to rent, that is the question
After building a private cloud for your business and setting up your organisation to use the public cloud, you'll then need to work out which workloads belong where. One excellent way to determine this is to consider, for each specific application or use case, whether you'd prefer to rent or buy the cloud service. By way of example, if you go away for a city break for a few days, you'd never consider buying a car – you'd opt to rent. On the flip side, in most cases it would be uneconomical or impractical to hire the car you use for the daily commute and shopping, as buying a car would limit costs over time. It's very much the same situation in the world of cloud infrastructure, with public cloud services delivering SaaS being generally better for 'rental' workloads, while private clouds are better for 'owned' workloads. The public cloud is well suited to unpredictable, highly variable, short-term workloads, as you only pay for what you use. But for better cost savings in the long term, a private cloud in which you own your infrastructure is best suited to more predictable and established workloads.
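The rent-or-buy rule of thumb above can be made concrete with a toy break-even comparison. The hourly and monthly prices are invented for illustration, not market figures:

```python
# Toy sketch of the rent-vs-buy decision: pay-per-use public cloud wins for
# short, bursty workloads; owned private capacity wins for steady, long-term
# ones. All prices are made-up illustrative figures.
PUBLIC_PER_HOUR = 0.50    # rent: pay only for hours actually used
PRIVATE_MONTHLY = 180.00  # own: flat cost for the capacity, however used

def cheaper_option(hours_used_per_month: float) -> str:
    """Pick the cheaper home for a workload from its monthly usage."""
    public_cost = hours_used_per_month * PUBLIC_PER_HOUR
    return "public (rent)" if public_cost < PRIVATE_MONTHLY else "private (own)"

print(cheaper_option(120))   # occasional burst workload -> rent
print(cheaper_option(720))   # always-on workload -> own
```

Real decisions also weigh data gravity, compliance and performance, but the break-even logic is the same: the more predictable and constant the usage, the stronger the case for owning.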

Managing mobility

“We shouldn’t view the cloud as a destination; it should instead be considered an ongoing journey.”

As mentioned earlier, we must be mindful that cloud is a journey, with mobility at its core. Of course, a previously unpredictable, short-term workload can become a more predictable, long-term workload. It would therefore follow that the workload should be considered for moving to a private cloud. Conversely, you might consider moving a previously predictable, long-term workload to a public cloud service. One huge challenge faced by organisations is truly appreciating that workloads and cloud services are constantly changing. Organisations that don't appreciate this will soon find themselves spending significant effort and time moving workloads between clouds. To counter this issue, it is essential to ensure that your private cloud has an architecture that includes strong support for public cloud services. In practice, this means that when you move a workload between clouds, you do not have to make application changes; you can easily preserve any application state, configuration and environment requirements, and you can translate a workload's SLAs to its new environment. Such mobility will allow you to quickly and easily move workloads between clouds as your business and workloads change. With both private and public clouds at your disposal, a clear understanding of how to determine which to use for which workload, and an architecture that enables workload mobility, your IT department will be able to economically deliver end-users the ease of use, self-service, scalability and agility that they demand. While the 'clash of the clouds' rumbles on, the organisations that will succeed in today's dynamic, digital business environment are rising above it to a place where they can embrace both clouds as needed.

data centre recovery

Call the DR
In the face of increasing cyber attacks, and in particular the threat posed by ransomware, RTOs and RPOs will come under increased scrutiny this year, says Christophe Bertrand, VP product marketing at Arcserve.


The most effective way to prove an asset's value is to remove it. Recent reports of outages and ransomware attacks have demonstrated that an increasing number of business-critical processes depend on operational data. The Osterman Research survey, 'Understanding the Depth of the Global Ransomware Problem,' found that a quarter of organisations that had their data held to ransom had to stop business immediately, while 43% lost revenue. It is estimated that 64% of data and applications fall into mission- and business-critical tiers. The outage suffered by British Airways' UK data centre in May 2017 grounded flights, impacted 75,000 passengers and was estimated to cost the airline £150 million in compensation.


Growing pressure on RTOs and RPOs Due to the business-critical nature of data, organisations are developing much higher expectations of data centre operators’ abilities to protect and recover their data. As a result, Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs) will come under increased scrutiny over the next 12 months. 2018 could be the year of zero tolerance when it comes to data centre operators’ recovery of data. The latest figures from Osterman Research suggest that RTOs and RPOs will halve in 2018, with almost 50% of C-level executives reporting that, in the event of an outage, no data loss could be tolerated from their most critical applications.

Faced by the threat of business interruption in many forms, the ability to recover access to relevant data is moving up the corporate agenda. In its report, ‘Understanding the Critical Role of Disaster Recovery’, Osterman Research found that although organisations will continue to have high expectations of the speed of data recovery, there will be a marked difference in attitudes, as firms also realise that the relevance of their data (RPOs) is just as important as the time taken to regain access to their information assets.
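The two objectives discussed above can be illustrated with a small worked check: RPO bounds how much data you can afford to lose (the age of the last good backup), while RTO bounds how long recovery may take. All timestamps and objectives here are invented examples, not figures from the research cited:

```python
from datetime import datetime, timedelta

# Invented service-level objectives for a critical application
RPO_OBJECTIVE = timedelta(minutes=15)  # max tolerable data loss
RTO_OBJECTIVE = timedelta(hours=1)     # max tolerable time to recover

# Invented incident timeline
outage_start = datetime(2018, 2, 1, 9, 0)
last_backup = datetime(2018, 2, 1, 8, 50)
restored_at = datetime(2018, 2, 1, 9, 40)

achieved_rpo = outage_start - last_backup  # 10 minutes of data at risk
achieved_rto = restored_at - outage_start  # 40 minutes to restore service

print("RPO met:", achieved_rpo <= RPO_OBJECTIVE)
print("RTO met:", achieved_rto <= RTO_OBJECTIVE)
```

'Zero tolerance' in the text amounts to driving both objectives towards zero, which is why backup frequency and restore speed both come under scrutiny.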

The importance of disaster recovery
When UKFast suffered a power failure on December 12 2017, some customers were disconnected from their data for 11 hours. However, UKFast reported that customers that had put in place adequate disaster recovery (DR) plans were not affected. A growing proportion of decision-makers are recognising the benefits of DR. Two thirds of firms consider DR for critical systems to be 'absolutely essential' to their business in 2018.

The growing ransomware threat
Last year, the most shocking ransomware incident was the WannaCry attack on the NHS, which disrupted more than a third of trusts and led to the cancellation of more than 6,900 patient appointments, according to National Audit Office figures. WannaCry spread to more than 150 countries before being brought to a halt. While the healthcare and financial sectors have been the most heavily impacted by ransomware, other sectors are starting to be targeted by cybercriminals. In June 2017, Maersk, which ships one in seven containers worldwide, suffered a global outage after its systems were infected by the Petya ransomware. Unfortunately, this threat is expected to grow. Cybersecurity Ventures has predicted that ransomware attacks on businesses will increase in frequency to one every 14 seconds by the end of 2019, up from one every 40 seconds in 2017.

In an ideal world, businesses would be able to prevent ransomware before machines are infected. However, given the value placed on operational data, most realise that the best way to recover from an attack is to have recent backups that can be quickly restored. Osterman Research found that 54% of the UK organisations it surveyed had implemented regular cloud-based backups, while 52% regularly carried out backups on-premise to restore data to a known good state. For a long time DR has been a simple risk-versus-economics equation, with businesses performing a balancing act between costs and the ability to recover quickly. Yet last year there was a marked shift, with more than a quarter of firms surveyed willing to accept higher costs for data recovery in five minutes or less. This is because 95% of firms surveyed by Osterman Research suggested that they could tolerate no or minimal data loss from primary applications in the workplace. To achieve targeted RPOs and RTOs, enterprises are increasingly turning to cloud-based DR, with 31% of firms believing this is a better way to tackle ransomware as it reduces the risk of DR infrastructure also becoming infected. Indeed, some 64% of businesses either already have or plan to deploy hybrid DR solutions this year, while nearly one fifth have already moved to cloud-only solutions.

“To achieve targeted RPOs and RTOs enterprises are increasingly turning to cloud-based disaster recovery.”

Georedundancy in the cloud also enables smaller organisations, which might previously have been unable to afford DR, to take advantage of fully-equipped solutions that will enable them to recover quickly from accidental outages or criminal attacks. Based on the value of data to mission-critical operations, it is anticipated that managed service providers will be driven to develop data recovery solutions that can enable near-zero RTOs and RPOs at costs that are affordable to enterprises and SMEs. As part of this shift, we predict that organisations will continue to migrate from on-premise disaster recovery to cloud-based infrastructures, using a combination of physical and service-based solutions. This would seem to be underlined in the latest Disaster Recovery as a Service Magic Quadrant report from Gartner, which predicts that the current market value of $2.02 billion will grow by 85% by 2021.



Shrink-Wrapped
Nick Sacke, head of IoT and products at Comms365, discusses why he believes shrink-wrapped services are the key to realising the vision of IoT at scale.


Analysts continue to talk up the potential of the IoT market – but it is hard to see how 21 billion devices will be connected by 2020 given the current, highly bespoke IoT deployment model. There is simply no way a handful of, albeit large, suppliers delivering highly bespoke solutions can realise the full potential of IoT. Today, the sheer complexity of these long, drawn-out, custom-built IoT developments is massively constraining adoption. Even the much-vaunted 'smart city' projects are losing momentum, with many applications stalled at proof of concept. The potential of IoT will never be achieved until we have ubiquitous delivery – and that means developing solutions that can be deployed to companies of every size by a strong reseller channel. From smart parking to smart warehousing, a new generation of channel-friendly 'IoT as a Service' solutions is set to transform IoT adoption.

Channel reluctance
To date, the channel has had little input into the evolution or adoption of IoT – and for good reason. How can a reseller possibly embrace the full, end-to-end implementation required? With diverse sensor technologies, a lack of network standardisation, large up-front costs and the need for multiple vendors for just one solution, achieving an end-to-end IoT deployment has been deemed too big, too complicated and too high risk. Instead, resellers that have been keen to be first to provide IoT solutions to their customers have rebadged the IoT services of mobile carriers. Unfortunately, not only do these services do little to build on legacy Machine to Machine (M2M) offerings, they don't maximise the true value of the technology. Furthermore, the major carriers are in many cases working directly with enterprises on the largest and most lucrative M2M deployments. So where does this leave resellers in a congested, price-driven market? It is no wonder that IoT has yet to truly take off with resellers. Where is the revenue stream? What is the value of investing in IoT knowledge and expertise when operators are taking by far the biggest slice of the pie? And where does that leave the vision of a connected world – of millions of sensors providing data that can be captured and analysed to drive new efficiencies, cut costs and uncover revenue streams?


As a Service
If the benefits of IoT are to be truly realised, the model has to change. This is where the channel comes in, as the key to offering wide-scale, successful deployment to businesses of every size – channel IoT adoption means developing a better, faster and more relevant IoT model. IoT as a Service, or shrink-wrapped IoT, is different on every level. Based on a proven market need – such as smart parking or smart warehouse operations – IoT as a Service requires limited customisation. A complete IoT smart parking solution, for example, will include sensors, network, data storage, analytics and visualisation, but will be offered as a service that can be purchased as a proof of concept, a first-case deployment or even a full-scale solution. For a reseller, there is nothing to do other than install it – and even that can be outsourced if required.

What is in it for the channel? With this approach, a reseller can become a trusted IoT advisor for the end customer – in a similar manner to the way resellers have embraced unified communications. By adding IoT knowledge and expertise, plus access to several shrink-wrapped IoT solutions, a reseller can take its existing strength in understanding a customer's challenges and create a business case. In addition to advocating the service, a reseller can opt to project manage the whole process, and even support the installation if required. Essentially, the 'as a Service' approach provides resellers with a chance to realise customers' IoT objectives – but without having to undertake any complex, high-risk, bespoke development.

Complete business solution
It is important to note that a shrink-wrapped IoT solution is not just a proof of concept (PoC). There are a number of kits available that enable a simple IoT demonstration – a couple of sensors, a gateway with a wireless protocol to connect to the sensors, and an application. But while this sounds like IoT as a Service, it is not deployable in a live setting – it is just a bench demo that can be used to prove the concept and validate the requirement. The shrink-wrapped IoT PoC kits allow the end user to test IoT for real in several application areas to determine the validity of their business case. True IoT as a Service is different: it includes deployable sensors and viable network connectivity – for example, the fast-expanding LPWAN networking options that can connect a high volume of sensors – as well as the data storage, analytics and visualisation components. Every aspect of this end-to-end, shrink-wrapped model is designed to scale to thousands, even millions, of highly dispersed devices. Plus, it is not simply a conceptual exercise, but a deliverable application based on a proven business requirement – a specific solution that is ready to deploy and provide immediate value. For example, a smart parking system can be used not only to improve traffic management but also to reveal new revenue streams. One car park in Cambridge revealed significant missed parking revenue due to customers not paying the minimum one-hour fee when only making a quick stop. IoT-informed analytics resulted in the creation of lower charges for very short stays – generating £500,000 in additional revenue.
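The car park example can be illustrated with a toy calculation of the revenue a short-stay tariff recovers. All fees, stay durations and the 20-minute cutoff are invented for illustration and are not from the Cambridge case:

```python
# Toy sketch of the parking analytics: short stays that previously paid
# nothing (drivers left rather than pay the one-hour minimum) become revenue
# once a cheaper short-stay tariff exists. Figures are invented.
HOUR_FEE = 2.50
SHORT_STAY_FEE = 1.00  # hypothetical tariff for stays under 20 minutes

stays_minutes = [12, 45, 8, 70, 15, 30]  # sensor-detected stay durations

def revenue(with_short_tariff: bool) -> float:
    """Total takings for the stays above, with or without a short-stay tariff."""
    total = 0.0
    for m in stays_minutes:
        if m < 20:
            # previously these drivers left without paying at all
            total += SHORT_STAY_FEE if with_short_tariff else 0.0
        else:
            total += HOUR_FEE
    return total

print(revenue(False), "->", revenue(True))  # 7.5 -> 10.5
```

Scaled up over a year of sensor data, this is the kind of analysis that turned unpaid quick stops into the additional revenue described above.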

IoT ecosystem

“IoT will only become mainstream if the ‘as a Service’ model is adopted.”

Of course, the sheer logistics of implementing and supporting IoT en masse is daunting for any organisation – which is why the IoT as a Service model relies on an ecosystem of expert companies. Just as the LoRa Alliance is a group of small and large companies successfully working together to deliver LPWANs globally, the IoT ecosystem will drive industry standards for networks, sensors and best practice deployment. The new generation of IoT as a Service providers will use this ecosystem to ensure resellers have full and complete access to the expertise and capacity required – from sensor manufacturers onwards – to deliver IoT at scale. And this is key. IoT can deliver efficiency, cost savings and revenue generation – but it will never inspire the trillions in investment predicted by the analysts if every deployment is bespoke. IoT will only become mainstream if the ‘as a Service’ model is adopted. In addition to building on proven, business-driven applications, shrink-wrapping will release IoT from the constraints of expensive, bespoke projects and provide the channel with an immediate opportunity to explore IoT’s compelling potential revenue streams.

February 2018 | 39

G.fast technology

Protecting the highway to the cloud

There is no question that G.fast technology is speeding up access to the cloud. But how does it work? Phillip Havens, principal engineer of standards and applications at Littelfuse, explains all and highlights the circuit protection challenges that come with it.


Because a growing percentage of both business and personal information is stored in the cloud, it becomes more important than ever to ensure fast, reliable access to this information. Telephone-service providers have a strong financial incentive to offer packages of services that can compete with those once available only through cable companies. In other words, they want to deliver voice, data, video and internet connectivity to customers simultaneously and seamlessly.

G.fast technology for economical speed

Fibre networks have begun bringing high speed connectivity to customers across the globe, but for telecom providers to reach into customers’ premises, they need to use the copper wiring they already have in place. Doing so will require G.fast technology, which allows customers to obtain fibre-like access speeds as telecom providers phase in their fibre deployments. Because of the enormous potential for expanding broadband coverage more economically, some industry experts are forecasting that the global market for G.fast chips will grow to $2.9 billion annually.

Here is how G.fast makes economical high speed connectivity possible: the telecom company installs fibre to a remote terminal (also known as fibre to the node, or FTTN), then branches out through the neighbourhood over ‘the last mile’ to the customer premises using the copper-wire infrastructure that’s already in place. G.fast technology employs a wide frequency bandwidth (up to 106MHz, with the potential of going as high as 212MHz) to deliver the voice/data/video/internet to the subscriber. Multipoint FTTN offers telecom companies a far more economical way to deliver high speed data to the customer. There’s no need to send someone out to the neighbourhood for every new subscriber; subscribers self-install the new modem in minutes and plug it into their own power system.

G.fast circuit-protection challenges

For high-bandwidth lines such as G.fast, the capacitance of any circuit-protection components placed on the line can degrade the signal, reducing its rate and reach. But G.fast modems and circuitry in the node can’t be left unprotected from lightning-induced surges. Although customer-premises equipment (CPE) designers have three basic circuit-protection options – gas discharge tubes (GDTs), transient-voltage suppressor (TVS) diode arrays and protection thyristors – whatever they choose must allow their design to comply, at a minimum, with the surge requirements of TIA-968-B (formerly known as FCC Part 68).

Weighing up the options GDTs, TVS diode arrays and protection thyristors each have their advantages and disadvantages when it comes to circuit protection.

GDTs

The advantages of GDTs include surge-current ratings as high as 20kA and capacitance ratings as low as 1pF with a 0V bias. They are typically used for primary protection thanks to their high surge rating, but their low interference with high frequency components sometimes makes them a possibility for high speed data links. However, they also have disadvantages for G.fast applications, including an excessively high initial voltage threshold – which means they may fail to activate at a threshold low enough to protect the circuit when a surge that exceeds the system’s normal operating voltage occurs. Other disadvantages include performance characteristics that can change when the device is installed in dark locations, a relatively large footprint and thermal accumulation during power faults.

TVS diode arrays

TVS diode arrays are clamping-type components that offer low voltage-threshold turn-on values. Because of their clamping characteristics, however, they dissipate higher power levels and therefore must be physically larger to achieve surge ratings similar to those of thyristor crowbarring components. This physically larger silicon package results in higher off-state capacitance values that could be incompatible with high bandwidth signalling.

Protection thyristors

A protection thyristor is a PNPN component that can be thought of as a thyristor without a gate. When a transient exceeds its peak off-state voltage (VDRM), the device will clamp the transient voltage to within its switching voltage (VS) rating. Then, once the current flowing through it exceeds its switching current, it will crowbar and simulate a short-circuit condition. When the current flowing through it falls below its holding current, it will reset and return to its high off-state impedance. The advantages of next-generation protection thyristors for this application include a fast response time, stable electrical characteristics, long-term reliability and low capacitance. And because they are crowbar devices, surge voltages cannot damage them.
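The clamp/crowbar/reset behaviour described above can be sketched as a simple state machine. This is an illustrative model only – the class name, the parameter values and the simplified electrical behaviour are invented for the sketch and are not taken from any Littelfuse datasheet.

```python
class ProtectionThyristor:
    """Toy state machine for a gateless PNPN protection thyristor.
    All threshold values are illustrative, not from a real device."""

    def __init__(self, vdrm=58.0, v_switch=77.0, i_switch=0.8, i_hold=0.15):
        self.vdrm = vdrm          # peak off-state voltage (V)
        self.v_switch = v_switch  # switching voltage rating (V)
        self.i_switch = i_switch  # switching current (A)
        self.i_hold = i_hold      # holding current (A)
        self.crowbarred = False   # True = simulating a short circuit

    def step(self, line_voltage, line_current):
        """Return the voltage seen by the protected line driver."""
        if self.crowbarred:
            if line_current < self.i_hold:
                self.crowbarred = False  # current below holding: reset
            else:
                return 0.0  # still a near short circuit: surge diverted
        if line_voltage > self.vdrm:
            # Transient exceeds VDRM: clamp within the switching voltage,
            # then crowbar once the current exceeds the switching current.
            if line_current > self.i_switch:
                self.crowbarred = True
                return 0.0
            return min(line_voltage, self.v_switch)
        return line_voltage  # normal operation: high off-state impedance


t = ProtectionThyristor()
print(t.step(12.0, 0.01))   # normal signal passes: 12.0
print(t.step(400.0, 30.0))  # lightning surge: crowbars, 0.0
print(t.step(12.0, 0.01))   # current below holding: resets, 12.0
```

The final call shows the automatic reset the article describes: once the surge has passed and line current drops below the holding current, the device returns to its high-impedance state and the modem carries on running.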

“Industry experts are forecasting that the global market for G.fast chips will grow to $2.9 billion annually.”

The latest protection thyristors are designed to protect telecommunications equipment functionally to the high surge-level requirements of GR-1089, as long as the device is correctly located in the circuit between the transformer and the DSL driver; the transformer attenuates the surge. The component can also be placed on the line side of the transformer if there is sufficient impedance between it and the entry point (typically an RJ11 connector), such as the high-pass filters implemented in these types of applications. Another advantage of the latest protection thyristors is their high surge rating (minimum 30A), which provides excellent protection for the modem when a lightning-induced surge races down or across the tip-and-ring pair. The crowbarring-type component will look like a short circuit that diverts the surge current away from the line driver, preventing it from being damaged. The thyristor automatically resets once the surge event has passed, and the modem continues to run.

Conclusion

Compared with current GDTs and TVS diode arrays, the latest protection thyristors – offering the most advanced crowbarring circuit protection, combined with G.fast technology – are on the way to making access to the cloud faster, easier and more reliable for both telecom companies and their business and residential customers.

Projects & Agreements

Iceland Telecom and Sensa partner with Verne Global

Verne Global, Iceland Telecom and Sensa are joining forces in a new partnership to bring the best networking and hosting facilities in Iceland to the global market. Iceland Telecom intends to integrate four of its six hosting halls into Verne Global’s data centre campus, which already operates the biggest data centre in Iceland. The combined assets provide the ideal foundation for Sensa to expand and provide enhanced hosting and professional services in the rapidly changing data centre space.

The CEO of Iceland Telecom, Orri Hauksson, commented that it is becoming more and more pressing for organisations around the world to process and leverage data assets. “We are preparing for the future and by collaborating with Verne Global, we are strengthening our business relationships and as a result the group as a whole,” he said. “Now we will have an operational base in the highest quality and most scalable data centre in Iceland, and in turn we will ensure clients in the facility get access to the most robust telecommunications system in Iceland and redundant high-speed connections into and out of the country. The collaboration basically means that now we are expanding the options we have for our customers, we possess the latest technology and we are prepared for the ever-changing hosting and data processing market,” added Orri.


Infotecs and cloudKleyer forge strategic cloud and cyber security partnership to prevent cyber attacks

Infotecs, the international cyber security and threat intelligence provider, and cloudKleyer, experts in cloud services, have reached a strategic partnership in the field of secure cloud services and cyber security. The importance of transmitting and storing data securely in the cloud keeps growing, and traditional firewalls and security products don’t provide enough security in multi-cloud architectures and hybrid IT configurations. That’s why a new generation of security solutions is necessary to respond to increasingly complex attacks. According to a PwC study, only 37% of all businesses have a response plan for a cyber attack.

The ViPNet Threat Intelligence Analytics System (TIAS) developed by Infotecs is a machine-learning software tool that automatically recognises and analyses attack trends and patterns based on internet data. The software issues warnings and behaviour recommendations for cyber threats in advance, based on predictive analytics algorithms, which allows attacks against client data in a cloud environment to be repelled.

Josef Waclaw, CEO of Infotecs, said, “With our software, we are able to proactively recognise threats and respond to them. It helps reduce the number of so-called false positive incidents and keeps related security staff workload to a minimum. Teams can concentrate on high priority risks and thus perform their work more efficiently.”

Snowflake teams with MicroStrategy to deliver fast access and insight to enterprise data

Snowflake Computing and MicroStrategy have announced a technology partnership to serve joint customers with a flexible and highly scalable cloud-built data warehouse, together with modern business intelligence and predictive analytics applications. Together, Snowflake and MicroStrategy help customers take full advantage of the cloud and modern, sophisticated analytics to get fast answers to their toughest data questions. The integration of MicroStrategy and Snowflake unifies data across the enterprise, giving customers a single version of the truth. The power, simplicity and high scalability of Snowflake take the complexity out of fast, easy access to massive amounts of structured and semi-structured enterprise data, empowering business analysts to make data-driven decisions in less time.

Sharethrough, a joint Snowflake/MicroStrategy customer, has streamlined and simplified its analytics pipeline and made data more accessible to users making critical business decisions. Queries that previously took more than a weekend now run in 30 seconds, and reports that once took 45 minutes now complete in a few seconds, requiring the work of just one business professional. Sharethrough’s senior staff engineer, David Abercrombie, said, “We moved our core warehouse to the cloud with Snowflake and brought in MicroStrategy to make data easy to consume for non-technical business users. We’re now running large reports in less than four seconds and our analysts can make better decisions based on much richer data.”


Mercer and Fuel50 alliance to bring cloud-based career pathing solution to employers and employees

Mercer and Fuel50 have announced a strategic alliance bringing together Mercer’s career framework methodology and consulting expertise with Fuel50’s Career Pathing software to advance the future workforce. This alliance further enhances the Mercer Digital suite of offerings.

“With estimates that only 50% of employees’ skills today will be relevant in 2020, current jobs will require new competencies and recruiting for existing roles will be more difficult. Our strategic alliance with Fuel50 prepares for the future workforce by offering employees a meaningful approach to career planning and by providing organisations with an effective digital process to drive workforce engagement and retention,” said Ilya Bonic, senior partner and president of Mercer’s Career business.

Fuel50’s Career Pathing solution enables employees to navigate career progression opportunities within their organisation. This is done by providing a more intuitive assessment of employees’ current skills and interests, providing talent-matched pathing as well as opportunities for career-growth experiences and job openings in the organisation, assisting hiring managers with talent planning, and incorporating workforce planning intelligence for employees and managers. Anne Fulton, founder and CEO of Fuel50, said, “We’re creating an easy, compelling career experience for employees to grow their skills and talents, while allowing employers to grow, retain, and engage their own internal talent market.”

Leviton completes integration of Brand-Rex

Leviton has announced that its Brand-Rex subsidiary will change its name to Leviton. A manufacturer of network cable infrastructure for Europe, the Middle East and Asia, Brand-Rex has been a part of the Leviton Network Solutions division since it was acquired in 2015. With the full integration of the organisation, products and strategic direction, this change provides customers with a more seamless message and experience.

“Both Leviton and Brand-Rex are recognised leaders in fibre and copper network infrastructure, and now we are one business and one brand with global capabilities,” said Ross Goldman, executive vice president and general manager of Leviton Network Solutions. “This integration allows us to provide a more consistent customer experience.”

Leviton also manufactures high performance specialty wire for military, rail, automotive and aerospace applications at its Leigh, England factory, and these solutions will retain the Brand-Rex name as a product family identifier. “Over the past two years under Leviton, Brand-Rex accelerated its pace of innovation and made significant investments, and now as Leviton we manufacture a complete range of end-to-end systems,” said Ian Wilkie, managing director of the Leviton Network Solutions European headquarters. “The website, product documentation and packaging will change as we continue to progress as part of Leviton.”

UK colocation startup deploys hybrid cabling system from HellermannTyton

IP House, a UK-based startup and supplier of high performance data centre colocation services, has partnered with UK cabling company HellermannTyton to deploy its RapidNet hybrid cabling system throughout its London facility. Located on the edge of the city’s financial district and close to Canary Wharf, IP House aims to serve customers in the finance, gaming and online gambling industries, whose businesses depend on high availability, high performance and ultra-reliable colocation services delivered in accordance with ambitious and strictly observed Service Level Agreements (SLAs).

RapidNet is a British-manufactured, pre-terminated, pre-tested cabling infrastructure solution that enables copper and fibre cables to be terminated on the same panel, thereby saving space in data centres. Freeing up rack space for deployment of additional IT equipment is a primary concern for colocation providers such as IP House. “The beauty of the hybrid approach is that customers can utilise fibre, copper or a combination of both to be terminated on the same 1U unit,” said David Gagel, sales director at HellermannTyton. “It means less space is required for connectivity infrastructure, which in turn leaves more room for customers and revenue generating IT equipment in the data centre whitespace.”


Arrow extends distribution agreement with Hewlett Packard Enterprise (HPE) to the UK

Arrow Electronics has signed an agreement with Hewlett Packard Enterprise (HPE) for the distribution of HPE’s data centre and hybrid product portfolio in the United Kingdom. Arrow is already a value-added distributor of HPE products and services in nine countries across EMEA. Through its resellers in the UK, Arrow will offer HPE’s compute, storage, data centre networking and associated Pointnext services, including the Nimble and SimpliVity technologies that Arrow has distributed successfully for a number of years.

Dan Waters, director of solutions at Arrow ECS UK, said, “Arrow is solutions centric in its approach, and the addition of HPE’s portfolio brings a huge opportunity for Arrow and our channel customers. HPE’s portfolio further enables our customers to add incremental business across solution practices in business intelligence and analytics, data management, IoT, and next-generation infrastructure technologies. The collaboration will further expand the successful work we’ve already been doing with Nimble and SimpliVity.”

“By adding Arrow to our UK HPE distribution channel, we are able to reach new customers looking to optimise their traditional IT infrastructure with secure, flexible solutions ready for the hybrid world,” said Mark Armstrong, HPE UK&I vice president of Channels and Alliances.


AAF International joins forces with trade organisation Dutch Data Centre Association

AAF International, a manufacturer of air filtration solutions, has announced its partnership with the Dutch Data Centre Association (DDA), the trade organisation that unites leading data centres in the Netherlands and promotes the data centre sector to government, media and society. Through the partnership, both parties aim to further strengthen data centre performance.

“Any business would agree that advanced, smoothly running equipment is essential for its performance,” stated Stijn Grove, director at DDA. “The capital-intensive data centre industry in particular is highly dependent on its machinery, and a longer lifespan naturally results in lower costs. Air filtration plays a vital role when it comes to durability, as it prevents dust and harmful particles from damaging equipment. We therefore gladly announce AAF International, a global leader in the area of air filtration, as our new partner.”

Stefan Berbner, chief operating officer Europe at AAF Europe, added, “AAF has an in-depth understanding of the challenges and opportunities for data centre environments. With our new intelligent data tools we offer data centre customers in-depth understanding of their air handling systems’ performance and help them make the right decisions to save time and money while also reducing risk.”

ProfitBricks chooses Cloudian HyperStore for S3-compatible and GDPR-compliant storage

Cloudian has announced that German IaaS specialist ProfitBricks has selected Cloudian HyperStore to add object storage to its cloud computing portfolio. With Cloudian HyperStore, ProfitBricks now offers an easy-to-scale solution for customers who require fully S3-compatible cloud storage that is compliant with GDPR.

ProfitBricks selected Cloudian after an extensive evaluation of multiple object storage vendors. Key Cloudian advantages included its modular scalability, which allows capacity to be expanded to a hundred petabytes or more without disruption. Cloudian’s REST API also allowed for easy integration with ProfitBricks’ automated processes that accelerate routine management tasks. Because ProfitBricks’ registered office and data centres are entirely within German borders, its customers are now able to freely integrate public cloud into their storage portfolio while remaining compliant with strict German data privacy standards as well as with upcoming GDPR requirements.

Cloudian HyperStore provides a highly compatible storage platform for unstructured data consolidation, supporting use cases that include data protection, network attached storage (NAS) offload, media archiving, bioinformatics, artificial intelligence (AI) and more. HyperStore’s cost effective entry point and modular scalability make it easy for enterprises and CSPs to get started and then grow seamlessly with expanding demand.


Converging Data to expand following UK Steel Enterprise investment

Converging Data, a dynamic Yorkshire-based digital disruptor, is on track to increase its workforce and broaden its services over the next six months following a £50,000 investment from UK Steel Enterprise (UKSE). A specialist in data analytics and cyber security within the financial services, healthcare, pharmaceutical and supply chain logistics sectors, Converging Data allows organisations to harness the value of their existing data using the Splunk machine data platform. The investment from UKSE – a subsidiary of Tata Steel committed to investing in businesses looking to grow – was awarded in November 2017 to assist with job creation and business expansion. The company was established in Sydney, Australia in 2013; UK lead Neil Murphy founded its Barnsley site in November 2015 and continues to work closely alongside the overseas division.

Speaking of the investment deal, Neil Murphy commented, “The funding that we have received from UK Steel Enterprise is vital to our continued growth and expansion as we continue to drive the business forward. Allowing us to appoint three new sales specialists, the funding will also ensure that we acquire the technical expertise required to support our services, furthering our national and international scope.”

Cisco completes acquisition of BroadSoft

Cisco has announced the completion of its acquisition of BroadSoft. BroadSoft accelerates Cisco’s cloud strategy and collaboration portfolio by adding the industry’s leading cloud calling and contact centre solutions to Cisco’s calling, meetings, messaging, customer care, hardware endpoints and services portfolio. More and more businesses expect fully featured calling, meeting, messaging and contact centre solutions with the ability to deploy them flexibly – on premises, in the cloud or as hybrid solutions – to leverage existing investments. By combining BroadSoft’s open-interface, standards-based solutions, primarily delivered via service provider partners, with Cisco’s existing portfolio, the combined company will offer best-of-breed solutions for businesses of all sizes, delivered through VAR and service provider partners. Together, Cisco and BroadSoft will deliver a full suite of rich collaboration experiences to power the future of work.

Former BroadSoft CEO Michael Tessler and his organisation are joining Cisco’s Unified Communications Technology Group, led by vice president and general manager Tom Puorro, under the Applications Group led by Rowan Trollope.

Bisnode Finland achieves radical IT cost savings with new infrastructure solution from Proact

Data centre and cloud service provider Proact has been selected by Bisnode, a European business intelligence provider, to deliver a new hyperconverged infrastructure solution to improve its IT operations in Finland. For Bisnode Finland, managing and processing huge amounts of data is at the heart of daily operations. When the company decided to update its business-critical database environment, the ambition was to migrate the current multi-cloud IT infrastructure to a single, consolidated platform. The aim of this project was to simplify management, achieve outright cost savings and form a solid IT roadmap for the future.

After analysing Bisnode’s requirements, Proact concluded that a hyperconverged infrastructure (HCI) solution would be an excellent fit from both a technical and a business stance. Hyperconverged solutions feature highly integrated architecture that can be scaled easily while taking up minimal data centre space. Bisnode was impressed by Proact’s design and chose to partner with the IT transformation specialist, opting to replace its existing environment with a single, integrated HCI system which delivers all compute capacity while also managing storage, backup and networking. Besides delivering the solution’s enterprise-class hardware and software, Proact also provided implementation services. The new HCI solution delivers an immediate, substantial operating cost reduction for Bisnode, resulting from energy and space savings as well as efficiencies achieved by automation and ease of management.



ITV employs Spectra Logic archive solution for long-term preservation of digital content

Spectra Logic has announced that ITV has selected two Spectra BlackPearl converged storage systems and two Spectra T950 tape libraries to protect and preserve the organisation’s digital assets long-term. The solution enables ITV to send digital assets to dispersed data centre locations on differing media types, to assure the ultimate safety and security of its content with a genetically diverse data preservation strategy.

ITV is an integrated producer-broadcaster that creates, owns and distributes high quality content on multiple platforms. The network produces massive amounts of digital content that needs to be stored for many years. ITV was looking for a solution that was high-capacity, durable and scalable to support its current needs and future growth. ITV also required a system that was non-proprietary, open standard and highly flexible, so that several creative departments within ITV could easily access and move content to and from the organisation’s archive.

Brian Grainger, chief sales officer at Spectra Logic, said, “The combination of Spectra’s BlackPearl and tape libraries streamlines ITV’s digital workflow, eliminating costly middleware and simplifying the management of their assets. It keeps their content safe and secure, while still allowing creative departments to quickly access data as needed.”

The Bunker joins the Cyberfort Group

Data centre and managed service provider The Bunker is the first business to become part of the Cyberfort Group. Following investment into The Bunker from Palatine Private Equity in July 2017, Cyberfort has been created to bring together leaders in the field of data security and create a truly end-to-end security proposition. The Bunker will be joined by other businesses in the coming months, through both acquisition and organic creation. This week, The Bunker rebranded and launched a new customer-optimised website as it continues to increase its market share in regulated industries, including fintech, healthcare and the public sector.

The Bunker runs two of the most physically secure data centres in the UK. Purpose-built to protect people in the event of a nuclear attack, these former command and control bunkers – acquired from the Ministry of Defence and US Air Force – now protect data from every potential threat that could compromise availability and integrity. They are armoured and nuclear bomb-proof.

Andy Hague, CEO at Cyberfort Group, said, “Cyberfort will bring together the very best in data security and we’ve started with the UK’s most physically secure data centre. The Bunker rightly has a reputation for long-term customer relationships and a strong security-centric offering, and provides us with a world-class base to build from.”

Wincanton and Virtualstock partner to provide advanced Supplier to Customer service

Wincanton, a British logistics company, and Virtualstock, a UK-based Software as a Service (SaaS) digital supply chain technology provider, have announced the details of a new strategic partnership. This move is just one element of Wincanton’s recently announced eFulfilment service, which includes partnerships with a number of innovative British technology companies. Virtualstock’s platform delivers integrated ‘Supplier to Consumer’ (S2C) fulfilment functionality, allowing one retailer to sell another retailer’s or supplier’s goods without ever having to stock or deliver them, driving significant opportunities for revenue growth while protecting margins by mitigating the impact on operating costs. Allied with Wincanton’s collaborative logistics capability, the two companies can rapidly transform a retailer’s ability to respond quickly to an ever-changing market.

Paul Durkin, director for home and e-commerce at Wincanton, said, “Rapid growth in online sales, coupled with new players in the technology market, is fundamentally changing the end customer experience for the better. This is putting retailers under real pressure to improve their customer service levels and, critically, to increase the breadth of their offer.

“This new partnership with Virtualstock helps Wincanton’s retail customers to respond to this challenge. By integrating Wincanton’s recently announced eFulfilment service with the Virtualstock platform, we’re able to expand retailers’ portfolios without them having to hold stock. Instead, Wincanton takes care of all the logistics, in a seamless and integrated way.”


Redgate Software partners with Blue Turtle to introduce advanced database capabilities to South Africa

Blue Turtle Technologies, one of South Africa’s enterprise technology management providers, has partnered with Redgate to introduce its database development and deployment tools to customers and resellers. Redgate offers many of the industry-standard tools for SQL Server database development, which are used by 91% of companies in the Fortune 100. Because they plug into and integrate with the same IT infrastructure used for application development, they enable companies to introduce advanced development practices like DevOps faster and more easily. This is where Redgate partners step in because, in an increasingly complicated IT environment, companies often need more than just software; they also require advice and help with the purchase, installation and customisation of the software, as well as training.

An added attraction for both Redgate and Blue Turtle is Microsoft’s implementation of Azure data centres in Cape Town and Johannesburg to deliver a range of cloud services. By providing more reliability, faster speeds and lower latencies than data centres based in Europe or America, they will allow companies to explore cloud services like Azure without the penalty of international connectivity costs or data protection concerns. Redgate’s offering for Azure databases ideally positions Blue Turtle to help companies explore the advantages of Azure as soon as the data centres come online.

Analytic Engineering powers AI-driven decision support system development at Verne Global

Verne Global has announced that Analytic Engineering, a German company pioneering the use of artificial intelligence (AI) in decision support system engineering, is moving its graphics processing unit (GPU) infrastructure to Verne Global’s Icelandic campus. The move enables Analytic Engineering to leverage the dedicated compute environments at Verne Global, which are specifically designed to support HPC and intensive computing applications. Analytic Engineering utilises large-scale combinatorics, discrete and continuous optimisation, and finite element simulations in its development process, which required a specialised data centre that could accommodate the niche design and power requirements of these highly intensive computing applications.

“At Verne Global’s campus, we can grow our business faster and apply more compute resources to our programs than at any other data centre that we evaluated,” said Tobias Seifert, CEO at Analytic Engineering. “This is a critical competitive advantage to us, as we look to deliver highly complex software solutions that enable our customers to iterate faster through applications driven by AI and machine learning.” According to Tobias, many data centre operators have been slow to prepare their business models and campus infrastructure to accommodate the rapid advances in computational loads and processing power.


Switch Datacenters launches global licensing model for rapid data centre deployment

Switch Datacenters has launched its wholesale Data Centre as a Service programme, enabling rapid data centre deployment on a global scale. The programme gives organisations the opportunity to license Switch Datacenters’ patented data centre technologies and obtain an integrated, full-service data centre infrastructure package with highly energy-efficient cooling (calculated pPUE of 1.03-1.06), modular power infrastructure and racks included.

Heavy investment in R&D by Switch Datacenters has resulted in state-of-the-art data centre infrastructure featuring Dutch-engineered, patented indirect adiabatic cooling technologies; highly modular, and therefore scalable, power supply solutions; and (remote) data centre management via custom DCIM software. Lately, substantial effort has gone into bundling these R&D efforts into an integrated offering. The result? A highly energy-efficient, fixed-quality data centre design with a calculated pPUE of between 1.03 and 1.06, utilising pre-fabricated components to reduce time-to-market. The actual pPUE figure will depend largely on the climate where the data centre is located.

Moreover, this data centre infrastructure is OCP-ready, meaning it is suitable for open rack systems based on Open Compute Project (OCP) principles. Data centres delivered through the programme are built in the Netherlands, then shipped to worldwide locations depending on customer requirements. The built-to-suit design has already been deployed for IBM in the Netherlands, for example, and would be a good fit for a wide range of deployments, large-scale as well as small-scale, with potential electrical loads from five to 100MW. The newly launched Data Centre as a Service programme is an end-to-end solution, including racks and the deployment of security technologies on site. Automated remote data centre management tools allow data centres on almost any scale to run with little onsite engineering support, although the programme also provides on-site engineering capabilities to ensure deployment quality assurance and ease of operation.
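To see what a quoted pPUE range like 1.03-1.06 means in practice, here is a minimal worked example. The metric definition (IT load plus infrastructure overhead within the measurement boundary, divided by IT load) is standard; the kW figures below are purely illustrative, not Switch Datacenters’ own measurements.

```python
def ppue(it_load_kw: float, overhead_kw: float) -> float:
    """Partial Power Usage Effectiveness for a given measurement boundary.

    it_load_kw  -- power consumed by IT equipment inside the boundary
    overhead_kw -- cooling/power-distribution losses inside the boundary
    """
    return (it_load_kw + overhead_kw) / it_load_kw

# A hypothetical 1,000 kW IT load with 40 kW of cooling and
# distribution overhead sits in the middle of the quoted range:
print(round(ppue(1000.0, 40.0), 2))  # 1.04
```

At the low end of the range, the same 1,000 kW IT load would leave only about 30 kW for all cooling and power losses, which shows how aggressive a 1.03 figure is.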

xDSL protection from Littelfuse

G.fast has a targeted data rate of 1Gbps over 100m of single twisted pair (24AWG/0.5mm) cable using DSL-like technology. Its TDD (Time Division Duplex) signalling is a major difference from the existing FDD (Frequency Division Duplex) DSL signalling. G.fast bandwidth will extend up to 106MHz (with the potential of going as high as 212MHz), with the start frequency ranging from 2.2MHz up to 30MHz in an effort to avoid interference with existing xDSL services. G.fast may also employ ‘notching’, suppressing carriers at specific individual frequencies to avoid clashing with local RF services.

The G.fast signal amplitude is very low compared with existing xDSL services, so the varying voltage across the SIDACtor component is very low. This results in imperceptible capacitance variance of the overvoltage protection (OVP) component. Rate and reach testing has shown an acceptable loss of less than 0.2dB with the DSLP0xx0T023G6RP component included at the tertiary position. Additionally, the flow-through layout of this component reduces the impedance-mismatching ‘stub effect’ caused by non-flow-through PCB trace connections and makes for an easier PCB design. The small SOT23-6 footprint conserves valuable PCB real estate.

The IPP 8/20µs surge rating of the DSLP0xx0T023G6RP series is a minimum of 30A, with a typical IPP rating of 35A for this waveshape. This should be sufficient for even the most severe exposure applications. The ‘Bias–’ lead can be connected to the line driver ground with the ‘Bias+’ lead left open, so the solution provides both differential and common mode protection. Both ‘Bias–’ and ‘Bias+’ leads can be left floating for differential-only protection. Finally, for capacitance-variance-sensitive applications, the ‘Bias–’ and ‘Bias+’ leads may have the appropriate polarity voltage (< VDRM) applied to further minimise any negative capacitance effects.
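To put the quoted 0.2dB insertion-loss figure in context, here is a short sketch using the standard voltage-ratio definition of decibels. The conversion formulas are textbook; the framing (treating the loss as a simple amplitude ratio) is an illustration, not Littelfuse test methodology.

```python
import math

def insertion_loss_db(v_in: float, v_out: float) -> float:
    """Insertion loss in dB from input/output signal amplitudes."""
    return -20.0 * math.log10(v_out / v_in)

def amplitude_ratio(loss_db: float) -> float:
    """Surviving fraction of signal amplitude for a given loss in dB."""
    return 10.0 ** (-loss_db / 20.0)

# A 0.2dB loss leaves roughly 97.7% of the signal amplitude intact,
# i.e. under a 2.3% reduction -- small enough to be acceptable for
# rate and reach testing.
print(round(amplitude_ratio(0.2), 4))
```

This is why a sub-0.2dB figure at the tertiary protection position is presented as effectively transparent to the G.fast signal.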


Innovative product launch has its finger on the pulse

PowerON has launched its most innovative suite of products to date, PowerON Pulse, which enables organisations of all shapes and sizes to manage and secure devices, as well as monitor system performance, directly from the cloud.

Pulse DMS (Device Management & Security) is a flexible subscription service built on Microsoft System Center Configuration Manager and Azure that enables users to migrate, manage and secure devices directly from the internet. The cloud-based service provides enterprise-class system management capabilities, including secure operating system deployment, patch servicing and self-service software deployment, with zero on-premises infrastructure. It can be used to manage Windows 7 through to 10 and Windows Server 2012 R2 through to 2016. In addition, Pulse OM (Operational Monitoring) is a cloud-hosted monitoring solution that feeds health, performance and availability data from servers and workloads into a single, easy-to-use online platform.

The launch of both products follows PowerON’s acquisition of the intellectual property rights for the cloud-based IT infrastructure monitoring product Tibana from Nubigenus, which it has since significantly redeveloped and enhanced. Chris Lim from PowerON explained, “We’re really pleased with the feedback so far on Pulse from customers, partners and Microsoft alike. For a long time, businesses and organisations have been asking whether it’s possible to have the functionality of System Center delivered as a service, similar to the benefits that have been achieved with products such as Exchange Online. We’re delighted to have been able to make this happen.

“Specifically, with Pulse DMS, our customers evaluating Windows 10 deployments are seeing great value in the cost-benefit savings, especially with the speed at which we can deploy and secure devices directly from the internet, whilst eliminating on-premises infrastructure maintenance. There’s no doubt these products will save a huge variety of businesses, as well as public sector organisations, both time and money.”

Intel Xeon D-2100 processor extends intelligence to the edge

Intel has introduced the new Intel Xeon D-2100 processor, a system-on-chip (SoC) architected to address the needs of edge applications and other data centre or network applications constrained by space and power. The processor extends the record-breaking performance and innovation of the Intel Xeon Scalable platform from the heart of the data centre to the network edge and web tier, where network operators and cloud service providers face the need to continuously grow performance and capacity without increasing power consumption.

“To seize 5G and new cloud and network opportunities, service providers need to optimise their data centre and edge infrastructures to meet the growing demands of bandwidth-hungry end users and their smart and connected devices,” says Sandra Rivera, senior vice president and general manager of the Network Platforms Group at Intel. “The Intel Xeon D-2100 processor allows service providers and enterprises to deliver the maximum amount of compute intelligence at the edge or web tier while expending the least power.”

Intel Xeon D-2100 processors will bring greater performance and hardware-enhanced security to the network edge, in support of the growing number of workloads that demand more compute, analytics and data protection closer to endpoint devices. For example, the new processors will help communications service providers (CoSPs) offer multi-access edge computing (MEC), which allows software applications to tap into local content and real-time information about local-access network conditions, reducing congestion in the mobile core network. This can enable use cases ranging from 5G-connected cars and smart stadiums to retail and medical solutions.


final thought

Consider this

Chris Wellfair, projects director at Secure I.T. Environments, looks at a way to expand your data centre that you may have considered out of reach.


Modular data centres have for some time helped customers lower the cost of their new facility and overcome the challenges associated with such builds. They can provide secure facilities within existing buildings, or allow a data centre to be put in unusual locations where a new build would not be appropriate, either because of planning laws or space. Containerised solutions, often thought of as the ‘toys’ of the big data centre builders such as Google, can also play an important role in much smaller installations. They can help mitigate space, deployment time, build complexity and cost challenges. In an emergency, containerised solutions can also shine as part of a disaster recovery plan if already fitted out, where they can be rapidly deployed to a site as a temporary solution.

If you are trying to overcome a specific DC construction or expansion challenge, and any of the following considerations strike a chord with you, then you should certainly consider a containerised data centre as a possible solution.


• Do I have site issues? In some locations it is simply impossible to house a new data centre. This could be due to footprint, budget or even local planning regulations. Often in these situations, a container can be a solution accepted by all and implemented with a minimum of fuss, or raised eyebrows from the CFO!

• I can’t build on-site. There could be many reasons why you can’t build a data centre on site; for example, it may be a high-security area, or the data centre may only be needed in a disaster recovery situation, such as a flood, so you want to keep it off-site. A containerised solution can be fully designed, fitted out and tested at a separate location. It could even be running in that separate location, mirroring servers at the main site, ready to be dropped in as a ‘clone’ when needed.

• I need it fast. If you need your data centre built quickly, then containerisation can substantially shorten delivery times. Many companies offer them in standard ‘ready to load’ configurations, but you can of course have the interior designed to meet specific requirements, if your partner offers this.

• Mobile data centre. If you need to ship your DC once built, then containerisation is an excellent solution. Firstly, because units are built to standard shipping container external dimensions, use the same interlock systems, and meet or exceed the same rigidity and load standards, shipping them is a lot easier than sending individual components that must be re-assembled at the other end. Secondly, it is possible to get the container insured if it meets the correct international shipping container standards, giving you greater peace of mind.

“In an emergency, containerised solutions can shine as part of a disaster recovery plan.”

Pick the right partner

Incorrectly, some view containerised solutions as a temporary fix with a touch of Heath Robinson about them! To a degree this is understandable; after all, they do look like an upcycled shipping container. But the technology inside is the same as would go into a ‘normal’ data centre build, or at least comes from the same suppliers. If you pick the right partner, your container will be custom designed and built from the frame up, and will carry enviable Lloyd’s Register structural warranties to give you peace of mind. You’ll have extended or upgraded the data centre long before those warranties expire.

Energy efficiency

Some worry about a containerised solution’s ability to maintain effective cooling and achieve strong Power Usage Effectiveness (PUE) ratings; the misconception is that they will fall short. Our own experience has shown that they can deliver the same high standards as modular or traditional data centre builds. As outlined above, this is because they use the same equipment, including monitoring systems. They are well suited to high-density applications where heat can be an issue, precisely because of the way containers are configured. Also, where there are particularly stringent demands, it is not uncommon to have a second container responsible for housing switchgear, batteries, UPS and cooling hardware, though these can also be housed within container ‘rooms’.

It’s an option to consider

Containerised data centres are not a replacement for a modular room or a bespoke data centre build; they are simply another option. As we have seen, in certain situations their advantages may make them perfectly suited to the challenges you are trying to overcome. The important thing is to consider each option on its merits and select the solution that meets both your strategic IT goals and the future plans of your organisation.

Data Centre News is a new, digital news-based title for data centre managers and IT professionals. In this rapidly evolving sector, it’s vital that data centre professionals keep on top of the latest news, trends and solutions; from cooling to cloud computing and security to storage, DCN covers every aspect of the modern data centre. The next issue will include a special feature examining software and applications in the data centre environment, alongside the latest news stories from around the world, high-profile case studies and comment from industry experts. REGISTER NOW to have your free edition delivered straight to your inbox each month, or read the latest edition online now at…

DCN February 2018  