

data centre news


November 2017

Special Feature: Security

Meet Me Room: Mark Gaydos of Nlyte gives us his insight on entering the industry.

Case Study: Prestige Nursing + Care’s IT infrastructure is on the road to recovery thanks to ONI.

Powerful. Distinctive. Unique. Geist locally manufactures reliable, intelligent PDUs that arrive when you need them and are 100% tested before leaving our facility. See how we do it.


in this issue…


4 Welcome Safety first.

6 Industry News Despite it being a very real threat, why do young people remain uninterested in a career in cyber security?

12 Centre of Attention Paul Mills of Six Degrees Group discusses why digital transformation is in a business’s best interest and how you should go about it.

14 Meet Me Room Mark Gaydos of Nlyte Software gives us his insight on entering the industry and how his career path could’ve taken a very different turn.

16 Case Study Prestige Nursing + Care’s IT infrastructure is on the road to recovery thanks to ONI.

30 DC Acceleration Alok Sanghavi of Achronix gives us his insight on how to combat data centre slowdowns.

32 Multi-cloud Strategy Stephen Hampton of Hutchinson Networks highlights what organisations should be taking into consideration when it comes to adapting their infrastructure.

34 GDPR David Trossell of Bridgeworks explains the importance of knowing the value of the data you have, before making GDPR preparations.

36 Cabling Standards Valerie Maguire of Siemon provides us with an overview of ISO/IEC and TIA’s new data centre-specific cabling standards.

39 Projects and Agreements Sega Games implements Tintri enterprise cloud technology to maintain application performance.

44 Company Showcase The new 500kVA UPS from Delta sets a world power density record.

46 Final Thought Detlef Spang of Colt Data Centre Services explains why time is of the essence when it comes to data and what organisations can do to help keep pace.


Special Feature: Security

19 Liviu Arsene of Bitdefender tells us how organisations can enable data centre transformation using security.

22 Tim Mackey of Black Duck Software discusses potential risks and considerations when it comes to data centre operations in a containerised environment.

24 Axel Brill of Hampleton Partners explains where we are most vulnerable to cybercrime and the resulting opportunities for solution providers.

27 Mike Pickup of HESCO discusses data centre growth and the implications for physical security that come with it.

November 2017 | 3


Editor Claire Fletcher

Group Advertisement Manager Kelly Byne – 01634 673163

Studio Manager Ben Bristow – 01634 673163

Designer Jon Appleton

Business Support Administrator Carol Gylby – 01634 673163

Managing Director David Kitchener – 01634 673163

Accounts 01634 673163

Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173

The editor and publishers do not necessarily agree with the views expressed by contributors nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2017. All rights reserved.



Comprehensive data centre security isn’t all that easy to achieve. Many of the same features that make the data centre so vital to business, i.e. masses of data storage, highly connected networks and support for cloud infrastructure, also make businesses incredibly vulnerable. Data storage will always be tirelessly targeted by hackers hoping to swipe personal and company data for profit. But what defines a ‘hacker’ these days isn’t really limited to the lone hooded figure typing furiously in a basement, lit only by the dim glow of the computer screen. Hackers are part of massive operations, and they’re getting quicker and more cunning at an alarming rate. After WannaCry hit the headlines in May, it seemed the ransomware that temporarily crippled the NHS and a smorgasbord of other corporations was vanquished, only for a more advanced strain dubbed ‘Petya’ to hit a mere month later. This just serves to highlight the fact that there is always something stronger, better and faster waiting in the wings.

This is why it’s so critical for data centre operators to ensure that their infrastructure is safe and all security processes are being followed rigorously. Not only must the network be safe from attack, it must be ensured that it is not brought down by failures, which could subsequently leave it open to infiltration. Hackers aren’t the only threat either. Physical security is just as prevalent a concern. What happens if there is a fire, flood or natural disaster? Who has access to your data centre? Can you recover what is lost?

In this month’s issue, we explore a range of topics surrounding security, finding out from the experts where we are most vulnerable and what we can do to mitigate the risks. Should you have any questions or opinions on the topics discussed please write to: claire.fletcher@

Claire Fletcher, editor



industry news

Survey finds young adults’ interest in cybersecurity careers stagnant

An annual survey commissioned by Raytheon Intelligence, Information and Services, Forcepoint and the National Cyber Security Alliance (NCSA) revealed that despite increased awareness of what a career in cybersecurity might look like, millennials remain unprepared for, and uninterested in, pursuing a career in the field.

The fifth annual study, Securing Our Future: Cybersecurity and the Millennial Workforce, captures alarming trends among millennials, including riskier online behaviours today than in 2013, despite the known consequences. Additionally, the survey showed the dominant share (43%) of respondents believe the final outcome of the 2016 US presidential election was influenced by cyber-attacks. These findings echo national sentiments, as a recent string of large-scale data breaches has shaken the public’s confidence in the security of critical information and infrastructure.

“The demand for skilled cyber talent has become a national security issue,” said Dave Wajsgras, president of Raytheon’s Intelligence, Information and Services business. “While great strides have been made to increase millennial awareness in the cybersecurity profession, there is still work to be done.”

For further information visit: CyberWorkforce


New ‘upcycled’ HPC machine at Durham University helps space science

Researchers specialising in astrophysics and cosmology, particle physics and nuclear physics at Durham University and from across the UK can now take advantage of an extended HPC service. The DiRAC data centric HPC system installed at Durham University has been enhanced by the deployment of COSMA6, a machine with 8,000 Intel Sandy Bridge cores and 4.3 petabytes of storage ‘upcycled’ from another system previously located at the Hartree Centre in Daresbury. This additional resource was needed to maintain the international competitiveness of the research community served by DiRAC for the next 12 months.

COSMA6 enables researchers to extend large-scale structure simulations of the evolution of the universe, analyse data from gravitational wave detectors, and simulate the sun and planets in the solar system.

The upcycled cores of COSMA6 began life as an HPC machine at the Hartree Centre, Daresbury in 2012 but, as a consequence of upgrades, the centre no longer had space to house the machine in its data centre. The staff of the Institute for Computational Cosmology (ICC) at Durham University worked with HPC, storage and data analytics integrator OCF, and the server relocation and data centre migration specialist Technimove, to dismantle, transport and rebuild the HPC machine at its new home.

For further information visit:

451 Research predicts accelerated growth for key data centre markets in India

451 Research has published its first study of the growing data centre market in India. It predicts that the country’s colocation and managed hosting market could reach almost $2 billion in revenue by 2019, up from $1.3 billion in 2016. The firm also forecasts continued solid growth for the cloud computing-as-a-service market, with a CAGR of 25% over the next four years as digital transformation takes hold in India and more businesses start outsourcing their IT infrastructure. Comprising IaaS, PaaS and ISaaS, the cloud computing market in India will reach $1.02 billion in revenue by 2021, according to the 451 Research Market Monitor.

Hyperscale cloud and IT services providers looking to reach India’s vast potential market of customers are further driving demand for multi-tenant data centre capacity in the country. Analysts also reveal that 84% of India’s data centre supply is concentrated in the country’s five largest markets: Mumbai, Chennai, New Delhi, Bangalore and Pune. The study, Multi-Tenant Datacenter Markets: Mumbai, New Delhi and Bangalore, finds that almost one-third of all built-out footprint in the country is located in Mumbai, due to the critical role the city plays in Asia-Pacific’s financial services industry, its large population and multiple international subsea cable landings.

For further information visit:


CNet Training approved as an Associate College

CNet Training is proud to announce it has been awarded Associate College status by Anglia Ruskin University (ARU) in Cambridge, UK. CNet, whose UK headquarters is close to ARU, has been delivering the world’s first and only Master’s degree in Data Centre Leadership and Management in partnership with ARU for three years. The Associate College status confirms a long-term working relationship with ARU and provides CNet with responsibility for full delivery of the Master’s degree programme. This official news comes as September 2017 saw the biggest intake onto the degree yet, another first and a further accolade for the global leader in technical education for the digital infrastructure industry.

The Master’s degree qualification will still be awarded by ARU with a full graduation ceremony, and it continues to be recognised with the same academic integrity as all other ARU qualifications. The Associate College status carries with it robust quality and academic standards, which CNet is proud to uphold. ARU undertakes ongoing inspections to ensure the programme continues to reach its quality standards, ensuring learners can be confident that they are receiving the best education possible.

For further information visit:

Panduit opens London customer briefing centre

Panduit EMEA has opened its new UK customer briefing and demonstration centre in London. The new facility, based in the heart of the city, provides 200m2 of data centre, enterprise and industrial automation demonstration, conference and office space. This facility doubles the space available in the previous London site and expands the company’s customer experience capabilities in the EMEA market. The centre contains an operational data centre to illustrate the company’s and its partners’ hardware and software capabilities. Visitors will interact with all aspects of the data centre, including contained environment cabinets, white space security, heat and cooling systems, structured cabling and thermal analysis, control and management systems.

Ralph Lolies, managing director at Panduit, stated, “The new London Customer Briefing Centre expands our investment and capabilities for our UK customers. It provides a showcase for our world-class products that will help define Panduit’s level of resources to a wider audience. The centre is located close to London’s Technology Hub and is within easy reach of key road, rail and airport links.”

This centre joins Panduit’s customer briefing centres in Brussels, Milan, Paris, Singapore, Shanghai, Mexico City, Schwalbach, Germany and Chicago.

For further information visit:


Businesses are worried about managing data in the lead-up to GDPR enforcement

Ninety-three per cent of companies are worried about the storage of their data in the cloud once the General Data Protection Regulation (GDPR) applies, and 91% are concerned about how the new rules will affect cloud services, according to new research from Calligo. The figures come from a survey of 500 IT decision-makers in companies with more than 100 employees and £15 million turnover, examining how businesses are preparing for the new regulation, which comes into force next May.

Despite the severe penalties for infringing the GDPR, only 14% said that worries about meeting their obligations under the wide-ranging new rules for handling and storing data are uppermost in their minds. Security and breaches are the largest area of concern, selected by 41% of respondents. In relation to cloud services, 46% are concerned about the GDPR’s complexity, yet just 15% highlighted privacy. Less than half (49%) of respondents said continuing doubts about the Privacy Shield would affect their use of hyperscale cloud. Only 26% said they choose a cloud provider because they are confident about its GDPR effectiveness, whereas 41% make their choice based on scalability.

In other findings, the research revealed that the average amount respondents said their organisation is willing to spend on preparing for the GDPR is £1.67 million.

For further information visit:


Shift to multi-cloud architectures requires new data management approaches

A new report from BPI titled Gain the Ability for Cloud Agility: Assessing Enterprise Capacity to Embrace a Multi-Cloud Strategy underscores that, despite significant progress, companies early in the transition to the cloud are facing major challenges, including the need for new cloud skills; concerns about data geo-sovereignty, corporate policy compliance and security; and the difficulty of implementing a flexible multi-cloud strategy where data can be moved freely to the cloud of choice. Key findings in the report include:

• A massive shift to hybrid-cloud and multi-cloud architectures is underway at leading corporations worldwide, and IT executives expect the transformation to accelerate.

• A dramatic increase in unstructured data in the cloud is being driven by a new generation of real-time applications and connected devices – and the desire to mine that data to extract greater intelligence and value.

• Enterprises face challenges in cost-efficiently and seamlessly migrating and replicating data across multi-cloud environments, creating potential challenges for big data analytics, application performance, vendor lock-in and regulatory compliance.

• Data replication can be costly and complex, and executives are ready to embrace solutions that enable them to improve their technical abilities and economics in this area.

For further information visit:

Lack of internal skills and technology driving UK businesses to partner with managed security services providers

More UK businesses are looking to work with third-party security specialists, driven by a lack of cybersecurity skills and the need for better technology, according to the 2017 Risk:Value report from NTT Security, which examines attitudes to risk and the value of information security. A global study of 1,350 business decision makers, the Risk:Value report shows that attitudes are changing towards outsourcing a company’s IT security to a third party as cyber threats continue to evolve, stricter compliance measures come into force, and demands on in-house resources are stretched to their limit.

According to the report, while only 6% of organisations in the UK are currently using a third-party provider, 23% plan to use one. Another 29% say they might consider it in the future, although a minority (11%) say they plan to keep their security processes in-house. Of those UK organisations using or planning to use an MSSP, nearly a third (31%) say it is because of a lack of internal skills and 27% want access to better technology. More than a quarter (28%) of respondents say it is more cost-effective to outsource, although the main reasons for using a third party are support with data storage (40%) and data management (35%), as well as assistance with cloud migration projects (15%).

For further information visit:



Pulsant has announced the appointment of Kenny Lowe to the position of lead Azure solution architect. Kenny is a Microsoft Most Valuable Professional (MVP) in Azure Stack technologies, the only one in the UK. He will be working closely with Pulsant’s director of Microsoft Cloud strategy, Dr Stuart Nielsen-Marsh, focusing on the company’s newly launched Azure Stack and hybrid cloud proposition.

DigiPlex has been recognised for an innovative culture that extends beyond its technical developments, being shortlisted for ‘best marketing team’ and ‘best marketing campaign’ at the annual Global Carrier Awards 2017. DigiPlex is the only data centre operator to be shortlisted for these two marketing awards.

Netwise Hosting’s London data centre makes the switch to 100% renewable energy

Netwise Hosting has officially made the switch to true 100% renewable energy at its fast-filling London data centre. This places the organisation right at the head of the field in terms of its commitment to a sustainable future, as one of the first data centre operators in the UK to run entirely on renewable energy. Netwise will join ranks with the likes of Google, who are themselves committed to making the switch across their entire global estate by the end of the year.

This comes as welcome news to everyone in the industry, as current estimates predict global data centre power usage to top 500 terawatt hours by 2020. That’s over 3% of the world’s energy consumption, an unignorable figure that places data centres at the epicentre of responsibility for spearheading and supporting genuine renewable energy schemes.

Netwise Hosting, now two years on from launching its London facility, turned to specialist provider Ecotricity, a leader in the supply of truly green energy, for the delivery of this ground-breaking change. Crucially, rather than relying on ambiguous carbon credits and misleading offset schemes, Ecotricity generates this energy itself using the wind, the sun and, soon, the sea. This is directly in keeping with Netwise Hosting’s own ethos of owning and controlling as much of the service chain as possible.

For further information visit:

The latest addition to the Modulon DPH series of modular UPS systems has achieved the world’s highest per-rack power density of 55.6kVA per 3U module, while offering a range of benefits for mission-critical applications such as large data centres.

Calligo has opened a new office in London and has appointed Ross Worthington as managing director to support and grow its client base across the country. Other key hires include Chris Petrie, GDPR strategy consultant, and Charlene Manning, business development manager.

Equinix has been recognised by the US Environmental Protection Agency (EPA) as a leader in green power. Equinix was selected for recognition in the Excellence in Green Power category of the 2017 Green Power Leadership Awards. The annual awards recognise America’s leading green power users for their commitment and contribution to helping advance the development of the nation’s voluntary green power market.


on the cover

Datacloud UK


Datacloud UK will be taking place in the heart of central London on January 31. This premier summit for IT infrastructure leaders takes place the day before the prestigious Finance and Investment Forum, and both leadership events can be attended with one premier networking ticket. The two events are co-located at the Marriott Hotel Grosvenor Square to allow the entire UK hosting community to meet during this imperative time for investment in data centres, cloud and the enterprises they service.

This timely event showcases the business opportunity open to users and customers internationally in a one-stop shop, offers unique networking and provides exceptional insight into the players, customers and influencers in this energised and enterprising market. For the first time, the event brings a unique focus on the UK, the largest data centre market in Europe. It will showcase facilities and services, power availability and growth forecasts, and the coming impact of edge.

The long-running Finance and Investment Forum brings together stakeholders and financial players to assess the market outlook and identify key global opportunities, types of finance available, mergers and acquisitions, REITs and business models.

Datacloud UK 2018 – January 31

Datacloud uniquely brings together enterprise customers with the leadership of data centre and cloud players, service providers, hosting, colocation and managed services companies. Attended by senior executives in colocation, cloud, end user organisations, outsourcers and investors, the event brings for the first time a unique focus on the UK, the largest data centre market in Europe. As competitor markets vie for post-Brexit opportunities, the event will showcase facilities and services, power availability and growth forecasts.

Early Speaker Announcements:

Lara Lewington, broadcaster and presenter, BBC Click


Philip Aldrick, economics editor, The Times

Alfonso Aranda Arias, head of global data centre operations, IBM

Pablo Jejcic, head of cloud and infrastructure centre of excellence, Vodafone

Event highlights include outstanding content, extensive networking and business deal-making opportunities.

Finance and Investment Forum 2018 – February 1

The first forum took place at the Cass Business School in London in 2007, and it has since become recognised as the only annual international meeting point for investors, operators, professional intermediaries and legal counsel focused on the data centre and cloud sectors. The objective of the one-day forum is to bring together stakeholders and players in finance and investment in cloud services, including leading investors, carrier-neutral and carrier-owned operational companies, government and regional investment authorities, property interests, and law firms.

Key questions for Datacloud UK and FIF 2018:

• What are the opportunities for colocation in the UK?
• How will UK and global markets sustain competitive service and energy offerings?
• With migration to third party facilities set to rise, what capacity and connectivity is available?
• Where will the next mega data centre deal come from?
• Who will be the winners and losers with the advent of Brexit and GDPR?
• What are the opportunities and the impact of edge?

For further information visit:

centre of attention

Transformers

Paul Mills, managing director at Six Degrees Group, discusses why digital transformation is in a business’s best interest and how you should go about it.


Do the phrases ‘our company is undergoing a digital transformation’ or ‘our business has just completed its digital transformation journey’ sound familiar? They should; the term ‘digital transformation’, or ‘DX’, is one of the newest tech buzzwords often bandied about in boardrooms and IT hubs. But what exactly does it mean, why is it important, and how can it be applied?


DX defined First, let’s consider the question – what drives transformation? From a business point of view three key factors are responsible: Changing consumer demand, technology and competition, which together effect changes in a market. An evolving business can spot these changes and opportunities and transform accordingly to stay relevant and successful.

Second, what does transformation mean in the digital sense? If one thinks of ‘digital’ as any technology that connects people and machines with each other or with information, then ‘digital transformation’ involves a business reorganising its systems and infrastructure to avoid a potential tipping point caused by outdated digital technologies and downward market influences.


Image: Andrew Basterfield CC BY-SA 2.0

A matter of survival

Why should an organisation ‘digitally transform’? Digital transformation helps an organisation to keep up with emerging and future customer demands. Embracing business practices that are focused on integrating current and emerging technologies such as cloud, mobile, social media, artificial intelligence and data storage can enable a business to:

• Access software, new functionalities and updates faster
• Better focus its talent resources and research and development investments on solutions that meet its unique needs
• Facilitate and support work that can happen anywhere, anytime
• Leverage insights to drive more accurate sales, marketing and product development decisions

A great example of digital transformation is Netflix, which started as a mail-based DVD rental business in 1997. Today, it’s a highly successful video streaming service that delivers customised content offerings based on customers’ preferences.

Without digital transformation, legacy systems constrain agile development, preventing a business from achieving faster delivery of services and efficient cost management. Outdated architecture often struggles to keep up with the volume and type of data being produced in the age of modern business. In addition, there are serious concerns regarding the management and security of massive amounts of data, particularly as businesses become more dependent on it. Without transformation, IT teams are always behind the curve. They can’t integrate new ideas, services and tools on demand; they are at the mercy of their legacy application stack and lengthy release cycles.

It was never going to be easy

On paper the advantages of digital transformation are great. However, putting the theory into practice can present far more challenges, including:

• Legacy technologies that are deeply ingrained in business processes, and change is always hard
• A shortage of the right leadership teams to manage this kind of initiative
• A shortage of industry expertise and skills needed to execute projects like these
• Budget constraints that limit the ability to invest in replacement systems
• Organisational cultural constraints that make adoption of new technology harder

So how does one get there? What can IT teams do to ensure their businesses’ digital transformation journeys are as smooth and efficient as possible?

Join the DX voyage

The journey begins with understanding where the business currently is and what it needs: a discussion of the company’s core values, what services are currently offered by IT, and what needs to be achieved. Next, execute a transformation strategy that supports the company goals and the people who work there. Enlist the expertise of an external service provider to help guide the process and, if needed, outsource some of the functions to a third party. Here are a few suggestions.

Simplify existing IT: IT is complex and every data centre is unique to the business it serves. Complexity often leads to mistakes, longer processes and increased costs across the board.

Keep IT flexible: The modern data centre is underpinned by the virtualisation of hardware, which provides faster and more flexible IT resources. Ensuring unobstructed compatibility between storage systems and cloud platforms is vital, providing cost-saving freedom without complication or impediment.

Leverage multiple clouds: Organisations should opt for solutions and tools that make provision for the use of private, public or hybrid cloud, where data, workloads and applications can be moved from one platform to another with a simple mouse click.

Maintain IT resilience: The explosion of big data is key to digital transformation. As sets of data move into the petabyte arena, global organisations that need to run 24/7 should utilise continuous replication to ensure data protection. This also enables data mobility, so that companies can fully embrace a cloud strategy.

Create a transformational culture: At a fundamental level, to make any digital transformation a success, organisations need to consider how their current operation might slow down a more agile approach to ongoing improvement. This new breed of cloud-based business works in a far more iterative way, which means almost constant transformation. This change in culture is something that, if not considered, can halt a DX strategy before it has even begun.

With the right preparation and tools in place, the digital transformation journey can be a positive experience with impressive business results.

meet me room

Mark Gaydos – Nlyte

Mark Gaydos, chief marketing officer at Nlyte Software, has over 20 years of enterprise software marketing experience. He has held his current position for over three and a half years and is responsible for marketing and global inside sales. Prior to this appointment he served as senior vice president for marketing at Engine Yard, having also held a variety of marketing roles at Oracle, SAP and Comergent.

What is your main motivation in the work that you do?
I’m passionate about technology and enjoy working with people, as well as helping customers get value from our solutions, which in turn helps them add value.

What are the biggest changes you have seen in the data centre industry, in a nutshell?
The biggest changes I have seen would have to be the massive increase in choices for computing infrastructure. From cloud to colocation to owned facilities, it is forcing everyone in the infrastructure space to improve efficiency and transparency just to stay relevant.


Mark thinks an internship is the best route to take for young people looking to get into the industry

What, in your opinion, is the most important aspect of a successful data centre?
It boils down to planning and collaboration. So many operations, such as apps, DevOps, ITSM, security and finance, depend on infrastructure. For all these to work in harmony, things have to be coordinated and made transparent.

Which major issues do you see dominating the data centre industry over the next 12 months?
The increased use of a hybrid model, and understanding which skills an organisation needs to keep in house and which it can outsource, at cost and risk, of course.

Can you tell us about any projects you are currently working on?
Helping support the increased demand in Europe, the Middle East and India for Nlyte data centre infrastructure management (DCIM) solutions such as Discovery and Nlyte 9.0.

How would you encourage a school leaver to get involved in your industry? What are their options?
Reach out to infrastructure directors and offer to perform an internship. Even if they don’t have a position right away, they may be able to create one if they have someone eager to work. Failing that, if they don’t have something in the short term, they will keep your details on file and will look you up when they do.

meet me room

In addition to earning a living, how else has your career created value in your life?
Some of my best friendships have come from co-workers and customers I have met on this awesome journey. As they say, the tapestry of life is made up of the people who are in it - Zuzu’s Petals.

What’s the toughest lesson that you have learned in your career?
Even if you make an airtight, logical argument to proceed with an action, it doesn’t mean the powers that be will take that path. Deal with it, move on and keep creating value.

What gives you the greatest sense of achievement?
Helping others. Whether it’s employees, customers, past employees or strangers, helping others is very gratifying. After all,

gratitude is the water to the tree of life, and we should all be helping to keep the garden beautiful.

Can you remember what job you wanted when you were a child?
A fighter pilot or a businessman. My eyes weren’t good enough to take to the skies, so here I am, Mr Businessman. I’m sure I would have looked good in a flight suit and aviators, though!

If you could possess one superhuman power, what would it be and why?
I wouldn’t go for the classics, invisibility or super strength, but the power to heal others. I’d know I was doing my best every day to make a change, no matter what shape that took. We can all take a little more time to help each other out. Not all heroes wear capes.

Mark’s chosen vocation had he not taken a business path.

What is the best piece of advice you have ever been given?
Pay yourself first. Second, the toughest thing in life is telling people things they don’t want to hear.

What’s your personal pet peeve?
People asking experts questions, but in a way that is not actually a question but a statement, trying to show that they are the expert. That really gets on my nerves. Let the experts be the experts; you’re not impressing anyone.

ULTIMATE FLEXIBILITY FOR CRITICAL POWER NEEDS
Protectplus M400, the latest addition to the AEG Power Solutions range of modular uninterruptible power supplies, is based around a 10kVA power module and can be scaled up to 40kVA. Up to four frames can be operated in parallel, achieving a power capacity of 160kVA. Protectplus M400 has one of the lowest Total Cost of Ownership (TCO) factors in its class. You can trust AEG PS to protect your mission critical business, IT and data center systems. Our UPS systems are world class and backed by engineering experience developed over the last 100 years. +44 1992 719200

DCW Frankfurt, 28 – 29 November, Booth 570 – see the newest additions to our range


case study

Road To Recovery

Prestige Nursing + Care has been providing home-based nursing and care services in the UK for over 70 years. However, the company’s IT infrastructure was severely lacking, resulting in reduced connectivity and little resilience. With 2,500 staff and 45 offices across the country, this was no longer acceptable, so Prestige called upon ONI to design a solution that would cure all.

Challenges
When Prestige first engaged ONI, the agency was facing multiple IT-related business challenges. A growing organisation with a relatively small IT team, Prestige had previously decided to outsource key IT infrastructure services relating to voice, data centre and wide area networking (WAN) to a variety of

vendors. Various challenges relating to these services had arisen. The consumer-grade ADSL on which the company’s WAN relied did not provide quality of service (QoS) management. As a result, call quality delivered by Prestige’s Avaya-based, cloud hosted VoIP platform was variable and could not be guaranteed.

Furthermore, the VoIP solution could not support video conferencing, which the company needed for remote staff interviews. Making heavy use of out-of-hours call forwarding, Prestige needed to be able to make changes to this service quickly and easily, at any time of day or night. The existing telephony solution made remote


administration of such changes difficult, and there was no suitable, responsive managed service alternative available. Slow response times and limitations in the existing data centre vendor’s service resulted in data centre outages. As a result, Prestige was forced to rely on firewalls beyond their end-of-support dates, which were subject to security vulnerabilities, potentially putting the entire business at risk.

Solution
With the first of Prestige’s outsourced contracts due for renewal, ONI recommended an MPLS WAN to deliver the higher bandwidth and QoS required. The MPLS WAN also provided flexibility over the contract term, proactively providing bandwidth increases at each location as local exchanges were upgraded.

During discussions following up the WAN project, it became clear that Prestige needed help with its managed firewall. The agency’s existing data centre provider had proved unable to keep the firewall updated from a support and security vulnerability perspective. ONI designed a resilient, managed firewall as a service solution based on Cisco adaptive security virtual appliances, running out of ONI’s Tier 3+ data centre facility in Luton.

Through its delivery of the managed firewall, ONI was able to demonstrate the capabilities of its full range of Nimbus cloud services solutions. This prompted Prestige to ask ONI to design, 10 months ahead of the termination of its existing data centre contract, a highly resilient cloud based infrastructure as a service (IaaS) solution, complete with cloud based disaster recovery and backup services. Working closely with Prestige in a highly consultative manner, the ONI team augmented the company’s limited IT

team resources with its own considerable industry experience and expertise. ONI offered Prestige a series of solution options, each with clearly explained features and benefits, allowing the agency to make a fully informed business decision on the best way forward.

Care in the cloud
With Prestige’s existing cloud services provider withholding access to key virtual disks, migration to the new IaaS cloud service platform could have been problematic. ONI’s experienced cloud services team addressed this issue with a bespoke application-level migration plan to mitigate risk in the data centre move. Prestige’s ONI cloud services went live one month ahead of schedule.

During this critical period in Prestige’s development as a business, the company’s ICT infrastructure was essential to the achievement of the organisation’s strategic goals. It had to not only work reliably, but also be simple to use and manage. Telephony infrastructure in particular was key – a vital communications channel between branches, franchises, thousands of nurses and many more thousands of customers.

With Prestige’s outsourced cloud based voice solution contract due to end, there was a valuable opportunity to leverage the new WAN and cloud based data centre services to deliver cloud based unified communications (UC) to all of the agency’s 45 locations. Having a single provider designing, managing, supporting and optimising its key, interdependent IT systems would allow Prestige to deliver a much improved service to its internal and external customers. A series of UC workshops for Prestige’s board level decision makers allowed ONI to assemble a comprehensive understanding of Prestige’s needs. Using Cisco’s

“Slow response times and limitations in the existing data centre vendor’s service resulted in data centre outages.”

Call Manager UC, ONI designed a resilient, cloud based UC service solution that not only fully met Prestige’s basic communications requirements, but also added new functionality, enabling the agency to communicate internally and externally in new, more efficient and more cost effective ways. The new UC service delivers:

• Voice and video calling
• Voicemail
• Instant messaging
• Presence information
• Softphone clients for iOS and Android mobile devices
• A fully managed service, including administration of moves, adds and changes, and 24x7 technical support

“ONI’s unique and interactive approach meant they quickly grasped a thorough understanding of our needs and were then able to advise us on various solutions, rather than offering one option. This ensured we knew we were getting the best value from the solution we selected,” said Wayne Giles, IT manager at Prestige.

Benefits
• Reduced technology duplication, complexity and cost, saving time, money and effort
• Consistent user experience, independent of location
• Improved resilience and service availability
• Support for strategic growth initiatives such as video
• Internal IT resources freed up for strategic, value-add activities


• Tier 3 Data Centre

• 24x7 Support

• Carrier Neutral

• Security

99.999% Uptime

100% Uptime

Colocation and Cloud services. 100% uptime guaranteed and up to two data centre services free for 12 months.

Find out more 01582 211 530


Peak Performance Liviu Arsene, senior e-threat analyst at Bitdefender, tells us how organisations can enable data centre transformation using security.


With Gartner saying ‘the future of the data centre is software defined’, the transformation may not apply equally to all organisations, as each has different needs. While the benefits of having an entire infrastructure delivered as-a-service and fully automated are obvious, legacy security solutions sometimes hinder adoption.

The entire reason for building a software-defined data centre is to maximise hardware utilisation, which cuts costs and boosts efficiency and scalability. Performance is at the core of these digital infrastructures, and integrating security solutions or technologies should be seamless, and should not inflict performance penalties.

Legacy security solutions and technologies were built before the era of virtualisation and digitalisation, meaning they were designed mainly to satisfy the needs of physical endpoints. Since virtualisation is all about optimising resource consumption and boosting performance, traditional security solutions may not be an enabler


for SDDC, but instead become a detractor.

Security designed for scale and manageability
A primary requirement of a security solution designed for SDDCs is visibility across the entire infrastructure, regardless of whether physical or virtual workloads are involved, or whether they’re deployed locally or in a cloud. Single-pane-of-glass visibility into the overall security posture of the organisation should be mandatory for a layered next-gen security platform, as it will help security admins manage, enforce and deploy new security policies and even security agents based on a company’s infrastructure and needs.

Considering that endpoints and data centres have different security requirements, consistent security controls in heterogeneous and hybrid environments should have minimal overhead on operational teams, as well as minimal performance impact on the overall data centre infrastructure.

One of the biggest problems of traditional security deployed in virtual workloads is the inherent AV storm caused by all endpoints individually fetching security updates. This causes serious bandwidth throughput problems and inflicts performance penalties on individual virtual workloads. A layered next-gen security platform should offer centralised scanning that offloads anti-malware processes from each protected virtual machine to virtual security appliances, enabling VM density improvements while reducing energy consumption. Traditional security solutions also lack integration with

“The entire reason for building a software-defined data centre is to maximise hardware utilisation, which cuts costs and boosts efficiency and scalability.”

technologies such as Active Directory, hindering endpoint management by IT personnel. Since manageability is at the core of SDDCs, the security solution should allow for AD integration, and be both virtualisation and operating system agnostic. This would allow endpoint security agents to be fully compatible with any virtualisation vendor – VMware, Citrix, Microsoft – and with any operating system deployed within those virtual workloads.
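The centralised scanning model described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class, names and the stand-in detection rule are all invented for the example): guest workloads submit content hashes to one shared scan service, and cached verdicts mean identical files across many VMs are only ever scanned once, avoiding the AV storm.

```python
import hashlib

class CentralScanService:
    """Toy sketch of a centralised scanning appliance: each guest VM
    submits file content for scanning instead of running a local engine,
    and verdicts are cached by content hash so identical files across
    VMs are scanned only once."""

    def __init__(self, scan_engine):
        self._scan_engine = scan_engine  # callable: bytes -> bool (True = clean)
        self._verdict_cache = {}         # sha256 hex digest -> cached verdict

    def check(self, file_bytes: bytes) -> bool:
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest not in self._verdict_cache:           # cache miss: scan once
            self._verdict_cache[digest] = self._scan_engine(file_bytes)
        return self._verdict_cache[digest]              # cache hit: no rescan

# Stand-in engine that flags a known-bad byte pattern (illustrative only).
service = CentralScanService(lambda data: b"BADSIG" not in data)
assert service.check(b"ordinary document") is True
assert service.check(b"payload with BADSIG marker") is False
assert service.check(b"ordinary document") is True  # served from cache
```

In a real deployment the cache and engine would live in a dedicated security appliance, but the pattern is the same: deduplicate work by content identity rather than per endpoint.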

Next gen EPP for combating advanced threats Since advanced and sophisticated threats usually involve bypassing existing security solutions, a next gen endpoint protection platform (EPP) should feature a layered security stack that can offer hardening and control, threat detection during pre-execution and on-execution, actionable events, as well as visibility and management from a single management console. This also means the security agent protecting virtual workloads should be as lightweight as possible and rely on signatureless security layers augmented by machine learning algorithms. Since advanced persistent threats (APTs) are all about exploiting known or unknown vulnerabilities within applications or operating systems, proactive exploit prevention technologies as well as continuous monitoring of processes and system events should be part of the basic capabilities of the security agent. Advanced exploit prevention technologies should be integrated not just within the security agents, but also below the operating system, by working together with

the hypervisor to identify any memory manipulation techniques that could be indicative of advanced threats leveraging zero-day vulnerabilities.

Reducing operational complexity in software-defined data centres should also be a priority for any security solution, as businesses adopting SDDCs are driven by business agility enabled by faster provisioning, and by reduced operational costs as staff are freed up from maintenance operations. Consequently, an SDDC-friendly security solution needs out-of-the-box integration with key SDDC technologies, while allowing for maximum VM density and optimal application performance.

Rethink security
Rethinking security to address many of the pain points that SDDCs have in terms of security, management and performance will enable software-defined data centres to perform at their peak and stay secure. The overall security posture of an SDDC-adopting organisation should not be hindered by legacy security technologies designed for traditional data centres; it should instead adapt to the needs of digitalisation. The endpoint security continuum needs to include advanced prevention tools, additional security controls, and advanced detection and response tools that integrate with a software-defined infrastructure and are compatible out of the box with the tools and requirements of the businesses adopting them. Security for these environments should offer a more responsive means of exploiting these IT infrastructures, while at the same time streamlining IT operations.

Reduce Operational Costs, Improve Capacity Utilization, and Lower Power Usage Effectiveness (PUE)
Driven by explosive data processing growth, data centre managers face multiple, competing demands: reducing operational costs, improving energy efficiency, and optimizing available capacity, while sustaining a low total cost of ownership. To meet these demands while minimizing the risk to service levels, the available data centre space is often underutilized while being overprovisioned with excess power and cooling capacity, regardless of actual IT equipment and space utilization. Today, a typical data centre consumes about 3-5kW per cabinet due to power and cooling concerns, while the available cabinet space could accommodate 15kW or more per cabinet if managed effectively.

As energy and construction costs continue to rise, over-provisioning and under-utilization are no longer sustainable. Energy costs related to cooling account for approximately 37% of the overall data centre power consumption and are one of the fastest rising data centre operational costs. RWL Advanced Solutions and Panduit offer a best of breed data centre infrastructure which includes energy efficient industry leading cabinet and connectivity solutions.
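PUE itself is a simple ratio: total facility power divided by the power delivered to IT equipment. The sketch below computes it with hypothetical figures chosen to roughly match the ~37% cooling share quoted above; none of these numbers come from a specific facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power.
    A PUE of 1.0 would mean every watt drawn reaches IT equipment."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 100 kW of IT load, plus cooling and power
# distribution overheads (figures invented for illustration).
it_load = 100.0
cooling = 70.0               # ~37% of total, per the share cited above
distribution_losses = 19.0
total = it_load + cooling + distribution_losses

print(round(pue(total, it_load), 2))  # 1.89
```

Lowering the cooling term is what drives the fastest PUE gains, which is why cooling-focused cabinet design features so prominently in the pitch above.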

Visit data-center.html for further information

RWL Advanced Solutions UK 9 Devonshire Square, London, EC2M 4YF. +44 (0)207 084 6219


Proactive Or Reactive? Tim Mackey, technology evangelist at Black Duck Software, discusses potential risks and considerations when it comes to data centre operations in a containerised environment.


As open source technologies gain prominence within engineering teams, it’s important to examine how operations security impacts data centre operations. While data centres face governance realities limiting how applications are deployed and managed, they must also approach security in areas ranging from perimeter defences through to internal audit controls. The reality is that teams managing data centres are largely reactionary to external and internal threats. Layer in regulations such as GDPR and the need to react to constant cybersecurity threats, and you have an increasingly complex model struggling to keep up with the complex nature of breaches while adding to the responsibilities of DevOps teams.


Open source platforms Many data centre operators are familiar with open source platforms like Linux or virtualisation solutions like KVM or Xen. These tools allow operators to standardise and increase system deployment velocity while insulating workloads from underlying hardware issues. As open source development practices have become the norm, open source components are deployed in unexpected places. Network devices, firewalls, proxies, intrusion management systems, and even software defined networks (SDNs) are all powered by open source technologies. Developers of new open source projects often create the project to solve their unique problem. For them, a ‘good enough’ solution simply addresses their immediate

needs. They post their source code to a public location, and that code has an associated open source licence. Like any other software licence, an open source licence conveys certain rights and obligations. A ‘permissive’ licence means there is no obligation to share any derivative work, while a ‘reciprocal’ licence requires derivative works be shared. Companies adopting code based on permissive licences can innovate without sharing back to the community, but companies adopting code with a reciprocal licence must share their changes with the community. To understand the impact open source licences have on data centre operation, you need to understand how organisations adopt technologies. A decade ago most software was proprietary, and procurement involved buying


and deploying software solutions for a licence fee. Although commercial vendors of open source software such as Red Hat and SUSE exist, there are far more open source projects than commercial vendors. Adopting an open source software solution may be as simple as downloading a package that fulfils a majority of your requirements and deploying it. If the solution doesn’t meet 100% of the requirements, access to the source code allows developers to create the missing components. However, there’s no requirement for the core project to accept enhancements submitted via a reciprocal licence, thereby creating a fork in the project which is owned by the individual submitting the enhancements. (Permissive licences may create an internal fork.) Regardless, any fork represents a parallel development stream.

Enterprise software incorporating open source components often contains intentional and unintentional forks of components. I described intentional forks above; unintentional forks occur when a component is embedded in a system while development continues for the original component. This creates a situation where the embedded version may diverge from how the original component evolves. The consumer then bears the responsibility to maintain any software components they have forked. This responsibility is key to managing open source-based systems in data centre operations.

Whether a vendor supplies software, firmware or appliances, it’s critical to understand the composition of the deliverable to assess ongoing risk of ownership. There is potential risk of regulatory violation in case of a security event.
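The permissive-versus-reciprocal distinction described earlier can be captured as a simple lookup. This is a hypothetical sketch, not a compliance tool; the classifications shown for these well-known licences are the commonly understood ones, but real obligations depend on how the code is distributed and should be checked against the licence text.

```python
# Hypothetical classification of a few well-known licences by obligation type.
LICENCE_TYPE = {
    "MIT": "permissive",
    "BSD-3-Clause": "permissive",
    "Apache-2.0": "permissive",
    "GPL-2.0": "reciprocal",
    "GPL-3.0": "reciprocal",
    "MPL-2.0": "reciprocal",   # file-level reciprocity
}

def must_share_changes(licence: str) -> bool:
    """Reciprocal licences oblige distributors to share derivative works;
    permissive licences do not."""
    return LICENCE_TYPE.get(licence) == "reciprocal"

print(must_share_changes("MIT"))      # False
print(must_share_changes("GPL-3.0"))  # True
```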

A bill of materials describing the open source components in the deliverable helps data centre operators reduce their risks, from both vulnerabilities and licences. Where reciprocal licences are involved, this disclosure is in the form of source code. Since permissive licences don’t mandate source code access for the end consumer, alternative disclosures appropriate to the open source licences are necessary. Regardless of the licence, these components are software assets subject to the ‘patching’ and ‘ongoing security monitoring’ provisions of relevant governance regulations.
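In practice, the value of a bill of materials is that it can be checked mechanically against a vulnerability feed. The sketch below is purely illustrative: the component names, versions and the feed entries are invented, and a real check would match version ranges rather than exact versions.

```python
# Hypothetical bill of materials: component name -> deployed version.
sbom = {
    "openssl": "1.0.2k",
    "busybox": "1.27.1",
    "zlib": "1.2.11",
}

# Hypothetical vulnerability feed: (component, affected version) pairs.
known_vulnerable = {
    ("openssl", "1.0.2k"),
    ("struts", "2.3.31"),
}

def affected_components(sbom, feed):
    """Return the SBOM entries that match a known vulnerability."""
    return sorted(name for name, version in sbom.items()
                  if (name, version) in feed)

print(affected_components(sbom, known_vulnerable))  # ['openssl']
```

Without the SBOM, answering "are we running the vulnerable component anywhere?" becomes a manual audit; with it, the question is a set intersection.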

Containerised systems
A containerised system is essentially a stack of open source software. This stack comprises an operating system (typically Linux), a container runtime (typically from Docker), a container orchestration system (such as Kubernetes, Red Hat OpenShift or Apache Mesos), and container images. These container images are built from ‘base images’ that often contain Linux operating system components, coupled with application code to form a containerised application. Understanding the risk associated with a given container image requires understanding the risk present in each layer of the stack, and the composition of each layer and of the container itself.

DevOps principles of continuous integration and delivery allow containerised applications to be deployed at high speed. Therefore, time to market and functional agility must incorporate security assessments at all phases of a delivery pipeline. For example, you must assess security risk during the integration phase; if the application cannot meet governance regulations, its delivery should fail.
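A fail-the-pipeline gate of the kind just described can be sketched as a small policy function. Everything here is hypothetical: the findings list stands in for an image scanner's report, and the blocking-severity policy is an assumed governance rule, not one prescribed by the article.

```python
# Hypothetical severity report for one container image, as might be
# produced by an image scanner during the integration phase.
findings = [
    {"cve": "CVE-2017-0001", "severity": "low"},
    {"cve": "CVE-2017-0002", "severity": "critical"},
]

# Assumed governance policy: any high or critical finding blocks delivery.
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(findings) -> bool:
    """Return True if the image may be delivered, False to fail the pipeline."""
    return not any(f["severity"] in BLOCKING_SEVERITIES for f in findings)

if not gate(findings):
    print("delivery blocked: image violates governance policy")
```

In a CI system this check would run after the image build, with a non-zero exit code failing the job so a non-compliant image never reaches the registry.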

“Time to market and functional agility must incorporate security assessments at all phases of a delivery pipeline.”

Similarly, data centre teams need automated security audits performed prior to deployment, to validate that the application still meets regulatory requirements. Since properly designed containerised applications deploy multiple instances from an immutable container image, you can satisfy regulatory requirements for continuous security monitoring through attestation of the container image. This ensures monitoring requirements have a minimal impact on overall system performance. Combined with the knowledge present within orchestration systems, this model permits the system to automatically perform an impact assessment for any new security disclosure. Such an impact assessment is a critical component of any remediation plan. Absent a full understanding of the composition of deployed applications and the dependencies within the deployment stack, an impact assessment becomes prone to error.

Viable solutions to this problem have specific attributes:
• Automatically scan all container images, regardless of source and deployment model
• Provide metadata to the orchestration solution so deployment policies can be enforced
• Identify security changes as they are disclosed in deployable images
• Map security disclosures to proof-of-concept exploits and provide actionable remediation guidance
• Understand open source development, particularly regarding code forks

Data centre teams struggle to keep abreast of the hundreds of new open source vulnerability disclosures per week. Without a comprehensive understanding of their open source code composition and an impact assessment, data centre teams will have difficulty responding quickly to any security incident.
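The attestation idea above exploits image immutability: if a running image's content digest still matches the digest attested at deployment time, the earlier scan result still applies, so no per-instance rescanning is needed. A minimal sketch, using sha256 content-addressing in the style of container registries (the image bytes here are placeholders):

```python
import hashlib

def image_digest(image_bytes: bytes) -> str:
    """Content-address an immutable image, as registries do with sha256."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

# Record the digest attested at deployment time...
attested = image_digest(b"layer-data-v1")

# ...then continuous monitoring only needs to re-check the digest,
# rather than rescanning every running instance of the image.
def still_attested(running_image: bytes, attested_digest: str) -> bool:
    return image_digest(running_image) == attested_digest

assert still_attested(b"layer-data-v1", attested)
assert not still_attested(b"layer-data-v2", attested)  # drifted image fails
```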


Stronger, Better, Faster It is no secret that cybersecurity is now a billion-dollar business. Hackers are adapting to technological advances faster than ever before. Axel Brill, director at Hampleton Partners, offers his insight on where we are most at risk and the resulting opportunities for solution providers.


The headlines speak for themselves: ‘WannaCry: 200,000 victims in at least 150 countries’; ‘Hackers get 99 million Alibaba usernames and passwords’; ‘70 million hacked Dropbox accounts’; ‘Attack on 500 million Yahoo user accounts’. The list of criminal attacks on websites and web shops is vast and growing.


This is shown not only by media reports, but also by current industry figures. As Deutsche Telekom’s Security Report 2016 shows, the number of medium-sized and large companies that are targeted by cyberattacks at least once a week rose from 25% in 2013 to 41%. At the same time, digital crimes are taking on new dimensions. A hacker attack on the Equifax credit agency, where

cyber criminals captured data from 143 million customers, was only revealed at the beginning of September. The attackers gained access not only to names, birth dates and addresses, but in some cases also to social security numbers, credit card numbers and driver’s licence data. The combination of this data can be used, for example, to take out mobile phone contracts or apply for loans.


In April of this year, for example, Google removed 32 infected apps from its Play Store. These apps had already been downloaded millions of times and had been leaking users’ data. Another major challenge in the area of cyber security is the increasing spread of the Internet of Things. This was demonstrated as early as 2015 by the case of a Jeep Cherokee in the USA, which was hacked at full speed. The attackers were able to influence all vital functions of the car, from the windscreen washers to the transmission, brakes and steering. If such attacks were carried out in bad faith on aircraft, nuclear power stations, hospitals or factories, catastrophes would be inevitable.

Mobile devices move into the focus of hacker attention
According to the Telekom study, the greatest dangers in cyberspace are currently still computer viruses (72%), data fraud on the Internet (70%) and misuse of personal data by other users on social networks (64%). But the cyber criminals are agile and adapt to technological innovations at breathtaking speed. While in the past hackers targeted desktop PCs in particular, the triumphant advance of mobile devices has also led to attacks on smartphones and tablets, which are primarily Android-based and often contain sensitive personal or business data.

Companies massively upgrade defence measures
However, not only the cyber gangsters but also the companies are learning. “Five years ago, concepts such as protection of critical infrastructures, APTs or ransomware were still foreign words at top management levels, but this has now changed significantly,” according to Holger Suhl, head of security specialist Kaspersky in Germany. As a result, companies are investing more money to protect themselves against potential threats from cyberspace.

The market for solution providers is booming
The solution providers, meanwhile, are benefiting from the increasing cyberattacks. According to research from 451 Research, Hampleton and cybersecurityventures.com, global revenues in this segment are expected to almost double, from $138 billion this year to $232 billion in 2022.

“Cyber criminals are agile and adapt to technological innovations at breathtaking speed.”

The gigantic growth is not only calling forth more and more start-ups with new strategies and technologies for fighting crime, but also increasingly attracting investors who want to secure a piece of the pie at an early stage. Established security providers, on the other hand, are either buying up start-ups with new technologies or taking the opportunity to sell their own companies.

Current figures from the M&A sector show that 70 to 80 transactions in the cyber security segment have been concluded every six months since the first half of 2015. Most acquisitions to date have been made by j2 Global, followed by Cisco, Convergint Technologies, Symantec, Accenture and Proofpoint. The acquisition of Blue Coat Systems by Bain Capital is an example of how much investment in this area can pay off for VCs. In 2015, the VC paid an impressive $2.4 billion for the security specialist, 3.7 times its annual turnover. Just one year later, Bain sold the company for $4.65 billion, 7.8 times its annual sales.

Currently, providers of anti-malware software are exciting takeover candidates for investors. Increasing investments by large corporations and governments are fuelling investor appetite: in the first half of 2017 alone, 17 transactions were carried out in this area. There is also a high level of interest in acquisitions in mobile security. Headlines about spectacular billion-dollar takeovers are therefore unlikely to stop any time soon. On the other hand, it is to be hoped that reports of spectacular hacker attacks will sooner or later be eroded by the growing sensitivity of companies and ever better anti-fraud measures.


Preconfigured Cabinets Oli Barrington, managing director UK & Ireland at R&M, talks about converting logical designs into physical solutions, without the complexity.


Many IT departments see layer one and layer zero in the data centre as an amorphous collection of widgets – a necessary (and costly) evil required to make systems work. Organisations are generally loath to spend money and ‘mind space’ on this. That’s why they often only do so when they run into difficulties. In reality, this amorphous collection of widgets is – or should be – made up of multiple highly engineered systems that power, cool, connect and protect business processes. What’s more, these systems offer businesses real opportunities to reduce operating costs, build in agility and streamline MACs throughout the facility’s lifecycle. Ultimately, they can bring real and tangible benefits.


However, it’s easy to see why organisations postpone upgrading layer one and layer zero services for as long as possible. Disruption, risk, cost and the need for multiple skill sets are all deterrents. For many IT managers, it’s that one task that lurks on the ‘to do’ list – a can of worms that sooner or later will have to be opened. R&M has recognised this, and is now able to provide the market with a solution.

Primarily aimed at enterprise, on-premise data centres, R&M’s preconfigured cabinets have connectivity built into the cabinet’s structure, offering 188 10Gb or 96 40Gb connections. These don’t require any valuable rack space and, importantly, using MPO connectivity, all internal cabling is configured in the factory. Upon positioning in the data centre, inter-cabinet MPO trunk cables need only be plugged into the cabinet’s connectivity ingress cassette to link all 10 and 40Gb connections to the core network. An optional rear door heat exchanger that removes up to 45kW of heat is available, and overhead pathway containment for both copper and fibre fixes directly to dedicated supports on the cabinet’s top cover. With this approach, cabinets become simple building blocks: precabled, pre-labelled and ready to use.

Benefits:
• Density and performance optimisation
• Reduced cabinet space, floor space and labour costs
• Deployment in hours rather than days
• Integral cabling according to best practices
• Multiple LAN and SAN architectures and protocols supported
• Multiple connection types supported: CAT6, CAT6a, CAT7a, CAT8, OM3, OM4 & OS2
• Enhanced physical security
• Shorter patch cords – maintains tidiness within cabinet
• Optimised airflow minimises energy consumption and risk of heat-related outages
• Reduction in required skills on site
• No pathway systems suspended from the ceiling
• Predictable design metrics make it easier to budget for growth
• Replicating and rerouting switch ports is easy
• Simplified connectivity for integrated and non-integrated switching fabrics
• Enables massive density of servers, network and storage hardware
• Suitable for HPC environments
• Integral preconfigured, pretested copper and fibre cabling infrastructure
• 86 per cent airflow optimising delivery of cold air to IT
• High-density zero U cabling system
• Reduced energy costs
• Raceway and basket attach directly to the top of the cabinet
• Colour coded, numerical labelling
• 1550kg load bearing capability
• Perforated, lockable, curved steel front and rear doors
• Energy efficient cooling

For further information get in touch with Reichle & De-Massari UK on +44 (0) 203 693 7595 or visit


Damage Control

Threats to cyber security are now unfortunately commonplace, but that doesn’t mean threats to physical security within the data centre are any less prevalent. Mike Pickup, technical lead and quality assurance manager at HESCO, discusses data centre growth and the implications to physical security that come with it.


There are numerous reports and studies which indicate a growing change in data centre dynamics, some predicting a slow decline in the number of centres, fuelled by the migration from small on-premise facilities to ‘mega data centres’. Regardless of this prediction, the physical protection of sensitive and commercially valuable data is essential, as the opportunity cost of losing or having your data compromised is significant. The protection of such facilities principally falls into two distinct areas, cyber and physical security, which ideally should complement each other as much as is feasibly or practically possible. Cyber security is designed to mitigate an attack which is often aimed at inflicting damage or expropriating information for nefarious purposes. The systems in place are monitored, updated or improved as often as is required, whereas unfortunately the physical security aspect does not typically receive the same level of attention, often due to installation, time and through-life costs. The provision of an appropriate level of ‘joined up’ security commensurate with the level of the threat/risk is paramount to a facility’s ability to function securely.

From a physical security aspect, there are a number of options to consider when designing a protective layered system; the first being the determination of the threat, its severity and the level of risk associated with it. This in turn helps to develop the appropriate solution, as quite often each site and location may be different.

“Cyber and physical security should complement each other as much as is feasibly or practically possible.”

It’s all in the design

From a design perspective, it is often far cheaper to design security into a facility during construction than to retrofit it. This also allows for future-proofing your installation for subsequent improvements, as the threat will undoubtedly change over time. It is this evolving threat which motivated the development of the HESCO Terrablock range of products: they are all HVM rated, surface mounted, and provide either a temporary or permanent barrier option for the user. They provide a rapid and cost-effective solution which can be tailored to suit the level of threat and the available real estate. There are many aspects of security which need to be considered from both an internal and an external point of view; it is a complex problem to solve, but to make these measures effective, they need to be undertaken as part of a layered, ‘joined-up’ solution.

A common and quite sensible approach is to begin with the outer perimeter and work inwards towards the facility itself. A concept which is becoming more recognised is Crime Prevention Through Environmental Design (CPTED), a systematic approach to designing and manipulating the surrounding environment to positively influence criminal behaviour. There are three overlapping strategies to the concept: natural access control, surveillance and territorial enforcement. Although CPTED is aimed more at reducing criminal activity, it could also help to mitigate a terrorist-type attack where the aim is denial of service rather than financial gain.

Best line of defence

In terms of physical security, this can be separated into two aspects: the use of hardware to deter and deny the intruder, and the processes and procedures in place to monitor and control those measures. If the threat is considered to be from an explosion resulting in blast and fragmentation from, say, a VBIED, or from a physical attack to damage or destroy the facility, then an effective perimeter solution offset from the facility will be key. This not only helps to create standoff, which is one of the simplest and most cost-effective methods of mitigating the effects of an explosion, but it creates a clear delineation between the facility itself and the publicly accessible areas. The exclusion space can then be more effectively and easily monitored with CCTV and other electronic surveillance and detection systems. This space will also allow for access control systems to be put in place, such as vehicle-rated bollards and other entrance control systems, as well as flood lighting. In addition, the compartmentalisation of buildings within a facility should be considered, as this creates separation between structures, thus building functional redundancy into the system. As an added precaution

these buildings should ideally have their own life-support systems, which unfortunately will incur additional cost during construction, as well as through-life costs.

Controlled access

In terms of process and procedure, the tracking of personnel in and out of the facility should be strictly controlled, and if access is controlled electronically, the use of biometric identification should be considered to reduce the risk of a stolen pass being used. All deliveries should be pre-booked, recorded and monitored, and all visiting personnel escorted and monitored at all times. The issuing of passes should be closely controlled, ensuring not only that authorised personnel can access only their relevant areas, but also that people who have left their employment do not retain use of their pass. As with most solutions, the maintenance, rehearsal, monitoring and auditing of the security in place must be undertaken on a regular basis to ensure its effectiveness. This will also provide valuable insight into how the threat may be developing or changing, which in turn allows for any enhancements or changes to be made. The physical protection of both the data centre and the electronic data within it is critical, especially considering the opportunity cost of losing or having your data compromised. Not only is this commercially damaging in itself, but the subsequent trial by media can be very damaging to the brand and reputation of the company holding that data. An effective and adaptive layered approach to security is key, combined with the continued review and rehearsal of policies and procedures.


With ever tightening regulatory controls and compliance requirements (e.g. PCI DSS, FCA, GDPR, WebTrust), the focus on information security and data protection has never been greater. However, physically securing, controlling and reporting on access to IT equipment is often overlooked or a secondary concern. To help organisations mitigate the risk of physical data breaches, RackANGEL from EDP Europe Limited provides a multi-layered security solution to protect IT equipment installed in 19” racks.

• Retro-fitting rack security and access control solution that is cost effective, flexible and easy to implement.
• Provides real-time auditing that tracks and reports on activities and events.
• Issues alerts when unauthorised events are detected.
• Biometric access card compatible with Mifare, HID, iClass, DESFire & Legic CTC.
• Provides dual level authentication without the expense of replacing existing card reader infrastructure.
• Eliminates the need for storing and managing biometric data.
• Enhance the rack security system with in-rack CCTV surveillance cameras.
• Provides real-time or historical footage of both authorised and unauthorised events.

RackANGEL Proactively Protecting Your IT Rack Environment

Call Us On +44 (0)1376 510337 or E-mail EDP Europe Limited

Unit 4 Europa Park - Croft Way - Witham - Essex CM8 2FN -

data centre acceleration

Speed It Up

Could FPGA-based NICs be the solution to data centre slowdowns? Alok Sanghavi, senior product marketing manager at Achronix Semiconductor Corporation, gives us his insight.


Data centre acceleration has understandably become a cause in need of innovative solutions. With the use of cloud services continuously expanding, growth in Big Data analysis, and a general increase in the amount of unstructured data to be handled, data centres are urgently looking for ways of moving the vast quantities of data they deal with each day in a more resource-efficient manner. One of the principal tenets underpinning the scalability of data centre architectures in the last several years has been the reliance


on commodity rather than custom hardware. The focus has been on buying off-the-shelf hardware and customising its function with software, while minimising power usage and offering high performance and a high number of ports. At the same time, the primary approach has been to offer solutions that allow data centre architects to leave existing infrastructure (in particular, power infrastructure and ventilation systems) in place while simply swapping out components, in a manner that allows for continual resource reuse and performance enhancements.

Of course, it is impossible to avoid the requirement for some degree of flexibility in these architectures. The real question is how much, and how this can be implemented in a way that does not affect overall efficiency, performance or scalability.

CPU orientated hardware

A popular approach to solving this problem has been to lean on technologies such as Intel’s Xeon CPUs. Alongside the benefits of the wide support for Intel’s x86 architectures (and the


broad base of software available for Xeon), these devices have enabled ‘just enough’ software-based personalisation to provide the flexibility data centres require to make best use of their existing systems. Demand has been high, and Intel has done very well out of the Xeon range. However, it is now evident that the needs of data centres are starting to outstrip what this and other similar product ranges can offer. The disadvantage of using commodity, CPU-oriented hardware is clear: control-plane functions and admin/protocol instructions are well and appropriately handled by CPUs. However, CPU architectures struggle to remain efficient when faced with bit-intensive, packet-based processing (such as is found at layers four and below). In other words, CPUs are good at handling the word/block structures at the network, transport and higher layers, but not the kind of throughput found at the lower OSI layers. As a result of these fundamental architectural limitations, data centre architects now find themselves struggling to maintain acceptable levels of scalability, performance and power consumption. Data centres need an approach that can combine the best of both worlds: re-definability combined with the speed of dedicated network hardware. A potential solution might be to develop a custom multi-core CPU with hardware accelerators. But of course this goes against the ethos of ‘low cost, commodity parts’.

Field-programmable gate arrays (FPGA)

Enter stage left, the humble FPGA. The best solution to the issues outlined above is the use of a NIC card incorporating an extremely

high performance FPGA. Such a NIC could support custom bit-intensive tasks in a re-definable, flexible way, while delivering many of the advantages of a hard-wired, dedicated, custom-built solution. As demonstrated by the recent implementation of an Achronix HD-1000 FPGA into a NIC (the Achronix PCIe Accelerator board), if data centre architects are able to access a programmable NIC based around an appropriately high performing FPGA, they can free CPUs from the burdensome task of handling pipeline executions and memory accesses, and instead address system memory directly in order to handle protocol stack processing and physical layer transactions. This structure is in stark contrast to the approach of existing NIC solutions, which unfortunately burden the system software stack with the processing load for RoCE, impacting both power and overall system performance.

An ideal solution?

Naturally, plenty of companies have considered the applicability of flexible logic to this problem. But the solution proposed here demonstrates a degree of performance, memory bandwidth and memory density far in excess of other solutions. The FPGA at the heart of this specific NIC has a number of hardened cores for memory management and L1/L2 Ethernet functions: six DDR3 controllers, two 10/40/100G Ethernet MACs and two PCIe Gen 3 controllers. As a result of having such a full-featured core, the NIC discussed here is able to demonstrate capabilities that present an almost ideal solution for data centre architects struggling to handle data traffic efficiently. It can handle 100Gbps of DDR

“CPU architectures struggle to remain efficient when faced with bit-intensive, packet-based processing.”

bandwidth and 64Gbps of PCIe bandwidth – enough to support 40GE NFV applications and high performance OVS offload. And at the same time as supporting regular networking and tunnelling protocols for conventional north-south communications, this NIC can use RoCE/iWARP to bypass CPU overhead for east-west transactions between servers. Generally speaking, future efforts to shape traffic efficiently within data centres will increasingly depend on high performance FPGAs. FPGAs now present the best way to maintain the versatility and flexibility of software-based solutions while avoiding investment in totally custom hardware. This approach allows data centre architects to continue to optimise their use of the commodity hardware they have invested in, avoid expensive re-architecting of their systems, and continue to service the world’s ever-growing data needs.

multi-cloud strategy

Adapt to Change

When adopting a multi-cloud strategy, it is important to ensure your private infrastructure is ready. Stephen Hampton, CTO at Hutchinson Networks, highlights what organisations should be taking into consideration when it comes to adapting their architecture.


The adoption of cloud is accelerating, and focus is moving to hybrid and multi-cloud migration strategies. This approach allows organisations to choose, on an application by application basis, which environment fits their needs best: private cloud IaaS, public cloud IaaS, SaaS or PaaS. However, many enterprises do not consider the internal estate and how the on-premise architecture also needs to adapt to cloud. Hutchinson Networks recommends three considerations for organisations optimising internal environments for public cloud: cloud connectivity requirements


over the enterprise WAN; infrastructure security beyond the perimeter firewall and thirdly, the key ingredients in building a true private cloud.

Cloud optimised WAN

The enterprise WAN has, until recently, been a mixing pot of transport technologies – dark fibre, L2 and L3 Ethernet, frame relay, MPLS and Internet VPN. This gave system administrators the flexibility to engineer complex bespoke solutions to meet the specific needs of their business (or not, depending on the quality). These solutions are often expensive, demanding

considerable bespoke engineering (QoS, redundancy, WAN acceleration and encryption), and offer poor service, with little or no optimisation for cloud. Another key consideration in enterprise WAN is Internet breakout. For many organisations, Internet breakout has been centralised, typically through a head office or data centre, allowing for centralised security policy and cost savings on local circuits. As cloud applications are accessed over the Internet, consumers at branch offices are increasingly subjected to additional latency and low bandwidth links. As such, the following guidelines should be considered


when architecting a modern enterprise WAN:

• Local Internet breakout: This provides a shorter route to SaaS, IaaS and PaaS services based in the cloud.
• SD-WAN: SD-WAN provides organisations with otherwise inaccessible functionality such as transport independence and ZTD (Zero Touch Deployment). Additionally, many SD-WAN vendors have optimised connectivity to cloud services, as well as optimal routing for IPsec VPNs.
• Cloud transit: To secure high bandwidth to cloud environments, companies should select a provider such as AWS Direct Connect, Azure ExpressRoute or Fabrix On-Net, enabling them to connect directly to a port on their network fabric.
• High speed Internet pipes versus thin private circuits: Combining local Internet breakout with SD-WAN means that Internet-based IPsec VPNs are now a viable alternative to private circuits, even for critical traffic like voice. Enterprises should weigh the high cost of private circuits against the relatively low cost of high speed Internet pipes.

Enterprise security

The traditional security perimeter is becoming increasingly irrelevant. This is being driven in part by cloud, but also by changes in the way we work, such as mobility, where users on the outside are accessing services within the private cloud, while users on the inside are consuming public cloud on the outside. As a result, security solutions and services are simultaneously moving inwards from the perimeter to the endpoint and outwards from the perimeter to the cloud. The perimeter firewall no longer provides

sufficient protection against a hardened attacker targeting a particular environment. When defining security architecture for hybrid and multi-cloud, enterprises should consider the following:

• Federated Single Sign-On: Using solutions such as F5 APM (Access Policy Manager), organisations can federate user authentication across the internal estate, as well as across a range of SaaS, IaaS and PaaS solutions.
• Endpoint security: As the endpoint moves with the user, host security becomes vital. New solutions, such as Cylance, go beyond traditional anti-virus and anti-malware to provide protection at the host level.
• Cloud security: While local Internet breakout can provide performance benefits, it also presents a challenge, as security policy can no longer be centralised in a data centre. Solutions like Cisco Umbrella tackle this problem by applying URL filtering and anti-malware at the point of DNS resolution.
• Anti-DDoS: With ever-larger botnets, the frequency and scale of DDoS attacks are steadily rising. When these attacks target Internet circuits, they can impact users’ access to cloud services. The most effective way to protect against DDoS attacks is from within the Internet core, using cloud security services such as F5 Silverline.
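The DNS-resolution-time filtering idea can be sketched generically. This is an illustrative toy, not Cisco Umbrella’s actual mechanism; the blocklist entries and sinkhole address are invented for the example:

```python
# Generic sketch of DNS-time policy enforcement: before resolving a name,
# check it against a blocklist; blocked names resolve to a sinkhole address.
BLOCKLIST = {"malware.example.net", "phishing.example.org"}  # hypothetical entries
SINKHOLE_IP = "0.0.0.0"

def resolve_with_policy(name: str, real_resolve) -> str:
    """Apply policy at resolution time; fall through to the real resolver if allowed."""
    if name.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP          # deny: client receives the sinkhole address
    return real_resolve(name)       # allow: delegate to the upstream resolver

ip = resolve_with_policy("malware.example.net", lambda n: "192.0.2.10")
print(ip)  # 0.0.0.0
```

Because every client already asks the resolver before connecting anywhere, enforcing policy at this point covers branch offices with local Internet breakout without backhauling traffic to a central firewall.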

True private cloud

Private cloud is a key component of hybrid and multi-cloud. Only 10% of internal IT workloads represent true private cloud; most in fact simply provide virtualisation. A true private cloud will also involve infrastructure automation, a user self-service portal and utilisation-based billing.

“Many enterprises do not consider the internal estate and how the on-premise architecture also needs to adapt to the cloud.”

Below are some technologies that organisations should consider when designing private cloud platforms.

• Infrastructure APIs (Application Programming Interfaces): APIs enable administrators to configure infrastructure programmatically, using orchestration tools, REST interfaces and languages such as Python, often driven by automation tools such as Ansible.
• Software defined infrastructure: Software defined, or centralised controller based, infrastructures provide a single point of configuration for each component. When combined with northbound APIs, they radically simplify automation within private cloud environments.
• Orchestration engines: To fully realise the benefits cloud computing can offer, enterprises need to consider orchestration engines. These products will automate the individual elements of the infrastructure (network, storage and compute), collect infrastructure inventory, provide billing information and, in some cases, also provide a user self-service portal.
• DevOps: Enterprises can close the digital skills gap in their organisation by nurturing in-house DevOps skills, giving them the flexibility to customise or even develop custom automation, billing and front-end self-service tools.
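As a minimal sketch of the API-driven approach described above, the following prepares an authenticated REST call to an infrastructure controller. The controller URL, `/vlans` endpoint and payload shape are hypothetical, not any specific vendor’s API:

```python
import json
import urllib.request

API_BASE = "https://infra.example.local/api/v1"  # hypothetical controller endpoint

def make_vlan_payload(vlan_id: int, name: str) -> bytes:
    """Build the JSON body for a hypothetical 'create VLAN' call."""
    return json.dumps({"vlan": {"id": vlan_id, "name": name}}).encode()

def create_vlan_request(vlan_id: int, name: str, token: str) -> urllib.request.Request:
    """Prepare an authenticated POST; urllib.request.urlopen(req) would send it."""
    return urllib.request.Request(
        f"{API_BASE}/vlans",
        data=make_vlan_payload(vlan_id, name),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )

req = create_vlan_request(120, "storage-replication", token="example-token")
print(req.get_method(), req.full_url)  # POST https://infra.example.local/api/v1/vlans
```

In practice the same call would typically be wrapped in an Ansible module or orchestration workflow rather than issued by hand, which is what makes the configuration repeatable.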

Key takeaways

Hybrid and multi-cloud strategies are not just about public infrastructure. Enterprises must consider the implications for their private environments too. Organisations have to think through their cloud optimised WAN, security beyond the perimeter, and private cloud automation and orchestration as part of their hybrid and multi-cloud strategies.


Know Your Worth

David Trossell, CEO and CTO of Bridgeworks, explains the importance of knowing the value of the data you have, before making any changes in preparation for GDPR.


GDPR is an unstoppable train. It is no good burying your head in the sand thinking it will not affect your data centre business on the basis that you’re based outside the EU. Anyone who holds data on European Union citizens is required to comply with the regulations. So, it doesn’t matter


whether you are an American company or an Indian firm; GDPR has a global reach. Time is also running out rapidly; the regulations come into force in May 2018, and so there’s no time to waste. If you haven’t started the process yet, then start saving up for the fines because these are not going to be cheap. For example,

last year, TalkTalk’s £400,000 fine for security failings was big, but under GDPR the company could have been given a financial penalty many times higher than this figure – to the tune of a whopping £59 million. While these potential fines are significant and potentially damaging from both a financial


“GDPR is going to have a dramatic effect on data centres before and after the deadline, so several key decisions have to be made regarding data discovery.”

and reputational perspective, there are still many uncertainties about exactly what you should do. What are suppliers doing to help SME customers, other than trying to frighten them? After all, smaller companies won’t have the revenue that large corporations can draw on to keep them from going under.

Arguably, the best insurance policy against calamity is to invest in GDPR compliance now – whether you are a data centre or a customer of one. On either side of the customer-supplier fence, the impact of GDPR could be devastating if preparations are not made for it now. So suppliers – vendors – should be assisting their customers, both large and small, with GDPR.

Corporate challenge

SME customers aren’t the only ones that need to be concerned about getting ready for GDPR. The larger the organisation, the larger the problem. Just think how many customers some of the large financial organisations have, and how far back their records go. How long have you been with your bank, pension provider or insurance company? All that data has to be found, read and catalogued. Some companies are looking at hundreds of thousands of archived tapes. GDPR is going to have a dramatic effect on data centres before and after the deadline, and so several key decisions have to be made regarding data discovery. Are they going to outsource it and then bring the result back in-house, or are they intent on running the discovery process in-house? Once in-house, will the storage and infrastructure be compliant now that all the key data is in one place? All this will create one of the biggest challenges for IT, especially when addressing legacy IT such as archived tape; determining what data is stored in these archives, and what it might mean to an organisation, is a real challenge. Knowing where personal information lies is fundamental, and search times need to be deterministic to meet GDPR requirements.

Saving costs

Implementing an information management solution may save on data centre and management costs, according to Jim McGann, VP marketing & business development at Index Engines. Many organisations can free up 30% of their data, allowing them to manage their data more effectively. Organisations can gain positive results by cleaning their data, but public companies can’t just delete data randomly, as there are regulatory compliance issues. Organisations need to ask whether the files have business value or any regulatory compliance requirements; if there is no legal reason for keeping the data, then it can be deleted. Some firms are also migrating their data to the cloud to remove it from their data centre. As part of this process they are examining whether the data has any business value, to inform their data migration decisions. GDPR adds yet another layer of regulation upon what already exists, so organisations, including data centres, need to know and consider what lies within their files.

Better together

Bridgeworks helps organisations do this by gathering and moving data from anywhere in the world. This movement of data allows organisations to understand what data they actually have. While solutions such as PORTrockIT can enable the secure acceleration of data between different points for indexing, storage and back-up, it isn’t a solution that can deliver GDPR compliance in isolation. However, with storage provider Index Engines (alongside Argo), Bridgeworks has built a stack to help organisations find a logical starting point in preparation for GDPR.

cabling standards

Able Cable

Valerie Maguire, director of standards and technology at Siemon, provides an overview of ISO/IEC and TIA’s new data centre-specific cabling standards.


Today’s data centres are facing unprecedented pressure to support the bandwidth needs of cloud-based business and storage solutions, video and online gaming applications, and emerging database technology. Ever increasing data traffic, especially machine-generated data from IoT devices, is driving the need for more and faster servers, core network devices and network cabling. In the data centre specifically, Ethernet speeds are now migrating from 10Gb/s to 40Gb/s and 100Gb/s. It is therefore critical for the cabling infrastructure to be correctly specified, designed and administered to best support networking and storage equipment connections. In response to these developments, the joint technical committee of the International Organisation for Standardisation


and the International Electrotechnical Commission (ISO/IEC JTC 1) is currently working on updating its data centre-specific cabling standard whilst the Telecommunication Industry Association (TIA) TR-42 Telecommunications Cabling Systems Engineering Committee has recently approved its revised data centre-specific cabling standard for publication.

“Ever increasing data traffic is driving the need for more and faster servers, core network devices and network cabling.”

Hard at work

The popular ISO/IEC 11801, Edition 2.0, ‘Information Technology – Generic Cabling for Customer Premises’ standard has undergone a major revision, and as part of this effort the content of the current ISO/IEC 24764, Edition 1.0, ‘Information Technology – Generic Cabling Systems for Data Centres’ standard has been revised. ISO/IEC’s revision has been published as ISO/IEC 11801-5, ‘Information Technology – Generic Cabling for Customer Premises – Part 5: Data Centres’. TIA’s updated data centre cabling standard was approved for publication as ANSI/TIA-942-B, ‘Telecommunications Infrastructure Standard for Data Centres’, in June. Thankfully, the work of both organisations is very well harmonised, and it is important to note that both documents only address specifications for generic data centre and computer room cabling. The most important revisions include:

• Recognising the availability of higher performing balanced twisted-pair cabling, wideband multimode fibre cable, and parallel optical fibre transmission schemes,
• Increasing the minimum grade of balanced twisted-pair and optical fibre cabling allowed in a standards-compliant installation, and
• Incorporating other guiding parameters, such as the accommodation of deeper and wider cabinets, practices to avoid microbends in installed optical fibre cables and cords, and labelling and cabling management to facilitate patching.

A closer look at copper cabling

In the last year, ISO/IEC specifications for class I (assembled from category 8.1 components) and class II (assembled from category 8.2 components) cabling, and TIA specifications for category 8 cabling, have been published. These new media types are intended to support the IEEE 802.3bq-2016 25G/40GBASE-T application over 30-metre channels containing two connectors, specifically for ‘edge’ (i.e. server to switch) deployment in data centres. Naturally, the reduced length/reduced connector topology has been incorporated into both ISO/IEC 11801-5 and TIA-942-B. In


addition, ISO/IEC 11801-5 adds class I and class II media, and TIA-942-B adds category 8 media, to the list of recognised cable types. In acknowledgement of the importance of supporting 10Gb/s transmission speeds in the data centre, the minimum grade of recognised twisted-pair cabling in an ISO/IEC 11801-5 compliant installation is now category 6A/class EA, and the minimum grade of recognised twisted-pair cabling in a TIA-942-B compliant installation is now category 6A.

And what about fibre?

Looking at fibre optic cabling technology, ISO/IEC 11801-5 and TIA-942-B incorporate extensive guidance related to the use of array connectivity, such as multi-fibre MPO-12, MPO-16 and MPO-32 interfaces. This is being done to support parallel transmission schemes employed in new Ethernet multimode optical fibre applications, such as 100GBASE-SR4 and 200GBASE-SR4, which support 100Gb/s and greater speeds over distances of up to 100m using OM4 optical fibre cabling, and the emerging 200GBASE-DR4 and 400GBASE-DR4, which support 200Gb/s and greater speeds over distances of up to 500m using singlemode optical fibre cabling. Parallel signal transmission is a significantly more economical alternative to wavelength division multiplexing and relies on the use of more than just two optical fibres to achieve a given data rate. For example, four fibres transmitting at 25Gb/s and four fibres receiving at 25Gb/s is a recognised parallel transmission scheme for realising 100Gb/s bandwidth. These emerging solutions will provide much needed flexibility for core network connections, especially in hyper-scale data

centres. The minimum grade of recognised multimode cabling in an ISO/IEC 11801-5 or TIA-942-B compliant installation is 850nm laser-optimised 50/125µm OM3. TIA-942-B additionally recommends the use of OM4 or OM5 multimode fibre.
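The lane arithmetic behind these parallel schemes can be checked with a one-line helper (a trivial sketch, using the lane counts and rates described above):

```python
# Aggregate data rate of a parallel optical transmission scheme:
# each direction uses N fibres (lanes), each carrying R Gb/s.
def aggregate_rate_gbps(lanes: int, lane_rate_gbps: float) -> float:
    return lanes * lane_rate_gbps

# 100GBASE-SR4: four fibres transmitting and four receiving at 25 Gb/s,
# i.e. eight fibres of an MPO-12 in use per duplex link.
print(aggregate_rate_gbps(4, 25))  # 100.0
```

The same helper illustrates why the higher-speed '-DR4' variants still fit familiar connectors: raising the per-lane rate, rather than the lane count, multiplies the aggregate rate without needing more fibres.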

Network architecture updates

In addition, both standards also provide guidance on distributor connection schemes and on various network architectures. ISO/IEC 11801-5 and TIA-942-B provide detailed guidance on the use of end of row, middle of row, and top of rack distributor connection schemes. A comprehensive overview of various networking architectures includes the traditional three-tier data centre switch architecture as well as flattened fabric architectures, such as ‘fat-tree’ and ‘full mesh’, which provide low-latency and high-bandwidth communication between any two points in a switch fabric.

Final word

Both standards will provide valuable information on data centre and network design, facility planning, and cabling system specification to maximise quality of service and return on infrastructure investment, and should therefore be added to existing standards reference libraries.

next issue

Modular systems


Next Time As well as its regular range of features and news items, the December issue of Data Centre News will contain major features on modular systems and UPS. To make sure you don’t miss the opportunity to advertise your products to this exclusive readership, call Kelly on 01634 673163 or email


Projects & Agreements

Equinix closes acquisition of Itconic Equinix has now closed the transaction for the purchase of Itconic, a data centre, connectivity and cloud infrastructure solutions provider in Spain and Portugal, and Cloudmas, an Itconic subsidiary that is focused on supporting enterprise adoption and use of cloud services. Equinix purchased the companies in an all-cash transaction totalling €215 million (approximately $259 million) from The Carlyle Group. The transaction includes five data centres in total in Madrid, Barcelona, Seville and Lisbon, expanding Platform Equinix into Iberia. The acquisition of Itconic further strengthens Equinix’s position in Europe and will extend its footprint into two new countries within the region. The acquisition adds approximately 322,000 gross square feet to the Equinix International Business Exchange (IBX) data centre portfolio. With the current IT transformation underway, enterprises increasingly require low-latency network connectivity, access to cloud service providers in top global markets, and interconnection with customers and partners across their digital supply chain to run their corporate IT. Equinix’s global data centre footprint now enables businesses in Spain and Portugal to evolve from traditional businesses to ‘digital businesses’ with the ability to globally interconnect with people, locations, cloud services and data. For further information visit:

EdgeConneX announces partnership with PacketFabric to bring next-gen networking to edge data centres EdgeConneX has announced a new partnership with PacketFabric to deploy its SDN-based platform across the EdgeConneX portfolio of Edge Data Centres (EDCs). Initial deployments will be in the Portland, Phoenix, Minneapolis, and Santa Clara EDCs. Through this new partnership, EdgeConneX customers can now instantly provision highly scalable network connectivity between any two or more points across PacketFabric’s private network. With real-time connectivity between carrier-neutral colocation facilities, PacketFabric will provide EdgeConneX customers with terabit-scale network capacity delivered in seconds utilising the latest switching technologies and architectures. PacketFabric’s entirely automated SDN-based network delivers coast-to-coast connectivity to over 145 premier carrier-neutral colocation facilities across 17 US markets via its purpose-built private backbone network. Through its architecture, customers can quickly and easily procure and maintain network services in real-time, eliminating the need to deploy and manage costly infrastructure or rely on public internet access. EdgeConneX customers will also gain the opportunity to leverage PacketFabric’s advanced Application Program Interface (API) for unparalleled visibility and control over their network traffic and services, ensuring consistent and reliable performance. For further information visit:

Avast and Aircel partner to provide mobile and data security solutions to customers Avast has announced a strategic partnership with Aircel, one of India’s mobile service providers. Under this partnership, 85 million subscribers of Aircel will have access to Avast Mobile Security and Avast Cleanup (a performance optimisation application) as part of the Aircel Protect Suite. According to a recent study conducted by the Internet and Mobile Association of India (IAMAI), nearly 77% of urban users and 92% of rural users in India rely on smartphones as their primary device for internet access. The sheer amount of data shared via mobile devices, along with the increased risk to mobile traffic, has spurred Aircel to take the necessary steps to safeguard its customers’ personal information and privacy. In the United States, Avast has already partnered with all major carriers to implement safety solutions such as Avast Controls and Insights (CnI) and Avast Locator. Avast CnI helps parents protect their children from cyberbullying by alerting them to critical calls and messages sent to their children’s phones, while Locator helps consumers always know where their loved ones are and find their lost phones. These features are also available to carriers in the Indian market. For further information visit:


Colt to upgrade subsea network globally and expand Colt IQ Network to North America Colt Technology Services has announced plans to upgrade and future-proof its transatlantic, transpacific and Asian subsea cable capacity to 100Gbps, globalising its high bandwidth strategy with multiple routes that provide best-in-class redundancy and flexibility. This will enable all Colt IQ Network capabilities to be accessed around the globe, creating new opportunities for multinational companies with operations in Europe, Asia and North America. Colt will also extend connectivity into North America to meet the local and global connectivity demands of enterprises, ensuring the Colt IQ Network will cover the main cloud aggregation points around the world with multiple 100Gbps transatlantic and transpacific connections, plus additional 100Gbps links in Asia Pacific. The North American expansion will connect thirteen major telecoms and cloud hub cities in the US and Canada, including key data centres in Seattle, San Francisco, LA, Phoenix, Dallas, Atlanta, Miami, Chicago, Ashburn, Newark, New York, Boston, and Toronto, with end-to-end capabilities on Colt-owned equipment. This will further expand Colt’s connectivity reach beyond 800 data centres internationally. US data centres will be accessible from enterprise buildings on the Colt IQ Network and high bandwidth data centre-to-data centre connectivity will also be enabled within the US. For further information visit:


SailPoint and VMware partner to deliver identity governance to modern mobile workforces SailPoint has extended its existing partnership with VMware to deliver comprehensive identity governance to the VMware Workspace ONE digital workspace platform powered by VMware AirWatch technology. Through this partnership, SailPoint will extend its existing integration with AirWatch technology to support a unified and global workforce while also supporting VMware’s digital workspace strategy. “Adopting cloud and mobile technologies securely is critical to building a successful digital workspace strategy. The challenge is how to manage access to this modern workspace as employees come and go and new devices and technologies continually enter the workplace,” said Joe Gottlieb, senior vice president, corporate development for SailPoint. “Enterprises need to extend their identity governance program to include this growing number of endpoints and platforms to securely accelerate their digital transformation journey. By integrating SailPoint’s open identity platform with VMware Workspace ONE, we’re delivering a unified approach to governing today’s global and mobile workforce for mutual customers.” Joe concluded, “This partnership underscores the importance of the identity-aware infrastructure and the value of our open identity platform, which is critical to companies as they evolve into an increasingly digital workplace.” For further information visit:

Teradata teams with Dataguise to power enterprise data security and compliance Teradata has furthered its alliance with Dataguise to deliver DgSecure software to its global enterprise customer base. As a Dataguise partner, Teradata will benefit from Dataguise sales and engineering support for mutual customer accounts leveraging DgSecure for Teradata UDA and IntelliCloud deployments. Dataguise DgSecure simplifies data security regulatory compliance with an out-of-the-box solution that is administrator-friendly and eliminates the need to program. The solution allows for the identification of all personal and sensitive data, including names, identification numbers, location data, online identifiers or factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of a natural person, using a combination of machine learning and behavioural analytics so that organisations can be confident in the results. DgSecure then provides the ability to protect, monitor, and regularly audit this information. “The wave of IT security breaches continues to threaten organisations globally, moving businesses to reinforce existing defensive actions against such attacks,” said Jay Irwin, director of the Teradata Centre for Enterprise Security. “Thanks to advances in enterprise security and compliance provided by Dataguise, these concerns can be further minimised. We value our partnership with Dataguise, where the company’s data-centric audit and protection platform will add value to Teradata customer defence and data protection strategies.” For further information visit: and


BSI Cybersecurity and Information Resilience partners with Druva BSI Cybersecurity and Information Resilience is partnering with Druva as part of its cloud assurance offering to clients. Druva enhances the cybersecurity consultancy firm’s portfolio of cloud security solutions with its data management-as-a-service platform, which provides simple management and protection for data stored in the cloud, across endpoint devices, within corporate data centres and at remote locations. European businesses have seen a surge in ransomware attacks in recent years as hackers move to monetise their efforts, with many businesses forced to invest heavily to help mitigate the threat. Amongst many products and features, Druva’s technology offers clients a simplified way to restore their data to previous versions in the event of ransomware and other malware attacks. With company data now commonly distributed across individual endpoint devices, held in data centres and stored in the cloud, organisations are looking for solutions to help manage how they back up and protect their data. In the event of a ransomware attack, IT teams using Druva technology can simply clear any affected system and restore it back to a ‘known good’ state. With the right user-defined data backup policies, this can be achieved in a matter of minutes for some organisations. For further information visit:
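Druva’s internals aren’t published here, but the ‘known good’ restore idea itself is simple: keep point-in-time snapshots and roll back to the newest one taken before the infection. A purely illustrative sketch (the data, dates and function are invented for this example):

```python
# Illustrative 'restore to known good' flow (not Druva's API): keep
# point-in-time snapshots and roll an endpoint back to the newest
# snapshot taken strictly before the infection time.

from datetime import datetime

snapshots = {
    datetime(2017, 11, 1): {"report.doc": "v1"},
    datetime(2017, 11, 8): {"report.doc": "v2"},
    datetime(2017, 11, 15): {"report.doc": "ENCRYPTED"},  # post-attack
}

def restore_point(snaps, infected_at):
    """Return the newest snapshot taken before the infection timestamp."""
    good = [t for t in snaps if t < infected_at]
    return snaps[max(good)] if good else None

print(restore_point(snapshots, datetime(2017, 11, 14)))  # {'report.doc': 'v2'}
```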

Fujitsu helps keep flights on schedule at Schiphol airport Amsterdam Fujitsu technology is helping keep flights on schedule at one of Europe’s busiest airports. KLM Equipment Services BV (KES) has deployed a unified, high-availability Fujitsu Eternus storage cluster to double the speed and performance of its critical enterprise applications – ensuring a more efficient and reliable ground support service at Schiphol, Amsterdam’s main airport, ultimately helping flights to operate on schedule. To replace an old storage platform, which was plagued with performance and capacity issues, KES worked with Fujitsu and SELECT Circle Partner SJ-Solutions BV to design and deploy two clustered Fujitsu Storage Eternus DX100 systems at two sites. Designed for maximum data availability and business continuity at a competitive price, the Fujitsu solution provides high performing and scalable data storage for all of KES’ core business functions, including enterprise asset management (EAM), finance, enterprise resource planning (ERP) and other office applications used by 150 KES employees. As well as providing reliability, robustness and performance, the Fujitsu solution has already helped KES lower its energy consumption for storage by 40%, reducing electricity bills, cooling costs, and minimising its environmental impact. For further information visit:

Datawatch partners with Sage to simplify data migration and end-user data preparation Through a new partnership between Datawatch and Sage, Sage validates the Datawatch Monarch self-service data preparation platform as a solution for its accounting, HR, payroll, payments and enterprise service offerings, enabling Sage and its partners to accelerate product implementations, and end customers to simplify data preparation. Datawatch Monarch enables analysts and business users to overcome traditional data challenges by easily and quickly accessing, manipulating, enriching and combining disparate data from virtually any source (e.g. PDF reports, text files, CSV, JSON), and then quickly preparing it for analysis and reporting – without coding, manual data entry or involvement from IT. “Regardless of department, every employee must now be able to find, extract, analyse and share the data that is vital for them to fulfil their job responsibilities and drive the business forward,” said Ken Tacelli, chief operating officer at Datawatch. “Datawatch Monarch gives even novice business users the ability to streamline their accounting and HR processes and access and create trusted datasets for analytics, while upholding data governance. At the same time, implementation service teams gain the ability to quickly and easily manipulate and combine disparate data from virtually any source and prepare it for migration.” For further information visit: and


Datacentreplus completes significant investment programme in Manchester data centre MediaCity-based hosting company Datacentreplus has announced the completion of its power upgrade works, having just finished the last phase of a programme that started back in April 2017. This represents a substantial investment in the business and one that will see it serving its customers well into the future. Datacentreplus owns and manages the data centre itself and has been serving customers since 2015, offering colocation, dedicated servers and other cloud services. It now has a strong customer base and continues to grow. Operations manager, Syed Ali said, “The power upgrade was a long time in the planning and it is with great relief that it was completed on time and represents a huge investment for us in being able to provide class-leading resilience to our customers”. The works comprised upgrading the power to the data centre (tripling the amount of power available) and an upgrade to the latest Riello UPS systems with N+1 backup. Datacentreplus also took the opportunity to add more rack capacity to the data hall as a result of strong customer demand. In addition to state-of-the-art UPS systems, the data centre facilities are also fully protected through 24x7 security and resilient communications so that there is no interruption to customer service. For further information visit:


DeepL anchors neural machine translator at Verne Global’s HPC-optimised data centre Verne Global has announced that DeepL has deployed its 5.1 petaFLOPS supercomputer at the Verne Global campus. The supercomputer supports DeepL’s artificial intelligence (AI) driven, neural network translation service, which many view as the world’s most accurate and natural-sounding machine translation service. “For DeepL, we needed a data centre optimised for high-performance computing (HPC) environments and determined that our needs could not be met in Germany. Verne Global’s Icelandic campus provides us with the scalability, flexibility and technical resources we need. In addition, the abundance of low cost renewable energy and free cooling will allow us to train DeepL’s neural networks at lower cost and faster scalability,” commented Jaroslaw Kutylowski, CTO of DeepL. On the supercomputer located within Verne Global’s campus, DeepL trains its neural translation networks on collected data sets. As DeepL learns, the network leverages AI to examine millions of translations and learn independently how to translate with the right grammar and structure. Verne Global’s data centre, located on a former NATO base in Iceland, draws its electricity from hydroelectric and geothermal energy. The cool, temperate climate in Iceland enables free cooling which, combined with low cost, renewable power, means that companies can save more than 70% on the total cost of operations for their compute resources compared with less optimal locations in the US, UK and continental Europe. For further information visit:

Cloudian joins Cisco in targeting media and entertainment industry Cloudian has announced that it has expanded its partnership with Cisco. Cloudian’s HyperStore object storage software is now available as part of Cisco’s Media Blueprint, a set of IP-based infrastructure and software solutions that enables organisations to scale premium content production and distribution. The announcement is part of a wider partnership between Cisco and Cloudian in which Cloudian’s HyperStore software has achieved compatibility certification for the Cisco UCS S3260 storage server and integrated UCS Manager. Cloudian’s support for the Cisco Media Blueprint is focused on serving the media and entertainment industry with best-in-class virtualised and cloud-based solutions. HyperStore is a software-defined storage solution that meets the needs of capacity-intensive media formats with seamless scalability – from terabytes to petabytes. Users can start small and expand on demand to grow from a few nodes to a few hundred without disruption. HyperStore also makes it easy for organisations to deploy fully S3-compatible storage on Cisco servers, industry-standard servers, and on integrated Cloudian HyperStore appliances. As a Preferred Solution Partner, Cloudian has achieved Cisco compatibility certification on at least one solution, and can provide its customers with 24-hour, seven-days-a-week customer support. For further information visit:


Scale Computing collaborates with Google Cloud to remove barriers to cloud computing Scale Computing is working with Google to develop a hybrid cloud solution that makes it easy for organisations, including channel partners and MSPs, to move application workloads freely in the cloud or on-premises. The new offering, called HC3 Cloud Unity, allows an organisation’s apps to use resources in the cloud and on-prem at the same time, and enables apps solely created for on-prem to now run on Google Cloud Platform. HC3 Cloud Unity combines the private cloud capabilities of Scale’s HC3 hyperconverged platform, SCRIBE software-defined storage, and new SD-WAN capabilities with Google Compute Engine, the Infrastructure as a Service (IaaS) offering for Google Cloud Platform. HC3 also leverages Google’s recently released nested virtualisation support. With HC3 running both on-premises and in Google Cloud Platform, Cloud Unity creates a virtual LAN that bridges an on-prem local LAN with the private virtual network on GCP. This allows IT organisations to connect to the cloud in real time from their on-premises infrastructure that combines storage, compute, and virtualisation in a single solution. For further information visit:

Instinet selects Interxion for BlockMatch hosting Interxion has announced that it is working with Instinet to host the firm’s MTF, BlockMatch. Building on their long-standing relationship, Instinet will locate the matching engine of its proprietary BlockMatch multilateral trading facility (MTF) in Interxion’s London data centre campus. Using BlockMatch, Instinet’s clients will be able to trade on its matching platform with fair and equal access through Interxion’s cross-connect parity solution. This offers market participants equal connectivity and consistent speed when trading; an important step in preparing for MiFID II. Interxion will also provide Instinet with redundant GPS, Glonass and Galileo time synchronisation signals to enable BlockMatch to use highly accurate clocks to meet its MiFID II microsecond precision timestamping obligations. MiFID II is poised to become the gold standard for trading and market oversight in financial services, demanding firms make major changes to the way they operate, as well as the technology and connectivity infrastructures they depend upon. Accurate timestamping and systems resilience have to be built into a trading venue’s systems. To meet their new regulatory requirements before January 2018, trading venue operators like Instinet also need to be able to demonstrate equal market access, meaning they must provide the same latency for any client that uses a proximity hosting service or colocation cross-connect to access their matching engines.
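As a rough sketch of what microsecond-granularity UTC timestamping looks like at the application level — capturing the timestamp is the easy part; the hard part, which this snippet does not address, is disciplining the host clock to UTC via the satellite time feeds mentioned above:

```python
import time
from datetime import datetime, timezone

# Capture the current time at nanosecond resolution, then render it as
# a UTC timestamp truncated to microseconds. Clock accuracy depends on
# the host being synchronised to UTC, which is out of scope here.
ns = time.time_ns()
ts = datetime.fromtimestamp(ns / 1e9, tz=timezone.utc)
print(ts.isoformat(timespec="microseconds"))
```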

Sega Games implements Tintri enterprise cloud technology to maintain application performance Sega Games has significantly improved the stability, availability and manageability of its IT infrastructure by implementing enterprise cloud technology from Tintri. Tintri has enabled Sega Games to cure a persistent conventional storage problem that left Sega’s IT team unable to monitor and control resources consumed by Virtual Machines (VMs). In addition, the time spent by the Sega Games infrastructure team on storage operations has been reduced significantly. With more than half of Sega Games’ physical servers used to host virtualisation environments, where 2,000-3,000 VMs are running, quality of service (QoS) is vital to infrastructure stability. Previously, VMs consuming lots of storage I/O resources – or ‘Monster VMs’ – were affecting overall performance of other VMs using the same storage. Sega has purchased Tintri enterprise cloud technology for its primary and disaster recovery sites, with each unit taking only 30 minutes to configure. By moving to Tintri, Sega Games was able to implement its auto-QoS function to automatically monitor storage I/O for each VM, greatly reducing the impact of ‘Monster VMs’ and simplifying management across the entire environment. For further information visit:
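Tintri does not publish the internals of its auto-QoS function, but the underlying idea — tracking storage I/O per VM and clamping outliers so a ‘Monster VM’ cannot starve its neighbours — can be sketched in a few lines (the function and figures below are purely illustrative):

```python
# Illustrative per-VM I/O throttling (not Tintri's implementation):
# clamp each VM's IOPS demand to a per-VM ceiling so one outlier
# cannot monopolise the shared storage back end.

def apply_qos(iops_demand: dict[str, int], per_vm_cap: int) -> dict[str, int]:
    """Return the IOPS each VM is actually granted under a flat cap."""
    return {vm: min(demand, per_vm_cap) for vm, demand in iops_demand.items()}

demand = {"web-01": 800, "db-01": 2_400, "monster-vm": 60_000}
granted = apply_qos(demand, per_vm_cap=5_000)
print(granted)  # only monster-vm is clamped; the others are unaffected
```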



EDP Europe appointed UK distributor for Hubbell Premise Wiring solutions EDP Europe has announced its appointment as the UK distributor for Hubbell Premise Wiring solutions. Offering a full range of copper and fibre optic networking solutions, Hubbell’s products are designed to exceed current standards, enabling its customers to have confidence that the chosen solution will last long into the future. EDP Europe has been specialising in providing the data centre and IT industry with technology solutions for over 27 years, and this latest partnership expands EDP Europe’s product offering in the structured cabling space. Hubbell NEXTSPEED is a range of copper structured cabling systems with products fully factory tested and continuously third party verified to TIA 568-C.2 and ISO/IEC 11801 Class EA component requirements. Hubbell is also meeting new CPR regulations with its Cat 6 U/UTP, Cat 6A F/FTP and Cat 7 S/FTP installation cables achieving B2ca, Cca and B2ca classes respectively. As well as copper and fibre installation cables, Hubbell offers a full range of products including patch cords, jacks, grounding kits, patch panels, modules and faceplates. NEXTSPEED components are backed by Hubbell’s 25-year applications assurance warranty. Speaking of this new partnership, EDP Europe’s managing director Damian Stackhouse said, “As EDP Europe looks to continue its expansion into the structured cabling market, establishing a partnership with a leading manufacturer like Hubbell is vital. Their product range is extensive and offers our clients excellent performance for their networking infrastructure requirements while also achieving compliance with currently evolving cabling standards”. Stuart McIlroy, the European business manager for Hubbell Premise Wiring said, “Hubbell is delighted to be partnering with EDP on this new joint venture. 
EDP has vast distribution experience within the data centre and structured cabling sectors and we look forward to working closely with them as we expand in the UK marketplace with structured cabling solutions that have been engineered to meet the new CPR regulations”. For further information visit:

Reichle & De-Massari introduces 1U 48-port panel R&M has further increased the packing density of copper cabling in its 19in racks, with a new patch panel that accommodates 48 ports of Cat 6A ISO in a single height unit (1U). RJ45 modules can be unlocked on the front and, besides modules for copper cabling, the new 48-port patch panel can also accommodate fibre-optic connections. R&M provides LC Duplex and SC Simplex adapters. The panel is suitable for data centres and highly densified distributors in structured cabling systems. Shielded and unshielded versions are available and a reusable plastic holder with flexible lugs can be attached to the back carrier in next to no time for fast, easy cable routing. This does away with the need for cable ties and also reduces the time involved in maintenance jobs. Coloured labels and security modules fit in the plastic front for error-free management of patch panels. R&M inteliPhy infrastructure management system sensor strips may also be attached. The blind covers can be turned and reused together with label fields, with the front also offering space for the customer’s company logo. For further information visit:



The newest 25GbE data centre switch from Edgecore Networks Edgecore Networks has announced the AS7312-54XS – a 25GbE high-performance top-of-rack (TOR) or spine data centre switch, designed for high-performance computing clusters and high-frequency trading applications, as well as highly virtualised cloud environments and network provider companies. The Edgecore ECIS4500 series is a family of managed industrial gigabit Ethernet switches for upgrading existing Fast Ethernet network infrastructure to full gigabit Ethernet networks, providing higher bandwidth than legacy Fast Ethernet and reducing response times for time-sensitive applications. With powerful features, the ECIS4500 series is easy to deploy and manage, providing reliable, quality service for growing network traffic demand.
Industrial hardened design for harsh environments
• The ECIS4500 series is equipped with an IP30 metal case for harsh industrial environment conditions.
• It comes with DIN rail or wall mount options for plant or cabinet installation.
• It operates across a wide temperature range from -40°C to 75°C to meet demanding conditions.
Continuous availability
• Fast failover protection rings (ERPS) allow the network to detect and recover from incidents without impacting users.
• Rapid recovery time when problems do occur (<20ms).
• Network redundancy via LACP and spanning tree (STP, RSTP and MSTP).
• Dual DC power input and reverse power detection.

Power over Ethernet for network devices
• The ECIS4500 series features IEEE 802.3af/at to power the growing number of PoE devices such as surveillance cameras, IP phones and wireless APs.
• PoE power scheduling supports power management: users can set the time intervals during which the switch supplies power to powered devices (PDs) and powers them off.
• The ECIS4500-8P2T4F supports Ultra PoE (60W) on ports 1 and 2.
For further information visit:

Delta’s 500kVA UPS sets world’s power density record with 55.6kVA per 3U module Delta Electronics has announced the global launch of its 500kVA Modulon DPH series uninterruptible power supply (UPS). Delta’s new state-of-the-art online double conversion UPS boasts the world’s highest power density of 55.6kVA per 3U module. Thanks to its modular design, the UPS enables advanced control of power module redundancy as well as the ability to add capacity and pay-as-you-grow scalability. The Modulon DPH 500kVA UPS is the latest in its series, which also includes 75, 150 and 200kVA models. The new high-density UPS from Delta launches at a time when annual global IP traffic is expected to nearly triple over the next five years. Developments such as the rise of content-heavy applications like bandwidth-intensive video, the Internet of Things (IoT), and big data are behind these dramatic increases in traffic and the corresponding demand for greater data centre capacity. The power density of 55.6kVA per 3U module achieved by Delta’s new 500kVA Modulon DPH model gives data centre operators greater flexibility to adapt to these rapidly changing requirements. Adding capacity later is simple and economical, in contrast with traditional monoblock UPS systems, which require installing enough power for the data centre’s maximum planned load right from the start. The high power density also means less space is consumed by power infrastructure, leaving more room for revenue-generating IT racks. Highlights of the fault-tolerant design of the 500kVA Modulon DPH series UPS include power module redundancy, a dual CAN bus, onboard control logic, and self-synchronisation. An added benefit of the adaptable modular design is that critical components are hot-swappable. Parallel expansion and N+X redundancy with up to eight units are possible. Furthermore, the system’s hot-swappable architecture offers a service time around 50% faster than traditional UPS systems. 
For further information visit:
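The pay-as-you-grow arithmetic behind modular UPS sizing is straightforward; a back-of-envelope sketch using the module density quoted above (the helper function is ours, for illustration only):

```python
import math

# Modular UPS sizing: how many modules cover a given load, plus
# optional N+X redundancy. The 55.6kVA figure is the per-module
# density quoted for the Modulon DPH; the helper is illustrative.

def modules_needed(load_kva: float, module_kva: float, x_redundant: int = 0) -> int:
    """Modules required to carry the load, with X spare modules on top."""
    return math.ceil(load_kva / module_kva) + x_redundant

print(modules_needed(500, 55.6))                 # 9 modules for 500kVA
print(modules_needed(500, 55.6, x_redundant=1))  # 10 under N+1
```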


final thought

Livin’ on the Edge In today’s world there is no question our need for instant gratification is growing ever greater. Detlef Spang, CEO at Colt Data Centre Services, explains why time is of the essence when it comes to data and what organisations can do to help keep pace.


Technology is changing everything – even the way we compute things in our brains. With unlimited information at our fingertips or in our pockets, we no longer need to exercise the part of our mind responsible for remembering facts and phone numbers. Our attention spans have been slashed by multi-screen distractions. And as consumers, we’re losing our ability to delay gratification – we


want a great customer experience, first time and every time. But while our minds have quickly adapted to new technology, the ‘brainpower’ behind these new digital services has failed to evolve as quickly. Technology infrastructure is still highly centralised, with the processing (or ‘thinking’) concentrated in giant data centres rather than where it’s actually needed, such as in the device in the palm of your hand. The result may only be a delay of a few fractions of a second, but it can easily be enough to cause irritating jitters and glitches in an application, causing considerable damage to the user’s experience and to the brand itself.

Some things never change The first computers represented a revolution in our ability to process information and complex calculations. Naturally, these room-sized machines resulted in a highly centralised network architecture, a pattern that has changed little through the eras of ‘Big Iron’ and hyperscale data centres. Yet times and technology have changed. For example, we now have devices that fit into our pockets and perform such calculations a million times a second. Then there’s the highly anticipated autonomous vehicle, which is expected to generate approximately four terabytes of


data per day. That’s the same level of data consumption as 3,000 people combined. The amount of sensor data and critical local processing power required to run these machines will be so great that they become mini data centres themselves. But speed is – and always has been – key, no matter the era or the size of the facility. And this is because almost every company in existence today requires near-instant access to data in order to be successful.

The edge of tomorrow

What has changed significantly over time is the data centre model, a shift driven by an aggressive rise in the number of data-driven endpoints. Earlier this month, Gartner released its 2017 Hype Cycle for Emerging Technologies, a collection of megatrends it predicts businesses will connect with in order to thrive in the digital economy over the next five to 10 years. It was no surprise to see edge computing as one of the key platform-enabling technologies to track over this timeframe. In a statement following the Hype Cycle’s release, Mike Walker, research director at Gartner, said, “When we view these themes together, we can see how the human-centric enabling technologies with transparently immersive experiences – such as smart workspace, connected home, augmented reality, virtual reality and the growing brain computer interface – are becoming the edge technologies that are pulling the other trends along the Hype Cycle.”

This explains why many are predicting edge computing to be the next multi-billion-dollar market. Every single internet-connected endpoint (and there will be 24 billion of them by 2020) generates data, which means that behind every connected ‘thing’ must exist a data centre for it to function. For service providers and endpoint manufacturers this is tricky business, because satisfying ‘full demand’ or ‘the customer experience’ requires zero latency and 100% uptime. If an autonomous vehicle is unable to send information up to the mothership (i.e. the cloud) for data analysis and back down again in a matter of milliseconds, the outcome could be fatal. In banking, if a compliance officer is unable to quickly process and analyse data from a fraudulent transaction on a customer account, the lag time decreases the value of that data. Not only could this impact the bank’s bottom line, it’s likely to damage the customer relationship.

Edge computing solves this problem. When endpoints, applications and data are moved from centralised points to the outer layers of a network, the distance between the user (e.g. a passenger in an autonomous car) and the data narrows. This makes the delivery of crucial information seamless and almost instantaneous which, in turn, vastly improves the experience received.
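The distance argument can be put in rough numbers. The sketch below is an illustration rather than anything from the article: it estimates the best-case round-trip propagation delay over fibre for a distant central data centre versus a nearby edge node, and the distances and fibre speed used are assumed figures.

```python
# Illustrative sketch: how physical distance alone sets a floor on
# round-trip latency, before any processing or queuing is added.

FIBRE_SPEED_KM_S = 200_000  # light in fibre travels at roughly 2/3 of c

def round_trip_ms(distance_km: float) -> float:
    """Best-case propagation delay there and back, in milliseconds."""
    return 2 * distance_km / FIBRE_SPEED_KM_S * 1000

central = round_trip_ms(2000)  # assumed: a central data centre 2,000 km away
edge = round_trip_ms(20)       # assumed: an edge node 20 km away

print(f"central: {central:.1f} ms, edge: {edge:.2f} ms")
```

Even this idealised floor, about 20 ms for the distant facility against a fraction of a millisecond at the edge, is before real-world routing, congestion and server time are added, which is why shortening the physical path matters so much for latency-sensitive services.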

The time value of data

The data that an organisation holds at any given moment loses value with each second that ticks by; it will never mean the same in an hour’s time as it does now. This is why, with the proliferation of IoT, application and sensor data, businesses are starting to double down on edge computing strategies for real-time data analysis.

This doesn’t mean the end is nigh for large, centralised server farms. Not all processes (e.g. billing) are bandwidth- and latency-driven, and these facilities will continue to play a huge role in low-level processing, backup and historical data analysis. But for organisations that rely on fast and actionable insights, analytics at the edge will increasingly be the difference between a great customer experience and a poor one. For those that are conscious of their brand experience, edge is the obvious solution.
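One simple way to picture this ‘time value’ is as exponential decay. The sketch below is a hypothetical model, not something from the article; the one-second half-life is an assumption chosen purely for illustration, to compare the value preserved by an edge-speed response with what remains after a slower round trip to a central facility.

```python
# Illustrative sketch: modelling the business value of a datum as
# decaying exponentially with age, with an assumed one-second half-life.

def data_value(age_seconds: float, half_life_seconds: float = 1.0) -> float:
    """Fraction of the original value remaining after `age_seconds`."""
    return 0.5 ** (age_seconds / half_life_seconds)

print(f"after 50 ms (edge-speed response):      {data_value(0.05):.2f}")
print(f"after 2 s (centralised round trip):     {data_value(2.0):.2f}")
```

Under this assumed model an answer delivered in 50 ms retains almost all of the data’s value, while one delivered after two seconds has lost three quarters of it; the exact numbers are arbitrary, but the shape of the curve captures why real-time analysis favours the edge.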


DCN November 2017  