
data centre news

June 2018

Over the edge? Bruce Kornfeld, CMO and SVP product management, StorMagic discusses why data centre and cloud IT solutions don’t work at the Edge.

inside... Special Feature Big Data and the Internet of Things

Case Study

Angel Trains on the right track with critical infrastructure from Schneider Electric

Final Thought

Clive Partridge of Rittal discusses data centre trends and what really matters to IT managers


data centre news

Editor Claire Fletcher


A packed issue this month takes in some of the most fast-paced and crucial sectors driving change in the market right now. Looking at the world of Big Data and IoT, Martin Ewings, director of specialist markets at Experis, outlines six key skills employers look for in Big Data hires, whilst Jason Kay, CCO at IMS Evolve, specialist in scalable IoT and machine management solutions, discusses how businesses can best utilise digitisation, IoT and automated outcomes to improve multiple core business objectives. Also speaking out here is James Petter, VP, EMEA at Pure Storage, who discusses how to build a data-centric architecture for successful AI adoption, as artificial intelligence explodes onto the business scene. Lastly here, Joe Nagy, global director, product marketing and strategy at Software AG, examines eight ways to make the most of IoT.

Another big subject tackled this month is edge computing, as Bruce Kornfeld, CMO and SVP product management at StorMagic, discusses why data centre and cloud IT solutions don’t work at the Edge. On the network security side of things, Andrew Lintell, regional vice president, northern EMEA at Tufin, discusses the five network security pitfalls that could be putting your organisation at risk. The rise of the smart city is also covered, as cities like London lead us into a new era of city design, as well as a new era in data centres; Greg McCulloch, CEO of Aegis Data, examines how exactly they will need to change.

Other regulars this month include our Final Thought section, where Clive Partridge, product manager IT infrastructure at Rittal, discusses data centre trends and what really matters to IT managers, and our Case Study, where Angel Trains selects Schneider Electric’s critical infrastructure to ensure its data centre stays on track. Andy Wren, head of IT services at Angel Trains, explains how.

Finally this month, the Meet Me Room is populated by Jason Collier, co-founder of Scale Computing. Describing himself as the ‘human equivalent of a Swiss army knife’, Jason is a man of many talents, specialising in business development, fundraising, mergers and acquisitions, sales, sales engineering, network architectures and compute cluster development, to name but a few. Jason has taken the time to give DCN his industry insight, as well as what he expects the future will hold.

Daniel J Sait, Editor in chief


Sales Director Ian Kitchener – 01634 673163

Studio Manager Ben Bristow – 01634 673163

Editorial Coordinator Jordan O’Brien – 01634 673163

Designer Jon Appleton

Business Support Administrator Carol Gylby – 01634 673163

Managing Director David Kitchener – 01634 673163

Accounts 01634 673163

Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173

The editor and publishers do not necessarily agree with the views expressed by contributors, nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2018. All rights reserved.

June 2018 | 3



in this issue…


Regulars

03 Welcome

Packing them in!

06 Industry News

Are cyber criminals losing interest in ransomware?

10 Centre of Attention

Is Azure Stack helping fill the hybrid cloud skills gap? Simon Hendy of Pulsant investigates.


Meet Me Room

Jason Collier of Scale Computing gives us his industry insight and what he expects the future will hold.



Features

38 Projects and Agreements

Colt Data Centre Services pledges support to the Tech She Can charter.

44 Company Showcase

Find out how Western Digital is driving capacity and improving TCO for cloud and enterprise data centres.

46 Final Thought

Clive Partridge of Rittal discusses data centre trends and what really matters to IT managers.


Case Study

Angel Trains is on the right track with critical infrastructure from Schneider Electric.

30 Smart Cities

How will data centres need to change to accommodate the rise of smart cities? Greg McCulloch of Aegis Data gives us his insight.

32 Swarm Computing

Pavel Bains of Bluzelle discusses ‘swarming’: storage for the decentralised era.

34 Network Security

The five network security pitfalls that could be putting your organisation at risk, as discussed by Andrew Lintell of Tufin.

36 Edge Computing

Bruce Kornfeld of StorMagic discusses why data centre and cloud IT solutions don’t work at the Edge.


SPECIAL FEATURE: Big Data and the Internet of Things

20 Leo Craig of Riello UPS discusses how data could be the solution to help the NHS tackle the country’s future healthcare needs.

22 Martin Ewings of Experis outlines six key skills employers look for in Big Data hires.

24 James Petter of Pure Storage discusses how to build a data-centric architecture for successful AI adoption.

26 Joe Nagy of Software AG examines eight ways to make the most of the Internet of Things.

28 Jason Kay of IMS Evolve outlines how your business can best utilise digitisation, IoT and automated outcomes to improve multiple business objectives.

industry news

‘Fake data’ will make banks vulnerable, according to Accenture

A new report from Accenture has found that many banks have not invested in the capabilities to verify the validity and accuracy of their data. According to the report, Banking Technology Vision 2018, banks have always held a large volume of confidential data and are increasingly adding data from external, unstructured sources. However, while more than nine in 10 (94%) of the bankers surveyed said they are confident in the integrity of the sources of their data, the report found that half of the bankers aren’t doing enough to validate and ensure data quality: 11% trust their data is reliable but don’t validate it; 16% try to validate their data but aren’t sure of the quality; and 24% validate the data but recognise they should do a lot more to ensure the quality.

In addition, while five in six bankers (84%) said they increasingly use data to drive critical and automated decision-making, more than three-quarters (78%) of those surveyed believe that these automated systems create new risks, such as fake data, external data manipulation and inherent bias.

“Inaccurate, unverified data will make banks vulnerable to false business insights that drive bad decisions,” said Alan McIntyre, senior managing director and head of Accenture’s banking practice. “Banks can address this vulnerability by verifying the history of data from its origin onward, understanding the context of the data and how it is being used, and by securing and maintaining the data. Given that four in five bankers that we surveyed said they are basing their most critical systems and strategies on data, it’s critical that the data can be verified and validated.” Accenture.

Airports are ill-equipped to deal with a major cyber attack, PA Consulting Group’s report reveals

PA Consulting Group’s latest research found that airports are ill-equipped to deal with a major cyber attack. The report, ‘Overcome the silent threat’, says that the emergence of a hyper-connected model, where passengers in airports want fast internet and digital engagement with airlines and retailers, is increasing the opportunities for cyber criminals to exploit. There are currently around 1,000 cyber attacks each month on airport and aviation systems worldwide, according to the European Aviation Safety Agency, and according to PA’s research, airports are at a higher risk of cyber attack due to an increasing use of technologies and digital infrastructure in day-to-day operations, new data sharing obligations and greater connectivity across staff and passenger devices within airports.

David Oliver, global transport security lead at PA Consulting Group, commented, “With the EU Network and Information Systems Directive, which aims to improve the cyber resilience of the UK’s essential services, now in force, UK airports risk penalties of up to £17 million for failing to put in place appropriate cyber security measures.” PA Consulting.

85% of IT professionals think their organisation’s primary IT vendor can improve its service

Customer service is king for IT decision-makers when it comes to the key checklist of what they look for in an IT vendor, according to a new study by Cogeco Peer 1, a global provider of enterprise IT products and services. The vast majority of respondents (85%) believe that their organisation’s most prominent current IT vendor can improve its service, with only 14% stating that they are satisfied with the current service they receive. Furthermore, over seven in 10 respondents stated that cost (75%) or service (71%) is in their top three factors that matter most to their organisation when looking for an IT vendor, with around a fifth (22%) saying that service is the most important factor.

The study, which surveyed 150 IT decision-makers across several different industries including financial services, retail, higher education, business services and media, looked into the real value of service and found that when it comes to pricing, businesses are not necessarily looking for those providers willing to engage in a race to the bottom. Although price does remain the main concern, and with budgets tightening in light of uncertainty around Brexit, businesses are increasingly recognising the need for strong scalability capability to cope with the peaks and troughs they experience. Cogeco Peer 1.


UK manufacturing is top target for cyber attackers

Manufacturing has become the most attacked industry sector in the UK, representing almost half (46%) of all cyber attacks in 2017 – more than double the proportion of attacks on manufacturing across EMEA. This is according to the 2018 Global Threat Intelligence Report (GTIR) from NTT Security. The majority of attacks on UK manufacturers came from China, representing 89% of attacks on this sector. Technology organisations, in second place, were the target of 23% of attacks in the UK, with business and professional services in third place with 10% of attacks. While the finance industry was the most attacked sector worldwide, with almost a quarter (23%) of all attacks, up from 14% in 2016, it was fourth in the UK with 8%, followed by government at 5%.

NTT Security analysed data from over 6.1 trillion logs and 150 million attacks for the GTIR, highlighting global and regional threat and attack trends based on log, event, attack, incident and vulnerability data from NTT Group operating companies. China was the number one source of attacks against all sectors in EMEA during 2017. EMEA was the only region in which attacks from US sources fell behind Chinese sources, whereas in 2016 China was the ninth most prominent attack source, accounting for less than 3% of all attacks against EMEA. NTT Security.


New research from Schneider Electric: how higher chilled water temperatures can improve DC cooling

A new white paper from Schneider Electric, #227, ‘How higher chilled water temperature can improve data centre cooling system efficiency’, outlines the various strategies and techniques that can be deployed to permit satisfactory cooling at higher temperatures, while discussing the trade-offs that must be considered at each stage and comparing the overall effect of such strategies on two data centres operating in vastly different climates. It details two real-world examples of data centres in differing climates; the first is in a temperate region (Frankfurt, Germany) and the second in a tropical monsoon climate (Miami, Florida). In each case, data was collected to assess the energy savings accrued by deploying higher CHW temperatures at various increments, while comparing the effect of deploying additional adiabatic cooling.

The study found that an increased capital expenditure of 13% in both cases resulted in energy savings of between 41% and 64%, with improvements in TCO of between 12% and 16% over a three year period. Additionally, it found that by reducing the amount of energy spent on cooling, each facility’s PUE was thereby improved. Overall, the Schneider Electric study found that PUE was reduced by 14% in the case of the Miami data centre and 16% in the case of Frankfurt. Schneider Electric.
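As a back-of-the-envelope illustration of why cutting cooling energy moves PUE (the figures and function below are illustrative only, not taken from white paper #227):

```python
def pue(it_energy_kwh: float, overhead_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by IT energy."""
    return (it_energy_kwh + overhead_energy_kwh) / it_energy_kwh

# Hypothetical facility: 1,000 kWh for IT, 600 kWh for cooling and other overhead.
before = pue(1000, 600)  # 1.6

# Suppose higher chilled water temperatures cut the cooling portion of
# the overhead (say 450 kWh of it) by 40%, saving 180 kWh.
after = pue(1000, 600 - 180)  # 1.42

print(f"PUE {before:.2f} -> {after:.2f}, a reduction of {(before - after) / before:.0%}")
```

Because the IT energy in the denominator is unchanged, every kilowatt-hour saved on cooling lowers PUE directly, which is the mechanism behind the white paper’s reported reductions.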

Global IT services contracts value continued to decline in 2017, finds GlobalData

IT services deals experienced a steep decline in 2017, both in terms of the number of deals and total contract value (TCV), compared with both 2016 and 2015. While TCV saw a significant annual decline of 33.3% in 2017, to reach a value of US$61.4 billion, the number of deals announced (4,099) saw a considerable decrease of 25.6% compared with 2016, according to GlobalData. The average contract value also took a beating in 2017 compared with the previous two years. However, the average contract duration experienced a slight increase (2%) in 2017 compared with 2016, which shows that companies are still willing to enter into long-term contracts with IT services providers.

Application outsourcing contracts were at the forefront of the total number of deals signed in 2017, accounting for 38.4%. With respect to the TCV of the deals, infrastructure outsourcing contracts dominated, with a TCV of $31.8 billion. North America led in terms of the TCV of contracts announced in the infrastructure outsourcing segment, with a TCV of $16 billion, followed by Europe with a TCV of $10.6 billion. GlobalData.

Ransomware ‘gold rush’ looks finished, but threat remains

A new F-Secure report finds that ransomware attacks exploded in 2017 thanks to WannaCry, but a decline in other types of ransomware signals a potential shift in the malware’s use by cyber criminals. Ransomware attacks grew in volume by over 400% in 2017 compared with the previous year. F-Secure attributes this growth to the WannaCry cryptoworm, but notes that other ransomware attacks became less common as the year progressed, signalling a shift in how cyber criminals are using the malware.

‘The Changing State of Ransomware’ report finds that ransomware evolved considerably as a threat during 2017. Prevalent threats during the year included established ransomware families like Locky, Cryptolocker and Cerber. But it was WannaCry that emerged as the most frequently seen ransomware threat in 2017: the notorious cryptoworm accounted for nine out of every 10 ransomware detection reports by the end of the year. Yet while the WannaCry ransomware family remained prevalent in the second half of 2017, the use of other ransomware by cyber criminals seemed to decline – a phenomenon that F-Secure security advisor Sean Sullivan says points to amateur cyber criminals losing interest in ransomware. F-Secure.



Gartner survey reveals the scarcity of current blockchain deployments

Only 1% of CIOs indicated any kind of blockchain adoption within their organisations, and only 8% of CIOs were in short-term planning or active experimentation with blockchain, according to Gartner’s 2018 CIO Survey. Furthermore, 77% of CIOs surveyed said their organisation has no interest in the technology and/or no action planned to investigate or develop it.

“This year’s Gartner CIO Survey provides factual evidence about the massively hyped state of blockchain adoption and deployment,” said David Furlonger, vice president and Gartner Fellow. “It is critical to understand what blockchain is and what it is capable of today, compared to how it will transform companies, industries and society tomorrow.” David added that rushing into blockchain deployments could lead organisations to significant problems of failed innovation, wasted investment, rash decisions and even rejection of a game-changing technology.

Among 293 CIOs of organisations that are in short-term planning or have already invested in blockchain initiatives, 23% of CIOs said that blockchain requires the most new skills to implement of any technology area, while 18% said that blockchain skills are the most difficult to find. A further 14% indicated that blockchain requires the greatest change in the culture of the IT department, and 13% believed that the structure of the IT department had to change in order to implement blockchain. “The challenge for CIOs is not just finding and retaining qualified engineers, but finding enough to accommodate growth in resources as blockchain developments grow,” said David. “Qualified engineers may be cautious due to the historically libertarian and maverick nature of the blockchain developer community.” Gartner.

Blockchain adoption, worldwide Source: Gartner (May 2018)

Cloud computing top transformative technology and the main cause of mounting performance challenges

SolarWinds has released the findings of its annual state-of-the-industry study, revealing the state of today’s IT landscape. IT professionals are continuing to prioritise investments in cloud computing as they grapple with how to leverage the benefits of emerging technologies such as artificial intelligence (AI) and machine learning. The IT Trends Report 2018: ‘The Intersection of Hype and Performance’, the company’s annual report on IT trends, explores the spectrum of today’s existing and emerging technologies, and the extent to which they are disrupting IT and optimising the performance of environments as organisations progress in their digital transformation journeys.

Overall, UK IT professionals are prioritising investments in cloud computing and hybrid IT more than any other technology; 95% of respondents indicate that cloud/hybrid IT is one of the top five most important technologies in their organisation’s IT strategy today. However, the rapid adoption of new technologies has created environments that are not optimised for peak performance. Over two-fifths (42%) of the survey respondents indicate their environments are not optimised, and they reported spending 50% or more of their time reactively maintaining and troubleshooting. In increasing numbers, IT professionals appear to be addressing the challenges introduced by the cloud and hybrid IT through investment in containers: 49% of respondents ranked containers as one of the most important technology priorities today. SolarWinds.


centre of attention

Perfect partners It’s no secret partnerships can help to fill the hybrid cloud skills gap, but according to Simon Hendy, channel manager at Pulsant, the emergence of Azure Stack is making this a whole lot easier.


Where hybrid cloud evangelists see the solution as a win-win situation, other sectors of the industry see it as a compromise too far. But this is nothing to do with the customers migrating to a hybrid cloud environment; rather it’s the concern of those who should be selling it – the channel. While there are many resellers who are more than capable of selling both hardware and software, there are others who are reluctant to enter new, unfamiliar territory. On the one hand there are those who, until now, have been able to make their money selling hardware. Typically, these are concerned with profit margins per unit, and see the world of services and monthly consumption as a shaky place to be. There may even be some who see the cloud as a load of hot air.

On the other hand are those businesses founded in the cloud era and usually working to a cloud-based model. They are at home with consumption modelling and working in an environment where certain assumptions have to be made. However, when they see a solution as compelling as Azure Stack coming on the market, it amplifies their lack of hardware experience and relationships. Selling hardware is a different world, demanding logistics and transportation which, in their cloud-based existence, they have never needed.

But the fact is, neither of these groups holds the key to future success. This will go to those who can adapt to the evolving landscape around them. With the launch of new technologies, especially the new breed of hybrid cloud platforms, they have to find a way around their perceived barriers or else prepare for a downward slide. So, at either end of the spectrum it can be a real struggle. But there is still time for change.

The market in general is gravitating towards a hybrid existence, and hybrid models are becoming more attractive for a variety of reasons. One of the main drivers is increasingly stringent industry regulation, which requires organisations to be aware of where their data is hosted, and that permissions and security are in place to safeguard that data. Vendors have noticed this trend too, and are doing all they can to market hybrid as the future. The channel needs to sit up and take notice, otherwise they may well be left behind. Yes, the cloud is important, and the opex over capex argument can’t be denied;


“Seeking out a services provider who can fill the gaps in knowledge would certainly put any business back in the running.”

but with companies wanting to bring some of their assets back on premise, the popularity of a hybrid environment shows no sign of abating. So, what is the answer for those businesses wondering if they have the right skills to take advantage? We’re all aware of the value of partnerships, especially in the IT world. Seeking out a services provider who can fill the gaps in knowledge, and building up a longer-term partnership, would certainly put any business back in the running.

Then, Azure Stack may just be the silver bullet they are after. It is a seamless, single development and delivery platform that allows the channel to deliver Azure services from their own data centre, in a way that is consistent with the public Azure that they, and their customers, will no doubt be familiar with. Services can be developed in public Azure and seamlessly moved over to Azure Stack, and vice versa, saving time and expense and making operations a lot more consistent. By taking this first step, a reseller business can start to build its hybrid cloud experience, albeit with assistance. Moving their offerings towards hybrid cloud adoption means addressing the needs of customers, with the chance to explore new market opportunities and add new revenue streams.

By all accounts, Azure Stack looks set to revolutionise the cloud market. It’s time for the channel to take advantage – whether their business comes from a hardware or cloud-based tradition. Pulsant.


meet me room

Jason Collier – Scale Computing

Describing himself as the ‘human equivalent of a Swiss army knife’, Jason Collier, co-founder of Scale Computing, is a man of many talents, specialising in business development, fundraising, mergers and acquisitions, sales, sales engineering, network architectures and compute cluster development, to name but a few. Jason has taken the time to give DCN his industry insight, as well as what he expects the future will hold.

Looking back on your career so far, is there anything you might have done differently?
I wouldn’t do anything differently. My mantra is, ‘good judgement comes from experience and experience comes from poor judgement’ – we all make poor decisions in hindsight, but my choices have helped me to become the person I am today and guided me towards achievements I am really proud of, like co-founding Scale Computing with Jeff Ready.

Can you tell us about any projects you are currently working on?
I’m currently working on a fun project at home! In the US, autofill systems for domestic swimming pools are a bit naff. So, I am working on building my own system, which utilises a Raspberry Pi and cameras to control a fill/drain pipe in the pool. Essentially it is AI at home – using cameras to determine the pool level and Google TensorFlow (open source AI) to analyse the images and decide whether the pool needs to be filled or drained.
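As a rough illustration of the decision logic in a system like Jason’s, here is a minimal Python sketch. Everything in it is hypothetical (the thresholds, function names and the stand-in level estimator); a real build would feed camera frames into a trained TensorFlow model rather than the simple stub used here.

```python
# Acceptable band around the target water level, expressed as a fraction
# of the target (hypothetical values for illustration).
TARGET_LOW, TARGET_HIGH = 0.95, 1.05

def decide_valve(level_estimate: float) -> str:
    """Map an estimated water level to a fill/drain/hold action."""
    if level_estimate < TARGET_LOW:
        return "fill"
    if level_estimate > TARGET_HIGH:
        return "drain"
    return "hold"

def control_step(capture_frame, estimate_level) -> str:
    """One control-loop iteration: capture an image, run the model, act.
    capture_frame and estimate_level are injected so the logic can be
    exercised without camera hardware or TensorFlow installed."""
    frame = capture_frame()
    return decide_valve(estimate_level(frame))

# Simulated run: a fake 'frame' whose mean value stands in for the
# model's level estimate (0.5 of target, so the pool needs filling).
action = control_step(lambda: [0.4, 0.5, 0.6], lambda f: sum(f) / len(f))
print(action)  # prints "fill"
```

Separating the valve decision from the image model keeps the Raspberry Pi side trivially testable; the AI part only has to produce one number per frame.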

Which major issues do you see dominating the data centre industry over the next 12 months?
Artificial intelligence (AI) in the data centre will be a big topic for many years to come, and I am not sure there will be a solution within the next 12 months. I think if I had to pick one dominating topic for the next year, it would be making data centres more efficient. Data centres don’t have the bandwidth capacity to deal with the number of Internet of Things (IoT) devices in operation, so there will be a focus on how data is distributed from the data centre through to the edge and then to IoT devices. Having some type of edge computing component to mitigate this demand will be at the forefront of data centre management conversations.

“Artificial Intelligence in the data centre will be a big topic for many years to come.”

In addition to earning a living, how else has your career created value in your life?
I love what I do – my second mantra is, ‘love what you do and you’ll never work a day in your life’. I cannot imagine doing anything else.


How would you encourage a school leaver to get involved in your industry? What are their options?
Technology is expanding and there is not a single industry or person today that is not touched by it in some way. It is continuously evolving and each day can be different, which opens up a very successful career path for anyone interested. In terms of options, there are lots of routes to take. Many people choose to go to university and do a computer science course, but there are other options. We always look for experience; some people may not have the academic qualifications but are certificated to use certain technologies, and this can be just as meaningful.

What part of your job do you find the most challenging?
For me, it would have to be the travelling; sometimes I can feel disconnected when I’m away for several days at a time. More often than not, I’m in a remote location working away from the office. It’s easier to collaborate and stay in sync when you can meet in person.

As a child, Jason always wanted to be a fighter pilot.

“Love what you do and you’ll never work a day in your life.”

Jason is busy working on a project which allows him to utilise AI at home.

What are your company’s aims for the next 12 months?
We are always focused on growth and meeting customer and partner needs. This year our focus is on edge computing, enabling customers to modernise the traditional data centre and meet new demands around IoT and AI.

Where is your favourite holiday destination and why?
Every year my wife and I get together with our friends and head up to Sonoma County in Northern California’s wine country. We have been doing this trip for years and it is great to get together with friends and decompress.

Can you remember what job you wanted when you were a child?
I always wanted to be a fighter pilot. I was accepted into the United States Air Force Academy but unfortunately I didn’t pass the eye test.

What is the best piece of advice you have ever been given?
‘Good judgement comes from experience and experience comes from poor judgement.’


case study

On the right track Angel Trains has selected Schneider Electric’s critical infrastructure to ensure its data centre stays on track. Andy Wren, head of IT services at Angel Trains, explains how.


Angel Trains is one of Britain’s leading train leasing companies, providing rolling stock to several of the UK’s largest Train Operating Companies (TOCs), including Virgin Trains, Abellio Greater Anglia, Arriva Rail North, Great Western Railway and South Western Trains. The company owns some 4,000-plus rail vehicles, which it leases to operators on terms that are generally coterminous with franchises granted by the Department for Transport. “We’re a big-ticket asset leasing company,” says Andy Wren, head of IT services. “We have an intricate business model, and the IT systems that support it are similarly complex.”

Angel Trains’ IT department comprises eight people, including Andy Wren, and is based at the London HQ where the corporate data centre is located. Andy is responsible for leading the entire IT services function, including application development,

software procurement, support for users and management of the data centre infrastructure that underpins it all. The key IT systems operated by Angel Trains are its asset-management system, a bespoke application developed in-house that manages the company’s inventory of rolling stock, and Oracle E-Business Suite, which comprises the financial software stack, including accounts receivable, general ledger and invoice management.


“Together those two applications are the most business-critical, being responsible for managing our revenue generation and collection, on which the business depends,” says Andy. “However, we also run several other Microsoft server-based applications, such as SharePoint for our content and document management.”

Agility is key

The key priorities for the data centre in which the applications are hosted are agility, reliability and cost effectiveness. Although the company tries to standardise leasing contracts with its customers, the reality is that each agreement has an element of customisation, with consequent demands on the IT department’s development effort. An implication for the data centre is that it must have the agility to scale up capacity to accommodate additional servers, should they become necessary to meet customer requirements. For reliability, the data centre has a South London-based disaster recovery (DR) site to which all its data is replicated and securely backed up. Driving all the IT investment decisions, however, is the perennial need to keep costs low while maintaining a consistently reliable level of service.

Angel Trains’ IT department enjoys the convenience of being able to run its own systems on-premise, but IT management are cognisant of the fact that third-party service providers can offer hosting services from remote sites at competitive prices. The ownership, control and speed of connectivity of the on-premise solution has many benefits for the company, one of which is avoiding any latency issues, particularly with large files.

Owning versus outsourcing

“As an internal IT department, we are comfortable with having the ability to monitor our own infrastructure and IT equipment, rather than have a third party managing it on our behalf at another location,” Andy says. “We investigated a number of different hosting partners and found there is a great variety of services available; however, it was more cost effective to own and manage our own on-premise data centre.” With this in mind, the company decided to continue to operate its data centre in-house, with the help of a maintenance and support contract with APC by Schneider Electric Elite Partner, Comtec Power.

Its data centre comprises a rack-based containment system, with critical power protection provided by Schneider Electric Symmetra PX UPS units. For additional resilience, there is a dual power feed running direct from the mains and an emergency backup power generation unit on-site. With key challenges including cost effectiveness, reliability and physical footprint, Angel Trains chose to adopt Schneider Electric’s ISX Pod architecture with InRow cooling for its data centre.

The mobile dashboard for Schneider’s EcoStruxure IT

APC by Schneider Electric

Resilience from start to finish

Angel Trains has been utilising Schneider Electric UPS systems for ten years, attracted initially by what its UPS products offered in terms of flexibility, with the ability to perform ‘hot swaps’ of components such as batteries and power controllers. June 2018 | 17


“The data centre needed to have the agility to scale up capacity to accommodate additional servers, should they become necessary to meet customer requirements.”

“Once we were introduced to Schneider Electric’s on-demand InfraStruxure solution with InRow cooling, we knew that was exactly the type of architecture we wanted to move forward with,” comments Andy. “We needed to make the new data centre as cost effective, scalable and robust as possible, and the Schneider Electric racks and Symmetra UPS systems hit the mark in terms of resilience and efficiency - whilst helping us to optimise the fairly confined space in our data centre.” Ultra-efficient cooling is provided by a combination of external chillers and condensers located on the roof of the building, in addition to the InRow DX systems deployed within the Pod. The facility is also managed using Schneider Electric’s StruxureWare for DCIM (Data Centre Infrastructure Management) software, part of the EcoStruxure for Data Centres solution.

Expertise in data centre services

As well as provisioning and installing much of the infrastructure equipment in the data centre, Comtec Power continues to provide monitoring and maintenance support in collaboration with Angel Trains’ IT staff. “When first searching for a partner, Comtec engaged with us far more than other potential suppliers,” adds Andy. “They had a wealth of expertise and understood both our challenges and drivers, in addition to being very flexible and competitively priced.” Ongoing support provided by Comtec includes taking responsibility for rapid response to any faults in the infrastructure equipment, such as failures in air-conditioning units, including fans, and UPS battery malfunctions. “Our team can handle some of the smaller tasks internally,” Andy continues, “but under our maintenance service agreement, Comtec can proactively monitor and react to any faults within our core data infrastructure.” As part of an upgrade to the standard maintenance agreement, Angel Trains has recently connected the data

centre infrastructure components to Schneider Electric’s EcoStruxure IT monitoring solution, previously known as StruxureOn. This delivers detailed 24/7 monitoring and critical insights straight to the user’s mobile phone, as well as to Comtec’s engineering team. “Through the remote diagnostics, Comtec can engage quickly to begin fixing issues, proactively preventing any serious situations or downtime from developing. We chose Comtec because they have the most experience. They built the system and we are comfortable operating in partnership with them as our trusted advisors.” “Having a strong working relationship with long-term partners such as Schneider Electric and Comtec Power has provided Angel Trains with the advice, skill-sets and peace of mind necessary to run an efficient, on-premise, business-critical data centre,” concludes Andy Wren. Angel Trains, Schneider Electric,


Big Data & Internet of things

Treat the problem

Leo Craig, general manager of Riello UPS, outlines how a dose of data could be the perfect prescription to help the NHS tackle the country’s future healthcare needs.


The National Health Service celebrates its 70th birthday this summer. But the much-loved institution, and the wider healthcare sector, are facing unprecedented pressures. One problem is a growing population in which people live longer, often with chronic conditions that require ongoing, expensive treatment. Resources – both financial and the personnel needed to deliver care – are stretched. More than £120 billion is spent on the NHS every year, a budget that will top £123 billion by 2020. But our health service also treats a million patients every 36 hours, and around 18 million of us live with long-standing diseases. The only way the country can cope with these escalating demands is to embrace the


increased use of data, automation, and artificial intelligence (AI) to transform healthcare delivery. As a cradle-to-grave service, the NHS already has a valuable collection of data. Millions of people are now using wearable fitness trackers recording a wealth of information; add in the details captured from a whole host of other sensors or apps and the possibilities are enormous. Back in May, Prime Minister Theresa May pledged funding worth millions of pounds to develop AI that she believes will prevent 22,000 deaths a year from cancer by 2033. She stated, “The development of smart technologies to analyse great quantities of data quickly and with a higher degree of accuracy than is possible by human beings, gives us a new weapon in our armoury in the fight against disease.”

Of course, this emphasis on data requires additional storage and processing capacity, and healthcare already poses unique challenges for data centre managers. In an environment where a ‘mistake’ can mean the difference between life and death, security, safety, and ethical concerns need to be at the forefront of the data centre industry’s mind.

The (robot) doctor will see you

Robotics in healthcare isn’t a new concept. The da Vinci Surgical System has been used in hospitals worldwide for years and assists in 200,000-plus operations a year. But the power of data and machine learning is now making its mark across all aspects of medicine.


Wearables and connected devices help with disease management for patients with chronic conditions. Sensors enable enhanced monitoring and trigger reminders for patients to take their medication. Virtual doctor consultations or AI-influenced chatbot apps that diagnose non-emergency cases are becoming ever more commonplace. Artificial intelligence is even diagnosing heart disease more accurately. When cardiologists study echocardiogram scans to detect irregular heartbeats, there’s a 20% margin of error. This means 12,000 patients out of 60,000 a year either undergo unnecessary surgery or are sent home with the all-clear even though they’re at risk. Trialled by a team at John Radcliffe Hospital in Oxford, Ultromics AI records 80,000 data points from a single echocardiogram, increasing diagnosis accuracy to above 90%. Away from the clinical side of healthcare, automation could slash administration. According to the British Medical Association, a trainee doctor spends nearly a fifth of their time on admin. Non-essential paperwork takes up a similar proportion of a nurse’s workload. ‘Virtual assistants’ have enormous potential to take on tasks such as appointment booking or composing patient letters.

Medical data centre design: Power plus resilience

A reliable uninterruptible power supply (UPS) is already an essential part of healthcare IT infrastructure. A continuous, clean electrical supply is crucial in critical environments such as an operating theatre or pharmaceutical research lab, where even minuscule power fluctuations can lead to disaster. Added emphasis on data will place even greater demands on

storage capacity and resilience. A data centre operating in the mission-critical medical sector should meet at least Tier III standard, providing N+1 redundancy so that each component needed to support the IT processing environment can be shut down or maintained without the entire system going down. Any UPS system should be configured to N+1 redundancy too, to deliver similarly dependable power protection. The rise in popularity of modular UPS undoubtedly helps healthcare data centres provide performance without compromising on redundancy or efficiency. Modular units closely match a facility’s power requirements while offering ease of scalability – if capacity needs to grow, simply add extra modules. Modular UPS also take up less space and require less energy, and because each individual module is ‘hot swappable’, power protection isn’t compromised during maintenance or if a component fails. Earlier this year, NHS Digital advised that the public cloud was – subject to security caveats – a safe place for care providers to store confidential patient data. So, to meet the increased demand, it’s highly likely a mixture of cloud and on-site data centre storage will be the solution. One major hurdle to overcome is the NHS’s notoriously dated ICT infrastructure, as evidenced by last year’s WannaCry ransomware attack, which crippled a third of health trusts. NHS Digital is also replacing its existing wide area network (N3) with a new Health and Social Care Network (HSCN). By next March, the centralised system will be replaced by a dispersed ‘network of networks’ that encourages health bodies across England to freely share information. Any data centre operator interested in exploiting the medical market will need to ensure they are HSCN compliant.
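The N+1 arithmetic behind modular UPS sizing is simple and worth making concrete. The sketch below is illustrative only – the 20 kW module rating and 70 kW load are invented for the example, not taken from any vendor’s specification.

```python
import math

def modules_required_n_plus_1(load_kw: float, module_kw: float) -> int:
    """Modules needed to carry load_kw with N+1 redundancy:
    N modules to cover the load, plus one spare so any single
    module can fail or be hot-swapped without dropping the load."""
    n = math.ceil(load_kw / module_kw)  # N: minimum modules for the load
    return n + 1                        # +1: the redundant module

# Example: a 70 kW IT load on 20 kW modules needs ceil(70/20) + 1 = 5
print(modules_required_n_plus_1(70, 20))  # 5
```

Scaling up then means adding one module per extra 20 kW of load, with the spare always preserved.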

The latest diagnosis

“Added emphasis on data will place even greater demands on storage capacity and resilience.”

NHS England plans to invest more of its £120 billion-plus budget into data-driven solutions. Last autumn, the organisation’s outgoing national medical director, Professor Sir Bruce Keogh, even declared that “…in certain circumstances, AI is better than doctors at diagnosing certain conditions.” Another leading figure, NHS England chief executive Simon Stevens, is similarly supportive. He argues data will lead to major advances in radiology, pathology, and dermatology. Data centre managers have a crucial role to play in this looming medical revolution. They need to design systems that provide the redundancy, security, network compliance, and critical power protection required for such a sensitive sector. And as healthcare becomes more reliant on robotic equipment, the same principles must apply there too. Robots assisting in life-or-death surgery need to be supported by suitably robust power protection. A single UPS unit can’t guarantee a continuous, clean electrical supply. Hospitals must install UPS systems with at least N+1 redundancy plus dependable backup generators. Riello UPS,


You’re hired

Martin Ewings, director of specialist markets at Experis, outlines six key skills employers look for in Big Data hires.


Many organisations have invested in Big Data in recent years, recognising that turning raw data into valuable insights can help to set them apart from the competition. To achieve this, employers must find and engage the right professionals, who are capable of turning this data into business gold. It’s therefore hardly surprising that Big Data professionals are highly sought after. In fact, new research shows that the demand for Big Data skills and


professionals has surged by 78% in the past 12 months (Q1 2017 – Q1 2018). Big Data embraces a broad range of activities. But it’s important to remember that it’s more than just ‘a lot of data’. It’s about the meaning and value that can be extracted from the data. And in order to turn this data into actionable insight, businesses need to find the talent with the right skills. Here are six essential skills that employers look for when making Big Data hires:

• Programming
A key foundation for Big Data is learning programming languages. Big Data involves handling large volumes of unstructured data which come from various sources. Having the ability to code, analyse and customise large datasets to draw relevant insights is a crucial component. Typical programming languages in demand for Big Data include R, Matlab and SAS. It’s also important to learn some of the basic languages, such as C, C++, Python and Java.


While it’s not crucial to learn every programming language, starting with the basics will be useful to help Big Data professionals pick up other languages easily.

• Data analysis and interpretation
Being able to decipher data patterns and sequences and draw conclusions from them is a vital skill to have to remain in demand and competitive. Big Data experts are required to give their organisation valuable insight about customers, as well as its own business performance, helping it to gain a competitive advantage.

• Creativity and problem solving
The need for problem solving will vary depending on what Big Data experts are working on. Whether it’s related to systems, data or processes, being able to identify an issue, think outside the box, find a possible solution and implement it is a necessity. When so much of the business world relies on technology and information, any issue could have a significant impact on the organisation. For example, with the need to drive significant change to ensure compliance with the GDPR, Big Data professionals must handle large customer datasets appropriately to avoid fines – set at 4% of annual turnover or €20m, whichever is greater – and costly reputational damage.
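The GDPR fine cap quoted above – 4% of annual turnover or €20m, whichever is greater – is easy to sketch. The turnover figures below are invented purely for illustration.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Maximum GDPR administrative fine: the greater of
    4% of annual global turnover or EUR 20 million."""
    return max(0.04 * annual_turnover_eur, 20_000_000)

# A firm turning over EUR 100m hits the EUR 20m floor;
# one turning over EUR 1bn faces 4% = EUR 40m.
print(max_gdpr_fine(100_000_000))    # 20000000.0
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```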

• Understand multiple technologies
Big Data means different things to different organisations. Many of them would like to make use of Big Data and want the best qualified talent with the latest skills. However, the technologies that Big Data professionals work on will vary depending on the organisation. Therefore, the broader the individual’s general Big Data knowledge, the easier it is to adapt to a company’s requirements.

• Business acumen
Being a Big Data professional isn’t just about data and technology. Getting under the skin of the business and understanding its goals, objectives and processes is also important. This will enable Big Data professionals to identify opportunities, communicate them to the relevant stakeholders and influence key decisions, ensuring the business is heading in the right direction to achieve its objectives.

• Continuous learning
Finally, no matter how much a professional has mastered an organisation’s Big Data skills and knowledge, to remain employable for the long term it is less about what they already know and more about their learnability. As technology and Big Data tools and requirements continue to advance at pace, it’s essential for employees to continue learning and developing their skills, to remain flexible and adaptable to future changes and stay ahead in their career.

“Being able to decipher data patterns and sequences and draw conclusions from them is a vital skill to have to remain in demand and competitive.”

In this data-driven world, organisations need Big Data skills and professionals that can turn information into insight. And while demand for Big Data skills remains high, employees must ensure they are continually evolving their skillset to stay one step ahead of the competition and position themselves as an essential Big Data hire. Experis,


The big bang

As artificial intelligence explodes onto the business scene, James Petter, VP EMEA at Pure Storage, discusses how to build a data centric architecture for successful AI adoption.


The fourth industrial revolution, powered by AI and machine learning (ML), is expected to transform our society, and it’s a revolution that is well underway. AI is already having an incredible impact on some industries, enabling everything from smarter healthcare and better genomics testing, through to greater understanding of crop disease in farming and improved inner-city traffic management. AI isn’t a new technology; it has been around since the 1950s, but until very recently it was


restricted to academic projects and a small handful of the world’s biggest organisations. In fact, many smartphones and other technologies that consumers take for granted use AI. Voice assistants and predictive text are used on a daily basis by many people. However, it’s taken until now for technology to advance to the point when AI is feasible for all businesses to adopt. This new ‘big bang’ of AI adoption is fuelled by a perfect storm of three key technologies: deep learning (DL) software, graphics processing units (GPUs) and big data.

Laying the foundations

Inspired by the human brain, DL uses massively parallel neural networks – effectively writing its own software by learning from a vast number of examples. DL technology has already proved highly useful in fields where data is less numerical and requires more of a cognitive approach. Tasks such as speech and audio recognition, language processing or visual understanding likely wouldn’t have progressed as quickly using standard ML techniques.


GPUs are the second technology behind the AI uptake. Modern GPUs, with thousands of cores, are well-suited to running algorithms that loosely represent the human brain. Using the right GPU means data scientists and academics are able to run increasingly complex and detailed AI projects. Both DL and GPUs are major breakthroughs and game changing technologies, and when applied to the third piece of the puzzle, big data, the potential for innovation is incredible. However, while DL and GPUs are progressing, many storage technologies have lagged behind. Consequently, there has been a performance gap between the compute element (DL and GPUs) and the storage, limiting the extent to which companies can capitalise on data which has been growing at an exponential rate.

Unlocking the potential of data through infrastructure innovation

It’s critical for organisations to invest in technologies equipped to handle the explosive growth of data seen in recent years. As the size of data sets has increased, moving and replicating data has become a prohibitive expense and a bottleneck for innovation. A new model was needed – enter the data centric architecture. A data centric architecture is a modern design that puts data at the core of an organisation’s infrastructure. This eliminates the need for data to be moved between old and new systems, keeping business data and applications in place while the technology is built around them. The aim is to bring the compute element to the data, as opposed to the other way around. This means that organisations can spend less time and expense moving data,

“Both DL and GPUs are major breakthroughs and game changing technologies, and when applied to the third piece of the puzzle, big data, the potential for innovation is incredible.”

and more time innovating and making use of data sets. To really benefit from a data centric architecture, the system needs to work in real-time, providing the performance needed for the next-generation analytics that make AI so powerful. It also needs to be available on-demand and be self-driving, i.e. not require constant management, thus enabling the IT operation to act as a storage service provider for the organisation. Consolidating and simplifying this through flash makes it far easier for teams to then support the technology that is fuelling tomorrow’s growth.

Leveraging big data, GPUs and DL through storage

By optimising the compute and storage pairing in this way, organisations can define a deployment reference architecture that provides the GPUs with the ideal storage infrastructure: one that combines the speed of locally attached storage with the simplicity, capacity and consolidation of shared storage. Organisations such as Paige.AI and Global Response are already using this optimised approach to compute and storage to support their AI projects. Paige.AI is an organisation focused on revolutionising clinical diagnosis and treatment in oncology through the use of AI. Pathology is the cornerstone of most cancer diagnoses, yet most pathologic diagnoses rely on manual, subjective processes developed more than a century ago. By leveraging the potential of AI, Paige.AI aims to transform the pathology and diagnostics industry from a highly qualitative to a more rigorous, quantitative discipline. With access to one of the world’s largest tumour pathology archives, the organisation

needed the most advanced DL infrastructure available to quickly turn massive amounts of data into clinically-validated AI applications. For organisations like Global Response, AI represents an opportunity to reinvent existing business models. Global Response has begun development on a state-of-the-art call centre system that allows for the real-time transcription and analysis of customer support calls. This will allow for superior customer experiences and faster solutions – both increasingly important as consumer expectations shift heavily toward personalised experiences. Global Response had reached an inflection point where integration of AI throughout the organisation was critical to the ongoing success of the business. Using a solution that integrates state-of-the-art software and hardware enabled the Global Response teams to get up and running in hours, not weeks or months.

Putting data at the centre of your operation

AI and DL are taking what’s possible with analytics to the next level, and it’s impacting every industry. In fact, Gartner predicts that by 2020, AI will be pervasive in almost every software-driven product and service available. For this to become a reality, organisations need to make sure that data is at the core of their IT approach. Without the adoption of a data centric architecture, organisations can still try to utilise the compute power that DL and GPUs offer, but with little effect. Truly successful AI depends on this perfect partnership of compute power and storage. Without it, the full potential of data won’t be realised. Pure Storage,


Lucky eight

Joe Nagy, global director, product marketing and strategy at Software AG, examines eight ways to make the most of the Internet of Things.


When it comes to IoT, businesses need to ensure they are doing more than shaking a magic eight ball. If you really want to use IoT to help drive your business’s innovation, there are eight key aspects that should be considered.



1. Future proof

In order to make the most of IoT, it is important to be sure that your business’s IoT platform can connect with any device, sensor or machine over any network. It must also provide flexible deployment choices: SaaS, PaaS, on-premises, hybrid or edge.

In addition, it should also be low-code and future-ready with solutions for enterprise integration, API management, predictive analytics and machine learning – as well as business process and portfolio management.


“In order to make the most of IoT, it is important to be sure that your business’s IoT platform can connect with any device, sensor or machine over any network.”


2. Support instant IoT with rapid start

Businesses should be able to register and connect devices in minutes, with an easy-to-use control panel to set up rules, connect to key apps and the ability to realise the benefits of IoT immediately. It also needs to be easy to create a free trial account in the cloud at any time.


3. The Edge

Businesses seeking to make the most out of IoT solutions also need a distributed architecture that is elastic enough to swell and shrink dynamically. It should be able to run in data centres, and on ‘thick’ or ‘thin’ edge devices and should have intelligence for running advanced analytics/automating key responses at the edge. The IoT solution should also be able to manage and modify edge apps, models and analytics on the fly. This is so you are able to build and test apps centrally and then push them out to where they are needed at the touch of a button.
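As one hypothetical illustration of ‘thin edge’ intelligence – evaluating telemetry locally and only forwarding events that need central attention – consider the sketch below. The `Reading` type, threshold and event shape are invented for the example, not taken from any particular IoT platform.

```python
from dataclasses import dataclass
from typing import Optional

TEMP_LIMIT_C = 8.0  # assumed alert threshold, for illustration only

@dataclass
class Reading:
    device_id: str
    temp_c: float

def evaluate_at_edge(reading: Reading) -> Optional[dict]:
    """Run the rule locally; return an event for upstream, or None."""
    if reading.temp_c > TEMP_LIMIT_C:
        return {"device": reading.device_id, "event": "over_temp",
                "value": reading.temp_c}
    return None  # normal readings never leave the edge

print(evaluate_at_edge(Reading("sensor-01", 9.5)))  # over_temp event
print(evaluate_at_edge(Reading("sensor-02", 4.2)))  # None
```

Pushing a new rule to the edge ‘at the touch of a button’ then amounts to replacing this small function (or its threshold) on each device, rather than re-architecting the central system.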


4. An independent and open platform

In order to avoid vendor lock-in, and to future-proof your platform, it will need to be independent and open enough to work with what you have today, and what you may have tomorrow.


5. Make it rebrandable

You may want to take a complete IoT offering to market for your customers and quickly develop apps from pre-packaged solutions. A platform that can easily roll out multi-tenant or separate private instances, on premises or in the cloud is important.


6. Make use of free methodology and design-time tooling

Businesses need proven tools to help to model, plan and manage IT and OT assets, plus a resource library supporting work packages, processes and practices.


7. Look to the market-leaders

For peace of mind, businesses should look to a platform that is not only recognised as a leader by top analysts, but also by a wide array of satisfied customers.


8. Work with partners

Your platform provider should have a large ecosystem of strategic partners across SIs, VARs, and hardware and technology providers. These partners can help bridge gaps and build and refine any custom or vertical-specific requirements you have for your platform. IoT has become more than a ‘nice-to-have’ or niche solution in today’s market. It is now a vital part of every organisation’s pathway to innovation, future-proofing and growth – it is central to remaining competitive. To make the very most of the opportunity that IoT presents, it’s important that businesses take the time to explore their options first. By identifying the best option for you, it becomes possible to tick all the necessary boxes. Software AG,


Back to basics

Jason Kay, CCO at IMS Evolve, a specialist in scalable IoT and machine management solutions, discusses how businesses can best utilise digitisation, IoT and automated outcomes to improve multiple core business objectives.


The application of IoT is booming, with new use cases arising near enough daily. But, contrary to its growth, the sector risks inertia if businesses lose sight of the key objective digitisation was founded upon – improving day-to-day experiences. Yes, a big part of IoT is creating more efficient processes. But to truly deliver, those efficiencies must translate into issues that resonate with customers, from the quality of the product to meeting environmental pledges and reducing wastage; something that can’t be achieved by automating processes alone, but by automating outcomes.

Digitisation falters

Pinpointing the reason for organisations’ growing failure to make the expected progress towards successful digitisation is a challenge. Choice fatigue, given the diversity of innovative technologies? Over-ambitious projects? An insistence by some IT vendors that digitisation demands high-cost, high-risk rip-and-replace strategies? In many ways, each of these issues plays a role; but they are symptoms, not the cause. The underpinning reason for the stuttering progress towards effective digitisation is that the outcomes being pursued are simply not aligned with the core purposes of the business. Siloed, vertically focused digitisation developments typically focus on short-term efficiency and

process improvements. They are often isolated, which means that as and when challenges arise, it is a simple management decision to call time on the development: why persist with a digitisation project that promised a marginal gain in process efficiency at best, if it fails to address core business outcomes such as customer experience? Accelerating the digitisation of an organisation requires a different approach and brave new thinking. While disruptive projects and strategies can prove threatening to existing business models, when executed correctly they can in fact create opportunities for new business models, exploration and a new approach to the market. By considering and focusing on the core aspects of the business, not only can opportunities to drive down cost be identified, but measurable value can also be delivered in line with clearly defined outcomes.

Reconsidering digitisation

In many ways, the IT industry is complicit in this situation: on one hand offering the temptation of cutting-edge and compelling new technology, from robots to augmented reality, and on the other insisting that digitisation requires multi-million pound investments, complete technology overhaul and massive disruption to day-to-day business. It is therefore obviously challenging for organisations to create viable, deliverable long-term digitisation strategies; and

this confusion will continue if organisations focus on the novelty element and fail to move away from single, process-led goals. Achieving the true potential digitisation offers will demand cross-organisational rigour that focuses on the business’ primary objectives. Without this rigour and outcome-led focus, organisations will not only persist in pointless digitisation projects that fail to add up to a consistent strategy but, concerningly, will also miss the opportunity to leverage existing infrastructure to drive considerable value. Consider the impact of an IoT layer, deployed across refrigeration assets throughout the supply chain to monitor and manage temperature. A process-based approach would be focused on improving efficiency and the project may look to utilise rapid access to refrigeration monitors and controls, in tandem with energy tariffs, to reduce energy consumption and cost. However, if such a project is only defined by this single, energy reduction goal, once the initial cost benefits have been achieved, there is a risk that the lack of ongoing benefits will resonate with management. Yet digitisation of the cold chain also has a fundamental impact on multiple corporate outcomes, from customer experience to increasing basket size and reducing wastage; it is – or should be – about far more than incremental energy cost reduction.
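The cold-chain example above hinges on one design point: temperature is the hard constraint (food quality and safety), while energy tariff only decides *when* discretionary cooling work runs. The following is a hedged sketch of that logic; every threshold and tariff figure is an invented assumption, not an IMS Evolve implementation.

```python
SAFE_MAX_C = 5.0         # never exceed: food-quality constraint (assumed)
PRE_COOL_TO_C = 2.0      # cool deeper while energy is cheap (assumed)
CHEAP_TARIFF_P_KWH = 12  # pence/kWh threshold for "cheap" power (assumed)

def compressor_on(temp_c: float, tariff_p_kwh: float) -> bool:
    """Decide whether to run the compressor for one control interval."""
    if temp_c >= SAFE_MAX_C:
        return True                    # safety overrides cost, always
    if tariff_p_kwh <= CHEAP_TARIFF_P_KWH:
        return temp_c > PRE_COOL_TO_C  # pre-cool on cheap energy
    return False                       # warm-ish but safe: ride out the peak

print(compressor_on(5.5, 30))  # True: over the safe limit, price irrelevant
print(compressor_on(3.5, 10))  # True: cheap energy, pre-cool now
print(compressor_on(3.5, 30))  # False: safe temperature, defer the cost
```

Because food quality is checked first, the energy-saving branch can never compromise the customer-facing outcome – which is exactly the outcome-led framing the article argues for.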


Supporting multiple business outcomes

Incorrect cooling can have a devastating impact on food quality. From watery yoghurt to sliced meat packages containing pools of moisture and browning bagged salad, the result is hardly an engaging brand experience. These off-putting appearances can threaten not only customer perception but also basket size, yet the acceptance of this inefficiency is evident in excessive supply chain over-compensation. To ensure that the products presented to customers on the shelves are aesthetically appealing, retailers globally rely on overstocking, with a view to disposing of any poorly presented items. The result is unnecessary overproduction by producers and a considerable contribution to the billions of pounds of food wasted every year throughout the supply chain. Where does this supply chain strategy leave brand equity with regards to energy consumption, environmental commitment and minimising waste? Or, for that matter, the key outcomes of improving customer experience, increasing sales and reducing stock? It is by considering the digitisation of the cold chain with an outcomes-based approach – a project that embraces not only energy cost reduction but also customer experience, food quality, minimising wastage and supporting the environment – that organisations are able to grasp its full significance, relevance and corporate value. Furthermore, this is a development that builds on an existing and standard component of the legacy infrastructure. It is a

project that can overlay digitisation to drive value from an essentially dull aspect of core retail processes, and one that can deliver return on investment, whilst also improving the customer experience.

Reinvigorating digitisation strategies

If digitisation is to evolve from point deployments of mixed success towards an enduring, strategic realisation, two essential changes are required. Firstly, organisations need to consider what can be done with the existing infrastructure to drive value. How, for example, can digitisation be overlaid onto existing control systems – optimising the way car park lights are turned on and off, say – to support environmental brand commitments and reduce costs? In the face of bright, shiny disruptive technologies, it is too easy to overlook this essential aspect of digitisation: the chance to breathe new life and value into existing infrastructure.

“In the face of bright, shiny disruptive technologies, it is too easy to overlook the chance to breathe new life and value into existing infrastructure.”

Secondly, companies need to determine how to align digitisation possibilities not with single process goals, but with broad business outcomes – from a better understanding of macroeconomic impacts, all the way back through the supply chain to the farmer battling the global food crisis, to assessing the impact on the customer experience. And that requires collaboration across the organisation. By involving multiple stakeholders and teams – from energy efficiency and customer experience to waste management – a business gains not only a far stronger business case but a far broader commitment to realising the project.

Combining engaged, cross-functional teams with an emphasis on leveraging legacy infrastructure offers multiple business wins. It enables significant and rapid change without disruption; in many cases digitisation can be added to existing systems and rapidly deployed at a fraction of the cost proposed by rip-and-replace alternatives. Using proven technologies drives down the risk and increases the chances of delivering a quick return on investment, releasing money that can be reinvested in further digital strategies. Critically, with an outcome-led approach, digitisation gains the corporate credibility required to further boost investment and create a robust, consistent and sustainable cross-business strategy. IMS Evolve,

June 2018 | 29

Smart cities

Get smart The rise of smart cities like London is leading to a new era of city design, as well as a new era in data centres, says Greg McCulloch, CEO, Aegis Data. But how exactly will they need to change?


London has been named as one of the world’s top smart cities. However, one of the barriers to achieving the full potential of smart cities, as highlighted in a report by Philips Lighting and SmartCitiesWorld, is a lack of budget and infrastructure. A true smart city requires all of its systems – from transport, intelligent CCTV and utilities, through to houses, hospitals and the smart devices that we carry around in our pockets – to be fully integrated.


For example, imagine the huge amount of data collected from connected vehicles, both privately owned and public (car sharing and public transport), and other smart devices, from phones through to sensors on lamp posts. Analysed and turned into useful, actionable insights, this data can ultimately make our lives better, whether through reduced pollution or easier and faster journeys to work.

But analysing this data efficiently requires far more processing power than cities have ever needed before. Much of this data won’t need to be analysed in real-time, but it will take huge amounts of computing power to crunch terabytes of data on driving behaviour, traffic patterns and so on, and then to test multiple computer models to predict things like new and better road layouts, public transport routes or other transport options.


Moving towards high performance computing

‘Traditional’ data centres, which exist in metropolitan areas, were not originally built for such volumes of data crunching. Their standard architecture deals with lots of small problems in series, such as serving a web page or processing an e-commerce transaction. Even then, unexpected spikes in traffic can demand more computing power than is available, causing websites to crash, with all the customer backlash and reputational impact that brings. The challenge, therefore, is to make sure we have the data centre infrastructure in place to support the rise of large-scale data analytics across smart city systems, in a cost-effective manner.

This is naturally going to lead to rising demand for more cost-effective high-performance computing (HPC) ready data centres, which until recently have focused on serving powerful computing requirements from academia, researchers and governments. High-performance computing aggregates computing power in a way not typically associated with standard server infrastructure. It requires denser banks of compute resource, which reduce minute but critical periods of latency between servers during intense parallel processing. This is critical for analysing the huge volumes of data generated by smart cities, allowing us to run multiple queries on the data and derive actionable insights.

In a data centre, optimising for this means redesigning server racks – for example, removing the need for cooling slots between processing units – provided the data centre in question can supply the power and alternative cooling systems needed to support higher contiguous rack stacking, while limiting the cost to the end user. These denser racks also reduce floorspace, meaning more computing power in a smaller footprint, at a lower cost.
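The serial-versus-parallel distinction above can be sketched in a few lines. Threads stand in for cluster nodes here, purely to illustrate splitting one large analysis job across many workers at once rather than working through it in series:

```python
# Toy illustration of serial vs parallel data crunching.
from concurrent.futures import ThreadPoolExecutor

def analyse(chunk):
    """Stand-in for crunching one slice of traffic data."""
    return sum(x * x for x in chunk)

readings = list(range(100_000))
chunks = [readings[i:i + 10_000] for i in range(0, len(readings), 10_000)]

# "Traditional" model: one problem after another, in series.
serial_result = sum(analyse(c) for c in chunks)

# HPC model: the same job spread across many workers at once.
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel_result = sum(pool.map(analyse, chunks))
```

Real HPC clusters distribute chunks across physical nodes, where inter-server latency (the reason for the denser racks described above) dominates; the structure of the computation, though, is the same.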

A change of location: away from metropolitan areas

Data centres located in smart cities like London will struggle to achieve this without passing on huge costs to their customers, due to the high cost of real estate, power and cooling. The density of power required for high-performance computing was never anticipated when London’s power grid was designed, and the urban density is already so high that it is challenging to get more power in safely, much less at a sensible cost.

“We need to have the data centre infrastructure in place to support the rise of large scale data analytics across smart city systems, in a cost effective manner.”

PUE (Power Usage Effectiveness) ratios – total facility power divided by IT equipment power, a measure of how efficiently a data centre delivers compute – can be as high as 2.0 in London versus 1.2-1.3 only a few miles outside the centre. This is why we are starting to see an increase in high-performance computing data centre facilities located beyond the central areas of major cities. The availability of direct cooling methods (using the outside ambient air to cool the data centre) and less expensive power, coupled with lower real estate costs, enable end users across government and the private sector to benefit from more powerful computing at a lower cost.
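As a rough illustration of what those ratios mean in overhead power, using the article’s own figures (the 500kW IT load is an arbitrary example):

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness ratio."""
    return total_facility_kw / it_kw

it_load_kw = 500.0                          # example IT load
london_total = it_load_kw * 2.0             # PUE ~2.0 in central London
out_of_town_total = it_load_kw * 1.25       # PUE ~1.2-1.3 outside the centre

# Overhead power (cooling, UPS losses, lighting) in each case:
london_overhead = london_total - it_load_kw        # 500 kW of overhead
out_of_town_overhead = out_of_town_total - it_load_kw  # 125 kW of overhead
```

In other words, at a PUE of 2.0 every kilowatt of compute carries a full kilowatt of overhead, versus a quarter of a kilowatt at 1.25 – which is the economics driving HPC facilities out of city centres.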

Future-proofing data centres today for the smart cities of tomorrow

There is no doubt that investing in smart city infrastructure will not only drive cost savings and efficiencies, but also create new jobs and revenue opportunities. A report by the engineering and design consultancy Arup estimates that London’s smart city market could reach approximately $13.4 billion by 2020 across sectors such as energy, transport, healthcare, infrastructure, governance, security and buildings.

However, at a time when public sector spending is under more scrutiny than ever before, spending on the infrastructure to support smart cities needs to be as smart as the cities themselves, and it needs to consider future needs as well as the needs of today. We are a long way from seeing London and other major metropolitan areas become complete smart cities, but as our capabilities grow and new technologies are developed, we need to ensure we are investing in facilities that can grow with smart city requirements and handle the huge volumes of data we can expect to see. Aegis,

Swarm computing

The swarm Pavel Bains, CEO of Bluzelle, discusses ‘swarming’ – storage for the decentralised era.


It’s beginning to sink in that there’s a looming data crisis. We’re producing 2.5 quintillion bytes of data a day, and 80% of this is unstructured and difficult to fit into a predefined mould (think human-generated content). It’s more important than ever that we improve on the current model. Information is increasingly becoming a commodity as businesses flock to Big Data analytics and AI to glean better insights about their customers.


Internet of Things (IoT) smart devices are constantly exchanging information with one another. It’s great to see the Age of Information underway, but it’s become readily apparent that our current methods for securely storing that information just aren’t up to par. One need only look at the news to find yet another data breach occurring on a seemingly daily basis. GDPR will certainly help hold businesses accountable, but it

shouldn’t be hailed as the saviour and protector of information in cyberspace. It will ensure better practice around data storage, but this doesn’t mean that hackers and malicious actors will simply give up and stop targeting company databases. We need a remedy stronger than legislation. We need a system built with security baked in, and not added as an afterthought. Centralised data silos owned by businesses and cloud providers


are a major attack surface, and one that – thanks to advances in technology such as blockchain and peer-to-peer file transfer protocols – can be phased out in favour of distributed storage solutions.

Swarming is the polar opposite of the standard centralised model. Instead of storing a whole file on a specific server, the file is divided into fragments (also called shards) and pushed to a number of different locations. Any of these locations could be breached with minimal risk to the original owner: the unauthorised party would only gain access to a (most likely unreadable) piece of the original file.

What’s been described thus far is standard decentralised storage. Swarming builds on this idea with a mechanism not unlike that used in torrenting: when the user wishes to retrieve their file, the closest nodes that possess a piece return it to the owner. Thanks to the low latency involved, this ensures lightning-fast transfers with little strain on the network, even at peak times. It also carries the advantage of no downtime: participants are rewarded for replicating the fragments, and this redundancy in the storage of shards maintains access even when a node goes offline. In the case of a natural disaster or power outage in a given region, the owner of a file can rest easy in the knowledge that their shards are backed up elsewhere.

The benefits for an individual are clear, but this is also an ideal system for business and enterprise use cases: scaling is much easier (cost increases linearly with the space used, without upfront expenditure on hardware) and the integrity of the data is better protected by the distributed nature of the swarming protocol.

“It’s become readily apparent that our current methods for securely storing the information just aren’t up to par.”

The foundations for a new internet (commonly referred to as Web 3.0) are being laid as we speak, with a common theme: decentralisation. Developers in the space are creating infrastructures and protocols free of the traditional hierarchical governance models. We’re on the cusp of a future where individuals maintain sovereignty over their own data, whilst still being able to interact with a range of services and businesses.

It stands to reason that protocols in the decentralised ecosystem should harness a decentralised backend for their storage needs. Data is the fuel that powers these networks, and should be accessible around the clock with as little latency as possible. As the IoT takes off, devices will constantly need to beam information to each other without running the risk of being compromised. With swarming as a database solution, applications and programs can pull data from storage as securely and as speedily as possible.
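The shard-and-distribute mechanism described above can be sketched in a few lines. The names and structures here are illustrative only – not Bluzelle’s actual protocol, and with plain dictionaries standing in for storage nodes:

```python
# Hypothetical sketch of sharding, replicated placement and reassembly.

def shard(data: bytes, shard_size: int) -> list[bytes]:
    """Split a blob into fixed-size fragments (the last may be shorter)."""
    return [data[i:i + shard_size] for i in range(0, len(data), shard_size)]

def distribute(shards, nodes, replication=2):
    """Push each shard to `replication` distinct nodes, round-robin."""
    placement = {}
    for i, fragment in enumerate(shards):
        owners = [nodes[(i + r) % len(nodes)] for r in range(replication)]
        for node in owners:          # each node is a dict acting as storage
            node[i] = fragment
        placement[i] = owners
    return placement

def retrieve(placement, n_shards):
    """Reassemble the file from whichever replica holds each fragment."""
    return b"".join(placement[i][0][i] for i in range(n_shards))

nodes = [{} for _ in range(5)]       # five stand-in storage nodes
shards = shard(b"confidential customer records", 8)
placement = distribute(shards, nodes, replication=2)
restored = retrieve(placement, len(shards))
```

Note that any single node holds only unreadable fragments, and because each fragment is replicated, the file survives a node going offline – the two properties the article attributes to swarming.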

Of course, these benefits are just as desirable to businesses operating on the current web (with the added bonus of preparing them for the inevitable transition to the decentralised plane). Before making such a drastic change, though, it’s advisable for companies first to test decentralised storage in a sandbox environment. If all goes well, without any hitches, they should look into hiring a database consultant for a permanent migration. There are a number of options available to enterprises wishing to dive into decentralised storage, so it’s important that they pick the one best suited to their needs.

I doubt this marks the end for current data centres, however. Decentralised storage still requires capacity, after all. I think we’ll see business models shift to better accommodate the practice of sharding, with data centres acting in the capacity of peers on the network. Perhaps we’ll see increased cryptocurrency adoption as a result, as blockchain-based storage mediums require incentivisation via a tokenised layer. Bluzelle,

Network security

Beat the breach Andrew Lintell, regional vice president, northern EMEA at Tufin, discusses the five network security pitfalls that could be putting your organisation at risk.


In a world where high-profile data breaches have become the norm, cybersecurity has quickly become a top priority for organisations of all sizes, in all industries. Barely a week seems to go by without news of another cyber attack hitting the headlines, prompting businesses to invest heavily in next-generation technologies in an attempt to protect their infrastructure and keep their confidential data secure. Network security policies play a key role in securing the organisation: these rules ensure that only the right people have the right access to the right information, putting the organisation in the


best possible position to prevent breaches from occurring. However, there are several common pitfalls that businesses can fall foul of when implementing their security policies. Here are five of the most prominent that could be leaving your business vulnerable to cyber attacks.

Not having full perspective of the network

Arguably one of the biggest mistakes a company can make when configuring network security policies is to attempt to put policies in place without first gaining full visibility of the network. Today’s enterprise networks are vast and complex, and organisations often struggle to gain full visibility, which hinders their ability to put strong policies in place. The same is true when making necessary changes to those policies across the entire network: if one policy is changed, it might have the knock-on effect of reducing security somewhere else. By incorporating a centralised solution that looks across the whole technology architecture, staff can manage all corporate policies through a single console and see the potential implications of policy changes before they are made. To put it another way, you can’t manage what you can’t measure – so start with visibility.
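A toy example of the kind of knock-on analysis a centralised console performs is detecting when a broad rule earlier in a policy ‘shadows’ a later, more specific one, so that changes to the later rule never take effect. The rule format below is illustrative, not any vendor’s actual syntax:

```python
# Detect shadowed rules in a simplified ordered firewall policy.
import ipaddress

def shadows(earlier, later):
    """True if `earlier` matches every packet `later` would match."""
    e_src, e_dst, _ = earlier
    l_src, l_dst, _ = later
    return (ipaddress.ip_network(l_src).subnet_of(ipaddress.ip_network(e_src))
            and ipaddress.ip_network(l_dst).subnet_of(ipaddress.ip_network(e_dst)))

# Rules are (source CIDR, destination CIDR, action), evaluated top-down.
policy = [
    ("10.0.0.0/8",  "0.0.0.0/0",      "allow"),  # broad allow
    ("10.1.2.0/24", "192.168.0.0/16", "deny"),   # shadowed: never reached
]

shadowed = [j for j in range(1, len(policy))
            if any(shadows(policy[i], policy[j]) for i in range(j))]
```

Here the second, stricter rule is dead: the broad allow above it already matches all of its traffic, which is exactly the sort of hidden interaction that is invisible without whole-network analysis.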


Disconnected network security policies

This one may sound obvious, but having network security policies in place is self-defeating if they inhibit the business they were intended to protect in the first place. Businesses are sensitive to the fact that they need to comply with measures to protect critical assets, but if that prevents them from using the applications essential to getting the job done, they will find ways around those policies. The solution is to provide visibility into how application connectivity is maintained in coordination with the underlying network security policies. This approach ensures that the business and security teams are always in sync and aligned on the end goal. From a management point of view, businesses need visibility into their application connections in order to understand the impact of any network policy change before it is made.

Leaving holes unplugged

Today’s cyber attacks are more sophisticated than ever before, and new variations of both known and unknown threats are being discovered at an alarming rate. For example, 18 million new malware samples were discovered in Q3 2016 alone – equal to roughly 200,000 per day – and ransomware attacks on businesses reportedly increased three-fold between January and September 2016. This means organisations must keep their network policies up-to-date by carrying out regular patches and system analysis, which requires a centralised management system that looks across the whole IT environment. Hackers are constantly on the lookout for vulnerabilities, meaning no company – irrespective of size or industry focus – can afford to leave holes unplugged.

Rigid practices

“You can’t manage what you can’t measure – so start with visibility.”

Striking the right balance between security and convenience is not an easy task, but key to ensuring policies are adhered to. Any procedures that significantly hinder an organisation’s agility or an employee’s ability to do his or her job will likely result in them being overlooked or ignored. The other danger is that staff will find a workaround, which can potentially have serious security and compliance implications. This is when ‘shadow IT’ comes into play, where employees use applications at work without the company’s knowledge or control – according to one poll, 78% of IT pros said their end-users have set up unapproved cloud services – each of which can represent a potential unmanaged risk. It is therefore essential that organisations have tools in place that allow them to easily adhere to and manage security policies. Anything that forces people to drastically change the way they work, or results in an organisation’s lack of agility, is counterproductive. Increased security interwoven with business agility is the ultimate goal.

Overlooking automation

As complexity in virtually all areas of network security and compliance has increased, automation has become a central component. There are now simply too many change requests to increasingly diverse networks for security teams to keep track of manually, leading to human error and increasing the exposure of the business. Automation is no longer just a possibility; it is an essential tool for keeping pace with this degree of change and complexity.

Automation also has a key role to play in network security policy management and continuous compliance. Policy-driven automation ensures that an organisation is compliant with internal and industry guidelines at any given point in time. It also means that the control plane can be adjusted at policy level and then implemented immediately across the network, lifting the security level when required through adjustment, delivered as a business-as-usual task. By connecting security to operations in this way, companies can vastly improve their resistance to constantly evolving threats. This is critical to making a tight security posture a reality all the time, rather than merely ‘better’ for a moment in time.

IT personnel are often stretched to the limit. Network security operations can turn to policy-based automation to reduce complexity, increase visibility, and free up resources to focus on more complex tasks – improving operational efficiencies that directly impact the bottom line of the 21st century-ready business. Tufin,


Edge computing

Over the edge? Bruce Kornfeld, CMO and SVP product management, StorMagic discusses why data centre and cloud IT solutions don’t work at the Edge.


Edge computing is dominating today’s headlines because so much data creation occurs outside the data centre, and sending data to the cloud is not only difficult but also prohibitively expensive. Organisations trying to build IT solutions at the edge can no longer use the same approach and equipment they’ve been leveraging for primary data centres or hybrid cloud environments. Complexities and cost constraints make it nearly impossible for CIOs to justify cloud or ‘one-size-fits-all’ data centre solutions at all locations.


What is the Edge?

Edge computing refers to the IT systems required to run applications and store data at locations outside the data centre and the cloud. The edge is growing so fast because more and more applications are being run where the data is created – whether due to the Internet of Things (IoT), analytics that need to run locally, or simply the fact that so much information is being generated that sending it to the cloud or a corporate data centre no longer makes economic sense.

Processing at the edge has many use cases, from a large enterprise with thousands of sites to companies running multiple small data centres, or even wind farms and oil rigs that need real-time processing. Typical edge computing deployments are found in remote or branch offices (ROBO), retail and department stores, restaurant chains, warehouses and factories. Two common challenges faced in all edge environments are limited physical space and the need to manage numerous locations. As a result, edge administrators have to manage both the data itself and the networking connections back to their primary data centre or


offsite cloud provider. Sending all data from the edge to these locations is costly, and sometimes the problem boils down to the speed of light: data cannot get from the edge to a data centre (or cloud) and back fast enough for real-time decision-making. For example, a typical department store would not be able to stream live video surveillance to the primary data centre or cloud while simultaneously conducting mandatory real-time monitoring. In addition, IT directors run the risk of overbuying and overprovisioning beyond what they really need at the edge if they neglect to do a little research ahead of the buying cycle. Administrators of demanding edge environments today prefer hyperconverged solutions that combine server, networking and storage in one small, low-cost package that can be easily managed remotely.

Complete solutions for edge computing

To build a successful edge environment, IT decision makers should seek physically smaller solutions that offer high availability, integrated storage and the expandability to scale easily with the environment over time. Fortunately, with the all-encompassing solutions available in today’s market, edge sites can access everything they need to run mission-critical applications at ROBO sites, in retail outlets and in SME data centres. One such complete solution, announced earlier this spring, comes from StorMagic and APC by Schneider Electric: ‘Branch in a Box’. The preconfigured, ready-to-use bundle utilises key components from APC by Schneider Electric’s data centre infrastructure portfolio, including its physical enclosure, UPS, PDU, cooling, environmental monitoring, security and management software.

Branch in a Box also includes redundant hyperconverged appliances based on Dell EMC servers, the VMware hypervisor and virtual SAN software from StorMagic. It lets organisations instantly implement a highly energy-efficient IT solution, optimised and ready for rapid deployment on-site. Solutions built on pre-integrated architecture create a reliable and robust environment that leverages the best of on-premises and multicloud infrastructures, and can quite literally be dropped into the environment for fast, optimised edge computing. Comprehensive solutions like these address the need for more cost-effective and resilient hyperconvergence across a wide variety of use cases.

Future outlook from the edge

Two key trends will continue to evolve in 2018 and beyond, both influenced by healthy IT practices at primary data centres and in cloud environments. First, security integration will become mandatory; second, IT departments will dramatically shift how they define and architect complete solutions. The edge has been left behind in the security arena, despite the fact that edge computing environments are at particular risk of breaches and physical security threats: you won’t find security guards, video surveillance equipment or restricted badge access at edge computing sites. Additionally, the physical equipment residing at the edge is often exposed, housed in an unsecured server room, a closet or even under someone’s desk. Moving forward, edge sites will integrate data encryption en masse to mitigate the risk of data loss posed by both virtual attacks and hardware theft.

“Two common challenges faced in all edge environments are limited physical space, and the need to be able to manage numerous locations.”

StorMagic and APC by Schneider Electric ‘Branch in a Box’

Will edge computing displace the cloud?

Not completely. There are plenty of use cases that are ideal for cloud. The SaaS (Software as a Service) model is a great example of a cloud use case that makes perfect sense, and several companies including Salesforce, Adobe, Amazon, Dropbox and Oracle have mastered it. However, these applications aren’t trying to process and store the local, real-time information that is becoming mission-critical to many organisations at the edge.

Edge is a hot topic because data is being generated outside of the traditional data centre faster than ever before. Organisations that implement hyperconverged solutions will benefit from cost control, better performance and improved latency. Edge computing eliminates the need for massive network connections to the primary data centre and lets local applications run with extreme speed. This last point is important: many applications today require a local connection, and edge growth will foster a new era of local applications that haven’t yet been invented or developed. StorMagic,


Projects & Agreements

Landmark Information Group relies on Veeam to guarantee data uptime

Veeam Software has announced that Landmark Information Group, a UK provider of environmental reports, has deployed Veeam Backup & Replication to support a data centre migration and, going forward, to protect all mission-critical applications and data more efficiently and effectively. Landmark serves architects, lenders, environmental consultants, estate agents and homebuyers. Most recently, it co-launched Great Britain’s first national flood map to create predictive flood scenarios.

Data and technology are the lifeblood of Landmark’s business, central to its team of experts responsible for delivering the intelligence and solutions that enable its customers and clients to make informed environmental planning decisions. Businesses use Landmark’s digital mapping data and environmental-risk reports for major planning, residential and commercial decisions. If data is unavailable, Landmark cannot comply with the bespoke service level agreements (SLAs) it holds with its customers; confidence would be lost and revenue would be impacted, as each SLA is defined by data uptime.

Landmark initially deployed Veeam to assist with a major data centre migration project – a process that would have been nearly impossible to conduct without significant disruption to the business had it not been for Veeam, according to Graham Smith, infrastructure team lead at Landmark. Veeam, Landmark,

DATA4 partners with Nlyte to provide next generation colocation services

Nlyte Software has announced that DATA4, the European specialist in the design, construction and operation of ultra-secure data centres, has partnered with Nlyte to provide real-time information on its data centre infrastructure, reduce operational risks, and provide value-added services for colocation customers. With Nlyte, DATA4 will progress its vision to be the global leader in colocation services.

DATA4 manages flexible, high-performance, carrier- and cloud-neutral data centres, offering solutions from a single colocation rack all the way through to purpose-built, dedicated facilities. DATA4 Group currently operates 15 data centres in France, Italy and Luxembourg, with a total net technical area of 27,000sqm of IT rooms and more than 39MW of IT power.

To increase the transparency and value of its services, DATA4 wanted to ‘open the black box’ of the data centre and show customers a real-time view of its assets, while also increasing its own efficiencies and widening its service offering. Nlyte’s solutions will connect to the various Building Management Systems (BMS) and DATA4’s other internal systems, without any interruption of services, via Nlyte’s open architecture. The partnership will provide greater value to DATA4’s customers by offering them software as a service to improve efficiency, agility and transparency. Nlyte,

Africa Data Centres opens new floor at Nairobi data centre facility

Africa Data Centres, part of the pan-African telecoms group Liquid Telecom, has opened a new floor at East Africa Data Centre (EADC) in Nairobi, Kenya. The additional floor will provide 500sqm of rack space for leading cloud service providers, carriers and enterprises to host their business-critical data, cloud-based services, applications and back-end systems in Africa. The new floor is compliant with the latest security standards, EADC being the first data centre in East Africa to receive Tier III certification from the prestigious Uptime Institute. With a total of 2,000sqm of secured space for data servers over four floors, EADC remains the region’s largest and most connected data centre facility, and is interconnected with Africa Data Centres’ other carrier-neutral facilities in South Africa and Zimbabwe.

“The newly opened floor at EADC is a direct response to the huge demand that we’re receiving for colocation and hosting services in Africa. Our state-of-the-art data centre facility in Nairobi has exceeded expectation since it first opened in 2013, and we will continue to serve our expanding customer base with world-class data centre solutions,” says Dan Kwach, general manager, EADC. Africa Data Centres,


Megaport supports hybrid network connections to Google Cloud with its global Software Defined Network

Equinix closes Metronode acquisition to become market leader in Australia

Equinix, a global interconnection and data centre company, has announced the completion of its acquisition of Metronode, a data centre provider operating facilities throughout Australia. The acquisition makes Equinix the market leader in Australia with 15 International Business Exchange (IBX) data centres nationwide. It expands the company’s operations in Sydney and Melbourne, and provides a presence in four new markets: Perth, Canberra, Adelaide and Brisbane.

Digital transformation could add as much as A$45 billion (approximately US$35 billion) to Australia’s gross domestic product (GDP) by 2021, according to new joint research from Microsoft and IDC. The expanded Platform Equinix will provide significant opportunities for Australian organisations to continue their digital transformation, and to move their IT infrastructure, applications and services closer to the digital edge, in proximity to global customers and partners.

The closing follows an agreement Equinix made with Ontario Teachers’ Pension Plan in December 2017 to acquire all the equity interests in the Metronode group of companies in an all-cash transaction for A$1.035 billion, or approximately US$804 million. Equinix,

Megaport has announced support for Google Cloud's Partner Interconnect, a service from Google Cloud that allows customers to privately connect to Google Cloud Platform from anywhere. Google Cloud's Partner Interconnect is a new product in the Google Cloud Interconnect family. Last September, Google announced Dedicated Interconnect, which provides higher-speed and lower-cost connectivity than VPN, and has become the go-to solution to connect on-premises data centres with the cloud. With Partner Interconnect, customers can now choose Megaport to provide connectivity from their facility to the nearest Google edge Point of Presence. In addition, they will be able to select from a variety of sub-rate interface speeds, ranging from 50Mbps to 10Gbps. "Partner Interconnect gives Google Cloud customers even more connectivity choices for hybrid environments," says John Veizades, product manager, Google Cloud. "Together with Megaport, we are making it easier for customers to extend their on-prem infrastructure to the Google Cloud Platform." This deeper integration between the Megaport SDN and Google Cloud, via the Megaport open Application Programming Interface (API), enables ease of use and smooth provisioning of network capacity to Google Virtual Private Clouds (VPC). The API reduces the manual, time-consuming processes associated with activating, connecting, and managing services. Partner Interconnect is now available across the Megaport SDN, which connects more than 200 data centres globally, with multiple interconnect points already established. Megaport,
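The sub-rate speed selection described above can be sketched as a simple provisioning helper. This is an illustrative sketch only: the field names and the `build_interconnect_request` function are hypothetical and do not reflect the actual Megaport API, but the 50Mbps–10Gbps range validation matches the speeds quoted in the article.

```python
# Hypothetical sketch of building a Partner Interconnect provisioning
# request; field names are illustrative, not the real Megaport API.

ALLOWED_MBPS_MIN = 50       # lowest sub-rate speed quoted in the article
ALLOWED_MBPS_MAX = 10_000   # 10Gbps upper bound

def build_interconnect_request(name: str, speed_mbps: int, google_pop: str) -> dict:
    """Validate the requested sub-rate speed and build an API payload."""
    if not ALLOWED_MBPS_MIN <= speed_mbps <= ALLOWED_MBPS_MAX:
        raise ValueError(
            f"speed must be between {ALLOWED_MBPS_MIN} and {ALLOWED_MBPS_MAX} Mbps"
        )
    return {
        "connectionName": name,
        "rateLimitMbps": speed_mbps,
        "destination": google_pop,
    }

payload = build_interconnect_request("hybrid-link-1", 500, "google-pop-lon")
print(payload["rateLimitMbps"])  # 500
```

In practice such a payload would be sent to the provider's REST API, which handles the activation steps the article says are automated away.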


CAST compression IP integrates with Achronix eFPGA Technology Achronix Semiconductor Corporation, a company offering field programmable gate array (FPGA)-based hardware accelerator devices and embedded FPGA (eFPGA), has collaborated with CAST Incorporated, a semiconductor intellectual property company focusing on semiconductor IP for electronic system designers. CAST's high-performance lossless compression IP has been ported to support the Achronix portfolio of FPGA products, enabling efficient processing for data centre and mobile edge data transfer. CAST offers a hardware implementation of the lossless compression standard for Deflate, GZIP, and ZLIB that is compatible with software implementations used for compression or decompression. The hardware implementation provided by the ZipAccel core delivers high throughput – up to 100Gbps – with very high compression performance and low latency. Coupling this with Achronix Speedcore eFPGA technology enables a high-performance, low-power solution facilitating the movement and storage of big data. With the explosion of applications employing analytics, the need to transfer increasing amounts of information through bandwidth-limited communication channels is found everywhere from automotive systems to large financial institutions. The cost and power to transport data is becoming significant, and compression implemented with Achronix eFPGA can minimise power and maximise the capability of the network. The combination of CAST compression IP and Speedcore eFPGA IP on a custom SoC effectively increases the achievable throughput. In addition, developers can use the eFPGA to rapidly and efficiently implement data processing algorithms. Achronix, CAST,
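The interoperability point above is that the hardware core speaks the same Deflate standard as common software libraries. A minimal sketch of the software side, using Python's standard-library `zlib` (which implements Deflate), shows the lossless round trip such a core must remain compatible with:

```python
# Lossless round-trip with the Deflate standard via Python's stdlib zlib.
# A hardware core like the one described must produce/consume streams
# that software implementations like this one can decompress/compress.
import zlib

data = b"data centre telemetry record " * 100  # repetitive data compresses well

compressed = zlib.compress(data, level=9)
restored = zlib.decompress(compressed)

assert restored == data                      # lossless: original fully recovered
ratio = len(data) / len(compressed)
print(f"compression ratio: {ratio:.1f}x")
```

The hardware advantage is throughput, not the format: the article's 100Gbps figure is orders of magnitude beyond what a CPU-bound software implementation achieves on the same stream.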

Chatsworth Products helps Anthem transform legacy data centre As the second largest managed health care company in the United States, Anthem Inc.'s primary and largest data centre was experiencing growing pains typical of many legacy data centres: ensuring newer, higher-density equipment would have enough space, power and cooling, while keeping costs down. The company turned to Chatsworth Products (CPI) for a total service and product solution. Located in Richmond, Va., the 1,000-cabinet data centre was configured in traditional open air, hot/cold aisles and accounted for more than 82% of the electricity costs. The challenge was to make dramatic improvements in such a large, established data centre. "Capital One was building a modern data centre nearby. Through a networking opportunity, I was able to take a tour and saw Chatsworth products everywhere," explains Dean Wagstaff, director of Data Centre Operation at Anthem. "They talked about CPI's capabilities, products (particularly the cabinet and PDU), engineering, and value-added services as well. An unbiased opinion from those responsible for the site carried a lot of credibility." The path to improvement began with a complete containment strategy at both the cabinet and aisle level, composed of CPI products that are now a standard specification for all Anthem data centres. Since implementation three years ago, Anthem has experienced significant improvements. "We will continue investing in air containment strategies, resulting in increased yearly savings, which are already outstanding. Overall, it has been a great investment," concludes Dean. Chatsworth Products,

Tech Mahindra invests in the UK through a new R&D centre in Suffolk and new office in Salford Tech Mahindra, a provider of digital transformation, consulting and business reengineering services and solutions, has announced strategic investments in the UK. A research and development centre, 'Makers Lab', has been established to work in collaboration with its long-term client and partner, British Telecom (BT), at the Adastral Park research campus near Ipswich, home of BT Labs. Makers Lab is also part of the growing business incubation cluster Innovation Martlesham. Tech Mahindra is also setting up a brand new office in Salford to strengthen its presence across the UK while rolling out its apprentice programme in 2018. Makers Lab will focus its research on major technologies such as Artificial Intelligence (AI), Machine Learning (ML) and quantum computing to make citizen services and experiences simpler and easier, especially in the communications space. Through Makers Lab, its new office in Salford, and its apprentice programme, Tech Mahindra will engage more than 100 local people. Makers Lab is also connecting with the University of Suffolk to bring students on board as interns in the lab for various positions. CP Gurnani, CEO and MD of Tech Mahindra says, "Innovation is the key to survival in the digital future. The UK-India Innovation partnership draws from the best learning and skills of both developed and developing economies in shaping the digital future." Tech Mahindra,


The Bunker further enhances endpoint security posture with Palo Alto Networks Data centre and managed service provider, The Bunker, has selected technologies from Palo Alto Networks, the cybersecurity company, as part of its ongoing commitment to uphold the highest security standards for its customers. Following an evaluation of its endpoint security capabilities and anti-virus systems, The Bunker has implemented Palo Alto Networks' Traps, an advanced endpoint protection product, to boost its defences against both known and unknown endpoint threats.

As a result of Palo Alto Networks' offering, The Bunker will benefit from lower resource usage through automated monitoring processes which only examine the actions of executable files, rather than blanket scanning every file on disk. From there, any malicious executables are identified, blocked and automatically analysed in a secure sandbox environment. The Bunker can also now harness faster updates to prevention controls as part of the wider solution, enabling it to deliver a more efficient and secure service to its growing customer base.

Dave Allen, vice president, Western Europe at Palo Alto Networks says, “The Bunker has a reputation for maintaining the highest industry standards when it comes to security and compliance. With the enhanced malware protection of Traps, The Bunker will be able to detect and respond to advanced cyberattacks rapidly, offering much better breach prevention assurance in any environment.” The Bunker,

MyMeds&Me partners with Proact to enhance data security MyMeds&Me, a Software as a Service (SaaS) provider of adverse event and product quality complaint solutions, has partnered with Proact. MyMeds&Me has opted for Proact's comprehensive SIEM as a Service offering to ensure the security and privacy features of its pharmacovigilance solution Reportum are set to the highest possible standards. By selecting Proact's service, MyMeds&Me will benefit from incident management alongside round-the-clock monitoring and alerts. The service will be provided by experienced security analysts from Proact, a firm that has 25 years' experience of successfully delivering IT services.

SIEM as a Service is compliant with key regulators and is underpinned by market-leading technology. Dashboard access will improve the visibility of events within Reportum, minimising risk as MyMeds&Me's internal resources can spend less time investigating false alarms. Peter Javestad, acting CEO and president at Proact comments, "With access to our specialist vSOC team, which will be available to provide support 24x7, MyMeds&Me can have peace of mind that any cyber threats will be discovered at the earliest opportunity, allowing for prompt, effective responses." Proact, MyMeds&Me,


Densify appoints DataSolutions as distribution partner for the UK and Ireland Densify has announced the appointment of DataSolutions as a distribution partner to support expansion in the UK and Ireland. Densify's software as a service (SaaS) solution allows applications to become self-aware of their public cloud resource demands, continuously matching their needs with the best cloud supply. Densify is powered by Cloe, a 'cloud-learning' optimisation engine that continuously learns the applications' usage patterns and needs, and is aware of the major cloud suppliers' technologies and prices 24/7. Cloe customers are now driving 40% to 80% improvement in efficiency across their cloud environments, which has led to improved application health, increased automation, and lower cloud costs. Unlike traditional cloud tools that focus on the bill but don't fix the underlying issue, Cloe addresses the problem at the application and infrastructure layer. In practice, Densify provides results in the first 48 hours of deployment, with Cloe recommending the best cloud technologies for any given application. The solution also offers multi-cloud support, whereby applications are provided the right resources even when simultaneously using multiple cloud vendors. Densify,


Colt Data Centre Services pledges support to Tech She Can charter Colt Data Centre Services (DCS) has pledged to support Tech She Can, an industry charter launched to encourage inclusivity and increase the number of women working in the technology industry. The partnership aims to challenge stereotypes in the technology industry and encourage more women to get involved in the sector. Demand for data centre capacity is growing, and with this comes the need for talented individuals of all genders to join the field. According to research conducted by PwC into Women in Tech, over a quarter of female students have been put off a career in technology as it is 'too male-dominated', with only 3% of female respondents saying it's their first choice. This is evident in the fact that only 23% of STEM roles are held by women, and just 15% of STEM management roles are occupied by women. "The data centre industry is thriving, and we need new talent who can bring a variety of skills, experiences and backgrounds to support our business growth. One of the key barriers to getting more women into STEM careers is down to the lack of information about what working in the sector entails, as well as the opportunity and encouragement for these individuals to thrive," says Limor Brunner, vice president of HR, Colt Data Centre Services. Colt Data Centre Services,

Fisher Jones Greenwood stays connected with Six Degrees Next Generation Network Six Degrees, a UK-based cloud managed service provider, has announced that its Next Generation Network (NGN) has enabled UK-based law firm Fisher Jones Greenwood to achieve optimum efficiency and connectivity between all its branch locations and with the cloud. The partnership will ensure that the top Legal500 law firm's data and voice communications are reliable and efficient. As a rapidly growing firm with plans to expand to new locations, the team at Fisher Jones Greenwood faced the challenge of getting its IT and telecoms up and running, ready for when staff were set to move in. Six Degrees now provides MPLS connectivity and SIP telephony to Fisher Jones Greenwood's five office locations over the NGN. Six Degrees' L3VPN service addresses Fisher Jones Greenwood's need for a multi-site solution and keeps traffic flowing 24x7x365. The L3VPN service runs across Six Degrees' NGN, which is completely resilient across the core nodes with the ability to re-route in less than 50 milliseconds. Peter Carr, partner and head of IT at Fisher Jones Greenwood says, "Because Six Degrees manage all our connectivity for us, provisioning new services and office locations is much simpler. Six Degrees' end-to-end approach manages the sales, provisioning and circuit handover in a highly professional manner." Six Degrees,


Brenderup Group makes big move to the cloud with Interoute Interoute, a global cloud and network provider, has been selected by Brenderup Group, a manufacturer of trailers and load carrier systems, to host the company's server environment. All servers were migrated in one project to the Interoute Virtual Data Centre (VDC) cloud platform. "Interoute VDC combined with Interoute Edge SD-WAN provides a future-proof solution that delivers improved capacity and low latency, and at the same time we don't have to handle multiple providers," says Peter Nilsson, supply chain manager at Brenderup Group.

“The solution enables us to focus on our core business and facilitates our organic and inorganic growth in our current as well as new geographic markets and distribution channels. Interoute has proven to us before that it understands our needs and we look forward to keep developing our partnership.” Interoute has for several years delivered an MPLS network to Brenderup Group, whose operations cover six countries in Europe. Now, Brenderup Group has chosen to broaden the partnership with Interoute to support the

digital transformation of the company’s ICT infrastructure. The new solution combines Interoute VDC with Interoute Edge SD-WAN, a software-defined WAN service that reduces costs and improves the performance of cloud applications, by managing network traffic in an intelligent manner. It enables Brenderup Group to streamline product development and more easily implement future acquisitions. Brenderup Group, Interoute,

RingCentral expands in Asia Pacific with new office in Australia RingCentral Australia, a provider of enterprise cloud communications and collaboration solutions and a wholly owned subsidiary of RingCentral, has announced its expansion in Asia Pacific (APAC) with a new office in Australia. RingCentral is building on its APAC expansion with new leadership, channel partnerships, and local product offering. RingCentral has appointed Ben Swanson as director of channel sales for APAC. Ben has over 20 years of experience leading channel sales efforts

across Asia, Africa, Australia, and New Zealand for companies including Avaya and ShoreTel. Over 80% of IT purchasing in the region is through channel partners. RingCentral is working with strategic partners like Dialog Information Technology, a Google Premier partner, to bring businesses RingCentral cloud solutions. RingCentral provides a powerful communications and collaboration cloud solution for multinational companies to support globally distributed offices and

mobile employees. Unlike expensive legacy on-premises systems, RingCentral Office is a single solution that can be rapidly deployed and centrally managed. The RingCentral Office offering in Australia is delivered from service centres in Australia. Local customers also benefit from direct peering with tier one local operators across the region. In addition, customers and partners receive 24/7 technical support. RingCentral,


Pulse Secure virtual and cloud appliances expand secure access to services in hybrid IT environments Pulse Secure has announced new cloud and virtual appliances to protect access and support applications in hybrid IT environments. Enterprises are quickly moving to deploy hybrid IT, leveraging the cloud to introduce new user services and gain disaster recovery resiliency, as well as continuing to use the data centre when they must have total control of the application. The latest cloud-based Pulse Secure Appliance (PSA) allows enterprises to adapt access security frameworks to address changing application environments that blend data centre, Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) offerings. Frost & Sullivan notes that, "With enterprises moving away from data centre build-outs and building their infrastructure needs on public cloud, Infrastructure-as-a-Service (IaaS) becomes the strongest and fastest growing segment. It will soon surpass SaaS to become the second largest segment in the cloud industry." Pulse Secure helps enterprises accelerate this transformation. By using an adaptable access security

framework, enterprises can confidently move applications to the cloud and leverage IaaS and SaaS to reduce operational costs and expedite new end-user services. In times of crisis or disaster, enterprises have a simpler and faster way to normalise user productivity without changing user behaviour. Prakash Mana, vice president of product management at Pulse Secure says, “Increasingly, business-critical applications are moving to the cloud and our customers worry over security. Ideally, they would like to extend their existing security policies into a single security standard that uniformly protects both the data centre and cloud. Our cloud-based PSA helps them do exactly that. Now they can simply use their trusted Pulse Secure solution to protect enterprise information no matter where it is stored or how it is accessed.” Pulse Secure,

Rittal: A new enclosure platform for the energy world At the recent 'The smarter E' trade fair, Rittal unveiled its new VX25 large enclosure system. The VX25 is the first enclosure system designed specifically to boost productivity in control and switchgear manufacturing and Industry 4.0 value chains. The VX25 offers the highest possible quality and consistency of data, reduced complexity and savings in time, as well as safe assembly. More than 25 registered property rights demonstrate the high level of innovation involved. At the trade fair, Rittal demonstrated that its products can be used in cross-sector energy technologies with practical application solutions for photovoltaics (PV), energy storage and for e-mobility infrastructures. Photovoltaics: outdoor enclosures for extreme applications Naturally enough, photovoltaic systems are installed outdoors. The demands


made on the components installed – including outdoor enclosures – are correspondingly high. With its outdoor enclosure system, Rittal presented two variants for the installation of central converters: one variant for extreme environmental conditions (stainless steel, aluminium, double-walled, corrosivity category C4-H/C5-M in accordance with DIN EN ISO 12944) and one variant for moderate ambient conditions (sheet steel, single-walled and corrosivity category C3). Energy storage: infrastructure right to the container Infrastructure solutions for energy storage systems can also be implemented with Rittal components. The range of standardised, coordinated and globally available approved components extends from individual enclosure systems to complete container solutions. As a result, system integrators and plant constructors

are provided with solutions for mechanical, power distribution and air conditioning requirements from a single source and thus only need to integrate their battery modules. Rittal also presented a battery enclosure from Tesvolt to demonstrate its expertise in the growing energy storage market. This is based on a fully outdoor-capable enclosure for extreme environments, including outdoor heat exchangers. Possible areas of application include the provision of buffer functions for e-mobility charging infrastructures. With two further solutions, Rittal showed how battery storage enclosures based on the VX25 can be expanded, both with heavy-duty shelves (the 'Commeo' customer application) and with 19in systems. Rittal,


Western Digital drives capacity and improves TCO for cloud and enterprise data centres Enabling new lower levels of total cost of ownership (TCO) for cloud and enterprise customers, Western Digital Corporation has introduced the Ultrastar DC HC530 hard drive – at 14TB, no other CMR (conventional magnetic recording) hard drive in the industry offers a higher capacity. The breadth and depth of big data is driving the universal need for higher capacities across a broad spectrum of applications and workloads. Built on Western Digital’s fifth-generation HelioSeal technology, the Ultrastar DC HC530 drive is designed for public and private cloud environments where storage density, watt/TB and $/TB are critical parameters for creating the most cost efficient infrastructure. The data explosion caused by big data, IoT, artificial intelligence, machine learning, rich content and fast data applications is challenging hyperscale cloud data centres

and enterprises to efficiently build massive petabyte-scale infrastructures. This ability to cost-effectively scale up or scale out is business critical, not only for cloud service providers but for organisations leveraging big data analytics and machine learning in medical, science, agriculture and other fields seeking innovation, discoveries and unique insights, as well as for creating new business models. A follow-on to the industry's first 14TB SMR (shingled magnetic recording) drive, the Ultrastar DC HC530 is a 14TB CMR drive that delivers drop-in simplicity for random write workloads in enterprise and cloud data centres. Since 2014, the company's unique, patented HelioSeal process has sealed helium in the drive to provide unbeatable capacity, exceptional power efficiency and long-term data centre reliability. Its low-power design does not compromise performance, while contributing to its overall TCO advantages. Both SAS and SATA interfaces will be available. Western Digital Corporation,

Hirschmann delivers guaranteed bandwidth with power transport nodes Belden has released a new MPLS-TP based backbone switch family with best-in-class provisioning software – the Hirschmann Dragon PTN with HiProvision. This new backbone network device reliably and efficiently transports mission-critical data and guarantees bandwidth in wide area networks through its support of MPLS-TP technology. "Ethernet-based technologies are simple, interoperable, predictable and cost efficient compared to complicated and expensive legacy solutions such as SDH/SONET, making MPLS-TP a preferred choice for large-scale networks," says Vinod Rana, product manager at Belden. "The new Dragon PTN with HiProvision enables customers to predict the behaviour of data as it goes across the network, guaranteeing bandwidth and ensuring uninterrupted communication."

Guaranteeing bandwidth with the same deterministic behaviour as SDH/SONET while eliminating their disadvantages for packet-based communication, MPLS-TP technology equips network architects to design efficient and cost-effective networks and provides a migration path for legacy equipment. With Dragon PTN and HiProvision, engineers get a fully integrated Ethernet-based backbone transmission system that allows them to: • Seamlessly integrate with legacy systems through multiple modular, redundant interface options and port types. • Secure a complete, integrated solution for provisioning and managing large networks from one vendor, making upgrades and implementation efficient and streamlined. • Easily configure and manage network complexity with HiProvision network provisioning software.

• Create redundant networks integrating best-of-breed technologies from wide area networking and Industrial Ethernet networking. Dragon PTN with HiProvision is best suited for transportation applications that rely on mission-critical data transfer, including mass transit systems, railways and metro stations. The backbone device is also ideal for other harsh industrial environments, such as power transmission and distribution and oil and gas applications. Hirschmann,


final thought

Trendy Clive Partridge, product manager IT Infrastructure at Rittal, discusses data centre trends and what really matters to IT managers.


Digital transformation is in full swing, to the extent that at least half the global value chain could be digitised by 2021, according to a forecast from IDC market researchers. Faced with high electricity costs, it is becoming increasingly important for companies to modernise the IT landscape and make their data centre operations more efficient. The following review of managed cloud services, edge computing and direct current in the data centre analyses which are the most suitable technologies to make ongoing operations both cost effective and future-proof.


The multi-cloud trend Hybrid multi-cloud environments will dominate the future IT agenda. According to IDC, more than 90% of companies could be using multi-cloud platforms by 2021. There are many reasons why. For one thing, there is no one-stop cloud provider that can meet all the requirements; complete cloud stacks always come from multiple providers. Moreover, performance, latencies, compliance and risk management often have to be implemented individually, sometimes with different cloud providers. Typical cloud services include infrastructure services

(IaaS), applications (SaaS), and development platforms (PaaS). Those who believe the sector is becoming too complex can rely on external providers to deliver managed cloud services.  Cloud systems in data centres can be operated in a completely fail-safe way and maintained by an IT service provider, while users can easily access its resources via their web browser or a desktop application. To support this shift in service provision, Rittal and its partners will increasingly be offering turnkey data centres including cloud platforms and managed services for fail-safe infrastructures.

final thought

Trend towards edge computing In future, as well as expanding central data centres, many companies will be focusing more intensively on establishing decentralised IT capacities. The driving forces are, in part, modern Industry 4.0 (IoT) applications. Automated production facilities mean a large amount of sensor data has to be processed on site in real-time; data transfer to a central data centre would delay real-time processing and overload networks and legacy systems. And that’s not all. Many other Internet of Things (IoT) scenarios also need extra ‘edge’ data centres. These include networked households and smart homes, wearable fitness trackers and smart watches, as well as networked cars and IT infrastructures in smart cities. By 2019, 40% of IoT data could be processed and analysed by edge IT systems, the IDC analysts say. The new 5G mobile standard will also drastically increase the volume of data needing processing. At data transfer rates as fast as 10Gbps, for example, a movie can be transmitted in HD resolution in just a few seconds. Those wanting to run IoT infrastructures as part of fast 5G networks should make sure that the required server performance is provided at an early stage so that applications can use the full network capacity. Edge data centres can be used for this purpose. They enable the rapid and decentralised establishment of IT infrastructures to supply remote production sites or smart cities quickly with more computing power on a selective basis. But what makes an edge data centre stand out? They are turnkey IT solutions, which are modular and scalable, either as racks, or complete, within containers and they are suitable for companies of all sizes.
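The 5G claim above is simple arithmetic: transfer time is file size divided by link rate. A quick sketch (the 5GB movie size is an illustrative assumption) confirms the "few seconds" figure:

```python
# Transfer time = size / rate. At the 10Gbps peak rate cited for 5G,
# an HD movie (assumed here to be 5GB) takes only seconds to move.

def transfer_seconds(size_gb: float, rate_gbps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and contention."""
    size_gbits = size_gb * 8       # gigabytes -> gigabits
    return size_gbits / rate_gbps

print(transfer_seconds(5, 10))     # 4.0 seconds at the full 10Gbps
print(transfer_seconds(5, 0.05))  # vs ~800 seconds on a 50Mbps link
```

Real-world throughput will be lower once protocol overhead and radio conditions are factored in, but the order of magnitude is what drives the edge capacity argument: networks this fast shift the bottleneck to the servers behind them.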

The components for cooling, power supply, monitoring and security are pre-installed and coordinated with each other; so an edge data centre can be created very quickly.

Higher energy efficiency with DC racks

“By 2019, 40% of IoT data could be processed and analysed by edge IT systems.”

That being said, central and homogenous, hyperscale data centres are still going to be needed. Hyperscale infrastructure is laid out for horizontal scalability to provide the highest levels of performance throughput, as well as the redundancy necessary for fault tolerance and high levels of availability. Operators, of course, have to optimise the running costs of their centres. DC racks are one solution, improving energy efficiency. Two new IT rack standards, the OCP (Open Compute Project) and Open19, have become established on the market. Inside the IT rack, a single central power pack supplies the active IT components with DC power. This reduces the energy costs of each rack by about 5%. Certainly worthy of consideration.
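That 5% figure is easy to put in pounds and pence. The sketch below assumes an illustrative 10kW rack load and electricity price (both assumptions, not Rittal figures) to show the annual saving per rack:

```python
# Annual saving from the ~5% per-rack energy reduction cited for
# DC power distribution. Rack load and tariff are illustrative.

HOURS_PER_YEAR = 8760

def annual_saving(rack_kw: float, saving_fraction: float, price_per_kwh: float) -> float:
    """Yearly cost saved per rack from a fractional energy reduction."""
    kwh_saved = rack_kw * HOURS_PER_YEAR * saving_fraction
    return kwh_saved * price_per_kwh

# e.g. a 10kW rack, 5% saving, £0.12/kWh
print(round(annual_saving(10, 0.05, 0.12), 2))  # 525.6 per rack per year
```

Multiplied across hundreds of racks in a hyperscale facility, a per-rack saving of this size quickly justifies the switch to a single central DC power pack.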

IT cooling concept trends Alternative energy and cooling concepts will further reduce operating costs. Electricity from renewable energy sources, with air or seawater cooling, is one example, as used by the Lefdal Mine Data Centre in Norway. This is a data centre built in a former mine. It is cooled with seawater and utilises electricity generated from renewable sources. Companies can procure cloud services directly or operate their own private cloud systems. Energy recovery is another IT cooling concept for higher efficiency. This uses the waste heat generated in the data centre for building climate control and water heating. The technology itself is not new, but the aim is to develop a long-term strategy that exceeds the usual ROI calculation of three to five years. Rittal,


Featured protection devices: TVS Diode SMAJ58, TVS Diode Array SP4044, PolySwitch PTC
COMPLETE PROTECTION SOLUTIONS FOR ETHERNET INTERFACES Littelfuse Solutions for Voltage Surge and Overcurrent Threats Ethernet is a Local Area Network (LAN) technology that was standardized as IEEE 802.3. Ethernet is increasingly being used in applications located in harsh environments that are subject to overvoltage events such as electrostatic discharges (ESD), electrical fast transients (EFT), cable discharge events (CDE) and lightning-induced events. The 10GbE and 1GbE versions of Ethernet are very sensitive to any additional line loading; therefore, any protection components and circuits must be carefully considered to avoid degrading Ethernet's intrinsic high data rates and 100-meter reach capability. Littelfuse offers circuit designers a variety of overvoltage solutions. For product and application guidance information please visit:

DCN June 2018  