data centre news

April 2018

Made in Britain Penn Elcom talks to DCN about what it does, how it’s done, latest innovations and exhibiting at EI Live.

inside... Special Feature

Cables, enclosures, cabinets and racks

Centre of Attention Case Study

Eaton takes on a university challenge

ONI discusses how you could be getting ripped off by your cloud services



in this issue…


04 Welcome
Cyber safety.

07 Industry News
A challenging hiring environment lies ahead for tech executives.

14 Centre of Attention
Kevin Kivlochan of ONI tells us why we're being ripped off by cloud services.

17 Meet Me Room
Jeff VonDeylen of Ensono talks industry changes, current projects and tackling challenges head on.

20 Case Study
Eaton takes on a university challenge.

SPECIAL FEATURE: cabling, enclosures, cabinets & racks

22 Reduce downtime by monitoring your DC environment. Richard Grundy of Avtech tells us how to spot the early signs of an issue.

24 Justin Ellis of Comms Express discusses why cabling is so crucial to future-proofing today's data centre.

26 James Green of Equinix explains why cabling in the data centre is the foundation of interconnected business.

28 Paul Mercina of Park Place Technologies gives us tips, tricks and how to troubleshoot when it comes to network racks.

30 Nicolas Roussel of Siemon takes a closer look at the role that polarity plays in MTP fibre technology.

32 GDPR
Mark Baker of Canonical on why it's okay for businesses to trust public cloud services with their compliance needs.

34 Data Challenges
Jeff Fried of Intersystems explains how to turn your data from challenge to opportunity.

36 Serverless Computing
The new paradigm that is 'serverless computing', explained by John Jainschigg of Opsview.

38 Remote Monitoring
Jixing Shen of Schneider Electric discusses the benefits of 24/7 vision over your assets.

40 Security
IT security is complex. Andrew Lintell of Tufin highlights some manageable solutions that won't damage your defences.

42 Projects and Agreements
Direct cloud connectivity to the Nordics from Cinia and Megaport.

48 Company Showcase
Geist future-proofs DC power management with its latest PDU platform.

50 Final Thought
Guy England of Lenovo on why the future is hybrid.

April 2018 | 3

data centre news

Editor Claire Fletcher

Sales Director Ian Kitchener – 01634 673163

Studio Manager Ben Bristow – 01634 673163

Editorial Coordinator Jordan O'Brien – 01634 673163

Designer Jon Appleton

Business Support Administrator Carol Gylby – 01634 673163

Managing Director David Kitchener – 01634 673163

Accounts 01634 673163

Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173

The editor and publishers do not necessarily agree with the views expressed by contributors, nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2018. All rights reserved.



welcome

In today's digital climate, cyber attacks and data breaches are par for the course. It is almost exactly a year since the WannaCry ransomware attack infected more than 230,000 computers across the world, with hackers demanding ransom payments in the cryptocurrency Bitcoin, in 28 different languages. The attack was spread by various methods, including phishing emails, and took hold on systems without up-to-date security patches. In the UK, the attack most prominently affected the NHS, hitting around 40 trusts. It was later revealed that networks had been left vulnerable because they were still running outdated Windows XP software.

You'd think this attack would have been quickly stopped by the government, or by a team of cyber boffins working tirelessly to put an end to the ransomware that caused chaos across the globe. But it was in fact a 22-year-old self-taught 'cyber genius', working from his childhood bedroom in Devon, who inadvertently ended up saving the NHS. The accidental man of the hour went on to work with the government's National Cyber Security Centre, helping to prevent further attacks, but was later arrested by the FBI, accused of creating and distributing another malware virus. It seems the age-old phrase 'trust no one' really does ring true when it comes to cyber safety.

This month, it's Facebook that's been under the spotlight. And although the social media giant does a lot of things right, its two billion users put their faith in the platform to keep their data safe. The company is currently under intense scrutiny over a data breach in which the information of 87 million users could have been shared with Cambridge Analytica, a data mining firm based in the UK. The UK firm and Facebook were already facing severe criticism when reports emerged that Cambridge Analytica had obtained millions of profiles of US citizens and used the data to build a software program to predict and influence voters in the election. Could this particular data breach be responsible for President Trump? It's more than likely we will never know.

With WannaCry causing the NHS to postpone critical operations and procedures, and Donald Trump now threatening World War 3, these two examples alone are stark reminders of how easily a cyber attack can compromise not only privacy, but potentially human life. Why security isn't top of the agenda for organisations of all sizes boggles the mind. For hackers, infiltrating systems is their prime focus 24/7, and they are incredibly good at what they do. So while organisations leave software unpatched and out of date, operated by staff uneducated in how to avoid an attack, how do they hope to stand a chance against professional cyber criminals who are always one step ahead? Should you have any opinions or comments on what organisations can do to protect themselves, please write to: claire.

Claire Fletcher, editor


The Global Leader in Technical Education for the Digital Infrastructure Industry

Designing and delivering technical education programs for the data centre and network infrastructure sectors across the world. UK Headquarters: +44 (0)1284 767100

COMPLETE PROTECTION SOLUTIONS FOR ETHERNET INTERFACES

Littelfuse Solutions for Voltage Surge and Overcurrent Threats – featuring the SMAJ58 TVS diode, SP4044 TVS diode array and PolySwitch PTC.

Ethernet is a Local Area Network (LAN) technology standardised as IEEE 802.3. It is increasingly being used in harsh environments that are subject to overvoltage events such as electrostatic discharge (ESD), electrical fast transients (EFT), cable discharge events (CDE) and lightning-induced surges. The 10GbE and 1GbE versions of Ethernet are very sensitive to any additional line loading; any protection components and circuits must therefore be carefully chosen to avoid degrading Ethernet's intrinsic high data rates and 100-metre reach capability. Littelfuse offers circuit designers a variety of overvoltage protection solutions. For product and application guidance information please visit:

industry news

Over half of remote workers spend up to one day a week connected to unsecured networks

A study by OneLogin, the identity management provider bringing speed and security to the modern enterprise, has found that UK businesses that provide their employees with the benefit of remote working are struggling to find a balance between productivity and security. In fact, over half of remote workers spend up to one day per week connected to unsecured networks, leaving organisations open to a host of cyber threats. The findings are clear: VPNs are notoriously prone to breaking down, with 67% of businesses experiencing up to a week of VPN downtime in the past year, and 10% reporting that their VPN was down for more than a week. In short, a VPN doesn't support productivity, it compromises it. Alvaro Hoyos, chief information security officer at OneLogin, commented, "With VPNs proving so unreliable, people are more likely to turn to potentially unsecured networks, which could prove catastrophic for a business's cybersecurity. This could be devastating, as data breaches could leave confidential documents in the wrong hands and can be incredibly costly to remediate. By using next-generation mobile container technology, organisations can extend endpoint security from desktops to mobile devices and thereby enjoy a unified endpoint management solution."

28% of IT decision makers unsure of their regulatory compliance needs

Latest research from Pulsant, a UK provider of hybrid cloud solutions, shows a lack of alignment when it comes to managing and maintaining compliance. Almost one in three IT decision makers don't know which regulatory frameworks their organisations need to align to, and 33% see managing IT compliance as a C-level issue. In addition, IT decision makers identified a number of other challenges in IT compliance, including the 40% who see maintaining it as a major issue, while 43% say managing it is a problem. A lack of time, budget and skills was also cited as a challenge. These could be exacerbated by the fact that 39% of respondents in the survey use between one and three FTEs each month to deal with IT compliance, with 17% of budgets allocated to the task. "Compliance itself is a challenge that nearly every business faces, an endeavour compounded by lack of budget, understanding and resources. And it is not something that's over once compliance has been reached; maintaining that compliance is another major challenge considering how quickly the market shifts, regulations change and businesses evolve," says Javid Khan, CTO of LayerV, a Pulsant company. The research also showed that 83% of IT decision makers said there was room for improvement when it comes to the tools and technologies used in managing compliance. The most cited desired features include real-time alerts, better reporting, open integration with other compliance tools, and more comprehensive monitoring capabilities.


Research recommends united action to attract more women into UK IT leadership

A research project by the OU finds there are continuing barriers to overcome before women are better represented in highly-skilled positions in UK Information Technology (IT), with lessons to be learned from their counterparts in India. The global IT sector is characterised by low participation of women, and the UK is no exception. Many attempts have been made to address the small and falling numbers of women in IT education, training and employment, because narrowing the gender gap in high-productivity sectors like IT would add £150 billion to the UK GDP forecast for 2025. However, these attempts have met with little success. The ESRC-funded project – Gender, Skilled Migration and IT industry: A comparative study of India and the UK – took a novel approach towards addressing this problem, comparing the profile of the IT industry in the UK with that in India. It considered why the IT sector in India, in contrast to many places including the UK, manages to attract such a high proportion of women into highly-skilled roles. It also gained insights from migrant women and men who move between the two countries, and have experience of both cultures, to understand the gender norms and best practice in each country. Gender, Skilled Migration and IT,

Cryptojacking increases by 135% in the UK, data centres particularly at risk

Cryptojacking – a technique used by cybercriminals to compromise large pools of physical computers, cloud infrastructures and legitimate websites to collectively mine cryptocurrency on cybercriminals' behalf – is on the rise, according to a new report from Bitdefender. The more cryptocurrency has been mined, the more resource-intensive the process becomes. Large data centres and cloud infrastructures are therefore expected to be particularly at risk, as their elastic computing power enables cybercriminals to virtually spawn and control large mining farms without paying any bills. Cybercriminals sometimes leverage remote code execution (RCE) vulnerabilities to deliver crypto-mining malware to targeted machines. Even the leaked EternalBlue NSA exploit, used in 2017 to spread the WannaCry ransomware to more than 150 countries, was recently used to target servers to mine cryptocurrency. The rising number of attacks and newly employed advanced attack techniques can penetrate data centres and private and public clouds, which rely on the massive hardware infrastructures that crypto miners yearn for. Bitdefender's research shows a significant increase in global crypto-mining reports over the past four months (130.10%). This has coincided with a slight decline in ransomware reports in the UK: the total number dropped from 18.29% in September 2017 to 14.59% in February 2018. Coin miners, however, increased by 135.17% in less than six months. Bitdefender,


Research finds poor control over cloud services is preventing organisations from achieving cloud objectives

New research from IDE Group has discovered that 76% of companies are unable to control their cloud services and shape them to meet changing needs. Relatedly, only 26% of companies feel able to conclude that their cloud strategy has been totally successful in achieving its targeted objectives. IDE Group polled 100 IT managers at UK companies of between 500 and 5,000 employees, all of which had deployed at least one cloud service. Questions focused on the prevalence and significance of challenges in real-world cloud environments related to control of services. 64% have implemented a hybrid cloud solution, of which 68% don't have full oversight and control of their cloud services. When considering control of multiple cloud services, the biggest area of concern is security and sovereignty of data (75%). At 55%, access to information is the second biggest area of concern, and 93% would value a service giving effective control over multiple cloud services. "Maintaining complete control over cloud services is fundamental to aligning tactics with strategy and delivering the outcomes that companies want," said Merlin Gillespie, strategy director at IDE Group. "Our research shows significant gaps in control, often originating right at the start of a company's cloud journey and widening as their environment becomes more complex." IDE Group,

Gartner: Worldwide IoT security spending will reach $1.5 billion in 2018

Internet of Things (IoT)-based attacks are already a reality. A recent Gartner survey found that nearly 20% of organisations observed at least one IoT-based attack in the past three years. To protect against those threats, Gartner forecasts that worldwide spending on IoT security will reach $1.5 billion in 2018, a 28% increase from 2017 spending of $1.2 billion. "In IoT initiatives, organisations often don't have control over the source and nature of the software and hardware being utilised by smart connected devices," said Ruggero Contu, research director at Gartner. "We expect to see demand for tools and services aimed at improving discovery and asset management, software and hardware security assessment, and penetration testing. In addition, organisations will look to increase their understanding of the implications of externalising network connectivity." These factors will be the main drivers of spending growth for the forecast period, with spending on IoT security expected to reach $3.1 billion in 2021. Despite the steady year-over-year growth in worldwide spending, Gartner predicts that through 2020, the biggest inhibitor to growth for IoT security will come from a lack of prioritisation and implementation of security best practices and tools in IoT initiative planning. This will hamper the potential spend on IoT security by 80%. Gartner,
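As a rough sanity check on those forecast figures (this calculation is ours, not Gartner's), the growth implied by spending rising from $1.5 billion in 2018 to $3.1 billion in 2021 can be expressed as a compound annual growth rate:

```python
# Implied compound annual growth rate (CAGR) behind the forecast:
# worldwide IoT security spending of $1.5bn in 2018 rising to $3.1bn in 2021.
spend_2018 = 1.5  # billions of US dollars
spend_2021 = 3.1
years = 2021 - 2018

# CAGR = (end / start)^(1/years) - 1
cagr = (spend_2021 / spend_2018) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 27% a year
```

That works out at just over 27% per year, broadly consistent with the 28% rise reported from 2017 to 2018.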

UK tech executives see challenging hiring environment ahead

About one-third of UK tech executives surveyed by CompTIA think 2018 will be moderately more challenging than 2017 when it comes to recruiting new technology workers. Another 43% of executives say 2018 will probably be on par with last year. "With employer demand for tech talent routinely outstripping supply, the year ahead will force more organisations to rethink their approaches to recruiting, training and talent management," said Graham Hunter, CompTIA's vice president for skills certification in Europe and the Middle East. Even with the uncertainty over hiring, UK tech executives are generally optimistic about business prospects for the year. CompTIA forecasts a UK industry growth rate of 5.1% for 2018, with upside potential of 7.2%. That's in line with CompTIA's global forecast of 5% growth. "In the report, we can see year-on-year growth in the UK IT workforce and this is matched in the growth we've seen amongst our membership," commented Estelle Johannes, director, member communities at CompTIA. "This is set to increase as the influence of emerging tech such as blockchain, AI and AR/VR revolutionises the way the industry conducts business, which is why we are putting our core focus into helping our members transition to new ways of working and helping the sector grow." CompTIA,


Companies will need to spend 172 hours a month simply on data searches post-GDPR

A significant number of EU businesses are sleepwalking towards massive penalties due to a lack of awareness of the scale of the General Data Protection Regulation (GDPR) data collection challenge. This is a central finding of a major report released by Senzing, a California-based software technology company. The research – Finding the Missing Link in GDPR Compliance – is based on the views of more than 1,000 senior executives from companies in the UK, France, Germany, Spain and Italy. It finds that, on average, a company will get 89 GDPR enquiries per month, for which it will need to search an average of 23 different databases, each search taking about five minutes. The total time spent simply looking for data each month will be more than 10,300 minutes (172 hours), equating to over eight hours of searching per working day – or one employee dedicated solely to GDPR enquiries. The issue is even more pronounced for large companies. These expect to get an average of 246 GDPR enquiries per month, for which they will need to search an average of 43 different databases, each search taking more than seven minutes. They will spend more than 75,500 minutes per month (1,259 hours), which equates to nearly 60 hours of searching per working day – or 7.5 employees dedicated solely to GDPR enquiries every day. Although 44% of companies say they are 'concerned' about their ability to be GDPR compliant – rising to 60% in the case of large companies – many businesses are demonstrating a dangerous lack of awareness of GDPR and overconfidence that they will not be affected. Senzing,

[Table: average GDPR enquiries per month, average number of databases searched, average minutes per enquiry, total minutes per month, and total hours of searching per working day, by company segment – for example, 5 minutes 2 seconds per enquiry and 8 hours 11 minutes of searching per working day across all companies above 10 employees, rising to 7 minutes 8 seconds per enquiry and 59 hours 56 minutes per working day for the largest companies.]
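The arithmetic behind the headline numbers is simply enquiries per month, multiplied by databases searched, multiplied by time per search. A quick sketch reproduces the report's figures; the 21-working-day month is our assumption, not a number stated by Senzing:

```python
# Reproduce the GDPR search-time estimates:
# enquiries per month x databases searched x time per search.
def monthly_search_minutes(enquiries, databases, seconds_per_search):
    """Total minutes per month spent searching databases for GDPR enquiries."""
    return enquiries * databases * seconds_per_search / 60

WORKING_DAYS_PER_MONTH = 21  # assumed working days per month

# Average company: 89 enquiries, 23 databases, 5 min 2 s per search.
avg = monthly_search_minutes(89, 23, 5 * 60 + 2)
print(f"{avg:,.0f} min/month, {avg / 60 / WORKING_DAYS_PER_MONTH:.1f} h/working day")
# about 10,300 minutes a month and roughly 8.2 hours per working day

# Large company: 246 enquiries, 43 databases, 7 min 8 s per search.
large = monthly_search_minutes(246, 43, 7 * 60 + 8)
print(f"{large:,.0f} min/month, {large / 60 / WORKING_DAYS_PER_MONTH:.1f} h/working day")
# roughly 75,500 minutes a month and about 60 hours per working day
```

Both results land within rounding of the published figures, which suggests the report's totals are straight multiplication of the survey averages.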

Rush to the cloud bypasses security and exposes organisations to risk

The majority (68%) of cybersecurity professionals working in large organisations in the UK say that a rush to the cloud is not taking full account of the security risks. This is highlighted by a new cloud security study conducted for Palo Alto Networks, the next-generation security company. The survey polled businesses across Europe and the Middle East that are actively adopting the cloud for their data, applications and services needs. It shows that cybersecurity professionals recognise that they must do much more to match the pace of the business on cloud, but that security is too often viewed as a business inhibitor when new applications and services are adopted. Over half (51%) of cybersecurity professionals in the UK report misalignment between them and the rest of the business on cloud and cybersecurity issues, including cybersecurity's role in making cloud adoption successful. Less than half (39%) of respondents are very confident that existing cybersecurity in the public cloud is working well, even for sensitive areas like finance. Only around 15% said they were able to maintain consistent, enterprise-class cybersecurity across their cloud(s), networks and endpoints. Indeed, around half (47%) of UK respondents' organisations say they take different, segmented approaches to cybersecurity today, but would like to have the same consistent visibility, command and control over cybersecurity across all areas. Palo Alto,

on the cover

Made in Britain

Penn Elcom is a worldwide manufacturer and distributor of flight case hardware, speaker hardware and 19in racking enclosures, all proudly manufactured here in the UK. Offering extensive design and manufacturing capabilities, Penn Elcom also produces bespoke, custom-designed products. The 19in manufacturer takes some time to talk to DCN about what it does, how it's done and its latest innovations, as well as exhibiting at the upcoming EI Live! show next month.

Facilities and services

At Penn Elcom we believe we are a cut above the rest when it comes to 19in manufacturing. Not only do we manufacture all our racking in the UK, and have done so for 44 years, but we also design, prototype and dispatch from our own facility. It is this centralised approach to our business that gives us complete control over not just pricing, but product quality and customer satisfaction. We have talented and dedicated staff with an unrivalled knowledge base, and our Hastings facility gives us the resources to create customised or bespoke 19in racking solutions. Our extensive standard range includes: standalone racks, wall mount racks, open tower racks, slide and rotate racks, server enclosures, flat pack racks, shock mount systems, rack cooling systems, cable management products, rack lights and numerous rack accessories. We offer value-added services on any of our racks. This includes branded rack panels that can be either screen printed or laser etched

Penn Elcom manufactures, designs, prototypes and dispatches all its racking from its UK facility in Hastings.


with a chosen combination of logos, text, and contact information. For designs that don't require colour, we can laser etch either black or silver brushed aluminium panels. If you require a multi-coloured finish, your desired design can be screen printed onto the panel. We can also powder coat your rack in a custom colour. We're also able to produce bespoke solutions, and our many capabilities include laser cutting, CNC punching, die casting, press braking and powder coating. In 2017, we invested in a Salvagnini P4 automatic press brake for our Hastings facility. The Salvagnini P4 folds blank steel and aluminium profiles, and automatically loads, forms and unloads each product as efficiently as possible. It benefits us through its consistency and operational speed: the machine produces every product identically, keeping quality consistently high across our product range, and can be run 24 hours a day at full capacity with the minimal supervision of one operator. The P4 is also completely contained and is surrounded by a laser trip system that ensures no human contact can be made, all but eliminating the risk of injury. It is not only our Hastings facility that has benefitted from a prosperous year for Penn Elcom, however. Globally, we invested over £2.2m in 2017 in more efficient machinery and pioneering technology to keep us at the forefront of the metal fabrication industry.

Our newest racking innovations Our extensive 19in range has recently expanded, and two new but equally important products have been launched in the last few months; the R4000 rack fan range and the RHF hinged wall bracket for wall mount racks.

“A centralised approach to business gives us complete control over not just pricing, but product quality and customer satisfaction.”

Our R4000 rack fan range is a new range of ultra-quiet fan trays, designed to be fitted to the top of one of our R4000 racks. These revolutionary fans run quickly and efficiently, but at an almost inaudible noise level.

These Swiss-designed fans operate efficiently without compromise. Dust and high temperatures have less effect on them than on standard bearings, making them the perfect choice for reliability – this fan range's life span is up to five times longer than that of comparable models. Our R6400-RHF is a hinged wall bracket that turns our R6400 and R6600 wall mount cabinets into hinged double-section cabinets. The bracket is an unobtrusive fixture that can easily support the weight of a fully loaded rack, and enables it to swing securely away from the wall. This gives installation professionals unrivalled accessibility to the rear of the rack, making it easier than ever to run cables, replace equipment parts and maintain your rack's contents. The bracket is available for 6U, 9U, 12U and 18U racks.

We're exhibiting at EI Live! Penn Elcom will be exhibiting at EI Live! at Sandown Park, Surrey, May 9-10. We'll be showing a range of new products, and are excited to connect with other companies and our customers. Find us on stands 45 and 46! Penn Elcom, 01424 429 641

centre of attention

Are you being ripped off? It's time for a better, less complex cloud pricing model, according to Kevin Kivlochan, director of sales and marketing at IT solutions provider ONI. Kevin not only believes it's high time the business model changed, providing organisations with a fairer system and better overall value – he's also doing something about it.


We're all used to hearing about the virtues of the big players in cloud computing, namely AWS, Google and Azure. Let's face it, they must have done something right to be so dominant in the market. But what was once positioned as a simple and easy-to-embrace proposition, compared to an on-premise alternative, has now become, quite frankly, a mathematical nightmare that primarily benefits the suppliers and not their customers. While a very simple and basic cloud service setup can be appealingly low cost (even free), which is great for startups and small businesses, moderately sized and larger organisations are discovering that this low bar to entry ultimately comes at an unexpected price. If you're at all familiar with cloud pricing models, you'll know that they can go from simple to extremely complex in less time than it takes to spin up a Linux server. It is often these almost impenetrable pricing systems that leave business leaders and financial officers struggling to keep track of all the costs and charges. Little wonder they question whether the cloud is really worth all the effort in the first place.


Choices, choices

In the good old days (circa 2006), pay-as-you-go cloud services, most notably AWS, were a relatively new, groundbreaking and feature-limited thing. Options were limited to a handful of services and prices were favourable. You got to pay for what you used without buying the whole farm, so to speak. Fast forward to the present and you'll find that AWS now offers a staggering 200+ products across 70 services. Not so simple anymore. But don't worry, AWS has its very own 'Simple Monthly Calculator'. So simple, in fact, that there's even a new growth market of third-party tools to help you make the most of it. Inevitably, if you are going to offer such a breadth of products and services, simplicity is one of the first things thrown out of the window. Google and Azure aren't far behind either. They all have a myriad of options and, more recently, a series of pricing incentives that cover a whole host of lock-in scenarios, including reserved future usage pricing and usage discounts when thresholds rise above certain percentiles. It's slightly ironic that while compute, storage and other cloud prices are dropping through mass adoption and economies of scale, more and more businesses are experiencing a disturbing rise in overall costs. At least, that's what many CFOs seem to think.

The best of all possible worlds

But let's assume for the moment that the current wisdom of moving a substantial amount of your business services to the cloud – hybrid or otherwise – is the right course of action. How do you work out your best option? After all, an identically specified server on AWS, Azure or Google is still an identically specified server, is it not? Think of it a bit like a utility such as electricity or gas. Have you ever found yourself thinking that the electricity or gas supplied to households in the south is of far better quality than that supplied in the north?

"The cloud is still clinging on to most of its historic pricing models."

Whilst I accept that there are many more choices of cloud provider than there are consumer utility equivalents, the comparison between the two services is not a redundant one. Indeed, as consumers we are today seeing governmental intervention to improve service, value and price transparency – and for the very same reasons we currently face with cloud services: the pricing/value model for most businesses has become so complex and confusing that we really don't know how to spot a good or a bad deal until it's far too late. It's high time that cloud services got a simpler costing model.

What price paradise?

At ONI, we have introduced an open, easy-to-understand and price-transparent tariff system for all of our cloud services. Comprising just 12 product lines, it turns what was once a complex service-selection minefield into something easier to understand and fairer to cloud customers. That's it: a dozen options that cover everything you'll need for your hybrid or cloud solution today and, more importantly, tomorrow. To explain this approach with some context, do you remember when mobile phones first came out? Back in the day, when they started becoming mainstream, it was commonplace to pay as much as 50p per call and 10p per text. What a rip-off! But we all paid it and pretended it was okay. Can you imagine any company trying this on today and succeeding? Yet the cloud is still clinging on to most of its historic pricing models. Models where you still pay for both your ingress and egress usage. Models where pretty much everything is billed individually, down to the minutiae. Is anyone actually surprised that many businesses are opting to do things for themselves, preferring bare-metal solutions and bringing services back in-house? It's not what they really want to do; it's driven by hard economics. Just like the mobile industry, change is inevitable in the cloud. So why wait? Why not force the change now? ONI has reached the point where we need to respond positively to growing customer demands and simplify our solution offerings, by adopting a model that is transparent, reasonably priced and all-inclusive. Don't get me wrong, there's no one-size-fits-all solution out there – no more than there is for mobile phones. You don't, however, need hundreds of selectable add-ons to realise your perfect cloud solution. Are we doing anything revolutionary here? I don't think so. We're addressing an issue that should have been addressed some time ago. Are we doing the right thing for customers? Absolutely. ONI, 01582 429 999





The Smart Building & Home Automation Trade Show







The EI Live! show is the only dedicated custom install and home automation exhibition in the UK, catering for every smart building professional's requirements and purposely designed to give you a truly rewarding and informative show.

9th - 10th May 2018 Sandown Park, Surrey Learn more at


meet me room

Jeff VonDeylen – Ensono Jeff VonDeylen, CEO at Ensono – voted one of the Sunday Times 100 Best Companies to Work For – discusses changes he's seen within the industry, current projects and how to deal with challenges head on.

What were you doing before you joined Ensono? Before Ensono I was president of Savvis, a data centre managed service company acquired by CenturyLink in 2011. We operated as a subdivision for a time before I left.

Looking back on your career so far, is there anything you might have done differently? I'm really happy about the variety of experiences I've had. I've been challenged by mentors and leaders who got me out of my comfort zone and provided me with new experiences and opportunities, particularly internationally. One of the best pieces of advice I ever received was that just saying you can work at an international level doesn't mean you know how to, and I'm glad I learned that early on.

What are the biggest changes you have seen in the data centre industry? Companies are recognising that continuing with business as usual is no longer working. People are conditioned to resist change, especially at the largest companies, even when change is necessary for improved results. Now, as disruption to business models occurs across industries, companies have begun to recognise they need to be more open to change, or risk being overrun by the competition. Business leaders are now also much more focussed on doing the things they are good at – the things that provide them strategic

differentiation. While they 'can' run their own data centres and IT environment, they will now ask themselves exactly what the benefit of that is. Some will start experimenting with data centre components, some with other small things, until ultimately they begin taking advantage of the economies of scale that specialist service providers can deliver.

Can you tell us about any projects you are currently working on? One of the things that the senior leadership team at Ensono is working on right now is a long-term view of the business, precisely planning out our roadmap for the next three years. Having this timeline clear in everyone's minds means we as a company can constantly make sure we're moving in the right direction, rather than just drifting. It has also allowed us to realign our client-centric focus.

“Our best experiences almost always come from the toughest conditions.”

In addition to earning a living, how else has your career created value in your life? Life and work are about connecting with people. The biggest value my career has brought me is working with people I respect and care about from across the globe, working across cultures and working together to deliver something special. Along the way, you develop not just outstanding professional relationships but personal ones, knowing each other's families and wanting to support each other both in and outside of work. This is true of client relationships too; it's more than just making them successful at work, it's about building relationships that matter on multiple levels.

How would you encourage a school leaver to get involved in your industry? What are their options? The great news is that I don't think IT and technology is limited to a specific skillset or a certain programme. You could be a financial person, an engineer, a


salesperson, but when it comes to getting into tech, your course isn't as important as the person you are and the experience you have. The IT industry's non-static nature means it is constantly creating new opportunities. Naturally, experience, and particularly diverse experience, is hugely important when starting out.

Jeff believes experience is most important when it comes to young people getting into tech

Are there any changes in laws or regulations that you would like to see that you think would make your job easier? Well, aside from the obvious with GDPR, it certainly would be useful for the different standards across countries to be unified. Ensono works across the globe and understands different cultures. Sometimes, there are standards that could easily become uniform, which would make it easier for our enterprise clients operating globally.

What part of your job do you find the most challenging? The biggest challenge is also my biggest joy: finding associates who wake up excited about what they are doing and keeping them motivated. We work hard at Ensono to create the right environment for this, and keep a constant eye on that success with regular feedback and surveys on how they're feeling. There's always a slight anxiety that we're not succeeding as much as we could be, but independent awards such as the Sunday Times 100

Best Companies to Work For have kept me believing that we are doing a good job of it.

What is the toughest lesson you have ever been taught in your career? That our best experiences almost always come from the toughest conditions. It happens very rarely, but some of our best relationships have stemmed from when something has gone wrong. The difficulty is going through the steps of communicating what has gone wrong, owning it, and then working harder than ever to get it fixed. Most of the time, when that happens, clients can see that we really, genuinely care. We've grown to be a part of their team, which means we can come out on the other side stronger. That would be the lesson I'd be likely to give others now: when something goes wrong, don't hide, don't try to duck it, own it.

What gives you the greatest sense of achievement? There is no greater thing to witness than someone you work with achieving something they didn't think they were capable of. At Ensono, we have built a culture that gives people the opportunities to succeed, to push themselves and thrive. Watching this gives me an enormous sense of achievement. It motivates me to see people developing their careers around me. Spotting this early on is a huge pleasure, and understanding someone well enough to know that you can drop them into a new scenario and they will excel at it is one of the best parts of my job.

If you could possess one superhuman power, what would it be and why? Invisibility. As a leader, it would be amazing to not just have the perception of how people think, but to actually hear how they feel. To continuously improve our service, I want real feedback, and this would be the best way to avoid the watered-down version you receive when people want to be nice. I'd swear to always use it in a positive way, of course.

Scotland is one of Jeff’s favourite holiday destinations because of its golf courses, coastline and most importantly, the pubs!

Where is your favourite holiday destination and why? Scotland. I like how you know every day is going to be about 15 degrees, and it has the best golf courses in the world. I love the coast, love the water, love the small pubs everywhere. St Andrews is a personal favourite spot.

Can you remember what job you wanted when you were a child? When I was younger I actually wanted to be a truck driver, and also a pastor. Not the most compatible of jobs, I know! I was going to do a lot of things in my mind because I felt like one thing would not be enough. Sportscaster was going to be another one, because I could recall such obscure sports facts at the drop of a hat, and I think that's been the one where the most skills have crossed over into my role at Ensono.

next issue

design & facilities management

Next Time… As well as its regular range of features and news items, the May issue of Data Centre News will contain major features on design and facilities management. To make sure you don't miss the opportunity to advertise your products to this exclusive readership, call Ian on 01634 673163 or email

data centre news


case study

University Challenge When it came to finding a new UPS provider, necessity was the mother of reinvention for the University of Winchester, which selected Eaton for its IT network management requirements.


Sometimes the warning signs of an IT failure only become obvious when there's a serious problem – not just applications going down, but when you can actually smell that something's wrong. It's an IT manager's worst nightmare when a network failure requires urgent maintenance, especially when the faulty equipment itself is part of a disaster recovery solution. The University of Winchester came close to an uninterruptible power supply (UPS) environmental issue when its existing UPS systems failed without warning. The first sign was noticeable fumes coming off the batteries in the UPSs housed in student villages across the university, which protected the edge IT infrastructure, there to ensure around-the-clock availability of IT services to students.


Getting pro-active

“New solutions needed to be compatible with its virtualised environment.”

The university's IT team had a seemingly simple decision to make: find a new provider for its UPS requirements. Crucially, the university wanted to avoid any future scenarios where this could happen again, so it needed its new UPSs to enable it to monitor the condition of the batteries and provide pro-active diagnostics. Alongside this, it wanted to take the opportunity to bring in power management software that could be integrated with its existing virtualised environment, run on VMware, so that the entire estate could be managed through a single pane of glass. The University of Winchester traces its origins back to 1840; it now caters for approximately 6,500 students and was ranked tenth for teaching excellence in 2016. Over the last decade, it has

invested heavily in its facilities and infrastructure, including the development of a new teaching block and three student villages as part of its commitment to building a modern IT infrastructure by way of a virtualised environment. According to Sean Ashford, network and systems manager at University of Winchester, “The IT network is a really important part of a modern university; we provide a service to students and teaching staff and it’s hugely important that the network gives guaranteed uptime for them to do online research and study whenever they require. We also need to ensure that there’s no risk of data loss that could ultimately impact a student’s grades. But we’re not just a place of study, students live here and we need to ensure quality of service in the network in their downtime too.”


A superior solution

When it came to choosing a new solution, the university had made a conscious decision to avoid like-for-like product replacements, instead wanting to install better solutions that would bring additional value and reassurance, notably in management and monitoring of the overall solution. The key challenge facing the network upgrade was that a new UPS in the data centre would have to integrate with legacy equipment, while new solutions needed to be compatible with its virtualised environment, managed by VMware's vCenter. This would enable the university to have better predictability around when maintenance should take place, ideally being able to ensure this was always done outside of term time. Sean notes that "Eaton's software was a crucial point of differentiation in the market."

Eaton provided the university with its Intelligent Power Management (IPM) software to support business continuity across its entire estate. The software enables the university's IT team to manage its mission-critical applications across the network from a virtualisation dashboard, deciding which applications can be left running, which should shed load and stop, and which should have their power limited. Virtual machines in the university's network can be shut down through Eaton's IPM, with the restarting controlled by vCenter virtualisation management software, meaning both shutdown and restarting are performed in a manner that minimises downtime and eliminates the risk of data loss. From a hardware perspective, Eaton installed 50 5PX single-phase 3kVA units across the campus, each providing a critical runtime of up to 20 minutes in the event of a power failure – long enough for back-up systems to come online to prevent any data loss or corruption, or to

ride out short power outages with no loss of functionality. Eaton's 5PX UPS batteries have a design lifetime of five years, and the way Eaton charges them enables them to last up to 50% longer than competing products. Alongside the 5PX units, Eaton also installed two 9SX 5000VA UPSs to support higher-power applications at the network edge, and a 93PM UPS in the University of Winchester's data centre. The 93PM is a 50kW power module with an internal battery cabinet and a very small footprint, saving space in the data centre. It ensures long-term, reliable and uninterrupted operation of the university's IT equipment, protecting it from failures and long power outages, and has an LCD touchscreen display that provides essential status information at a glance in both graphical and numerical formats.
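The kind of decision the virtualisation dashboard makes – keeping the most critical virtual machines running longest while everything else is shut down gracefully – can be sketched as follows. The function name, priority scheme and runtime thresholds here are illustrative assumptions, not Eaton's actual API:

```python
# Illustrative sketch: deciding what to do with each VM when a UPS
# reports limited battery runtime. Priorities and thresholds are
# hypothetical; IPM exposes this logic through its own interface.

def plan_actions(vms, runtime_left_min):
    """vms: list of (name, priority) pairs, priority 1 = most critical.
    Returns a dict mapping each VM name to an action."""
    actions = {}
    for name, priority in sorted(vms, key=lambda v: v[1]):
        if runtime_left_min > 15:
            actions[name] = "keep running"       # plenty of battery left
        elif runtime_left_min > 5 and priority == 1:
            actions[name] = "keep running"       # protect mission-critical VMs longest
        else:
            actions[name] = "graceful shutdown"  # vCenter restarts it when power returns
    return actions

plan = plan_actions([("db-01", 1), ("web-01", 2), ("test-01", 3)],
                    runtime_left_min=10)
```

With ten minutes of runtime left in this sketch, only the priority-1 machine stays up; the rest are shut down cleanly so the virtualisation layer can restart them once mains power is restored.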

Added value

Eaton and the University of Winchester selected products based on how much additional value the university could derive from them. All the UPSs installed at the University of Winchester are new products, released to market within the last three years, which ensures they have the best efficiencies in terms of power and running costs, as well as providing as much useable power as possible. On top of that, the university has been able to sweat the value of the Intelligent Power Management software to help it manage its existing systems from a single pane of glass and move towards a virtualised environment. Considering the previous issues with battery life, it was decisive for the university's IT team that Eaton's UPS units also feature ABM battery management technology, which enables proactive diagnostics of battery life, giving the IT team up to

60 days' warning ahead of a battery's end of useful life – enough time to hot swap the battery without switching off any IT equipment. After its earlier experience, this gave the University of Winchester's IT team huge peace of mind. Sean comments, "We're a relatively lean IT team and we've been able to engage Eaton as an extension of the team through a five-year support plan, which sees Eaton providing remote monitoring and preventative maintenance visits, giving us additional proactive support and reassurance in managing and maintaining our UPSs."
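The proactive end-of-life check described above can be sketched in a few lines. The five-year design life matches the 5PX batteries discussed earlier; the install dates are hypothetical examples, and real units report this through their management firmware rather than a script like this:

```python
from datetime import date, timedelta

# Illustrative sketch: flag any battery within 60 days of the end of
# its five-year design life, so it can be hot swapped in time.

DESIGN_LIFE = timedelta(days=5 * 365)
WARNING_WINDOW = timedelta(days=60)

def needs_replacement_warning(installed: date, today: date) -> bool:
    """True when the battery is inside the 60-day warning window."""
    end_of_life = installed + DESIGN_LIFE
    return end_of_life - today <= WARNING_WINDOW

old = needs_replacement_warning(date(2013, 5, 10), date(2018, 4, 1))  # weeks left
new = needs_replacement_warning(date(2017, 1, 1), date(2018, 4, 1))   # years left
```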

Room to grow

The University of Winchester now has an infrastructure solution across its data centre and network that is more resilient, future-proofed and ready to grow with its needs. Since implementing the Intelligent Power Management software, the university has gained greater insight into what's happening with its IT suite at a power level – in fact, it has tracked the UPS units keeping its IT running through more than 1,500 power events. The pro-active diagnostics of the ABM technology also enable the university to plan IT maintenance outside of term time, further increasing uptime. Sean concludes, "Eaton has been a really important partner in helping to bring the university's IT infrastructure up to standard. The quality of service from the UPS systems was a critical factor in our choice of supplier, but more than that, it's the support that Eaton brings in enabling us to monitor and manage our systems so that we know exactly what is going on at any point in the network."

Eaton,

cabling, enclosures, cabinets & racks

Warning Signs Richard Grundy, president and COO at Avtech, discusses the importance of monitoring your data centre environment to reduce downtime, and how to spot the early warning signs that something isn’t quite right.


Data centres and computer rooms are the lifeblood of most organisations. Whether their primary focus is supporting the organisation's data and infrastructure or providing services to clients, any downtime they suffer will have a significant negative impact. It's important to have a clear view of all the factors that can potentially lead to data centre downtime. Outside and inside forces are often monitored with security features such as firewalls, network intrusion detection, keycard entry, alarm systems,


and IP camera surveillance. With so much focus on cybersecurity, many organisations overlook the fact that 30% of their downtime will be caused by environmental factors that they are not monitoring. Installing environment monitors and sensors within your data centre racks and cabinets will help your organisation spot issues before they lead to costly downtime. If you are not currently monitoring any of the following environmental factors, it's critical that you consider installing environment monitors in your data centre or computer room.

Monitoring power to maintain uptime

Power loss is one of the most critical factors that can lead to data centre downtime. While most data centres have UPS batteries to protect servers in the event of main power loss or fluctuation, they are often in place to provide time to gracefully power servers down, not to supply continuous power if the main feed is lost. Backup generators are often installed in larger data centres and colocation facilities. Smaller organisations that maintain a small local data centre or server room may not be able to afford

a generator; when they do have one installed, it may need to be manually engaged. Monitoring power at both the server and rack level lets you know if power to your servers and appliances is lost. Newer PDUs in racks and cabinets sometimes have power or temperature monitoring built in; however, if your servers are located in a third-party data centre, receiving notification from your own monitors may mean the difference between a graceful shutdown and complete hard drive failure.
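Rack-level power-loss detection of the kind described above can be sketched as a simple missed-heartbeat check. The monitor interface and the 90-second grace period are hypothetical assumptions; real products typically push alerts rather than being polled like this:

```python
# Illustrative sketch: flag a possible rack power loss when an in-rack
# monitor has not been heard from for longer than a grace period.

GRACE_SECONDS = 90

def power_state(last_heartbeat: float, now: float) -> str:
    """Return 'ok' while heartbeats are recent, else flag possible loss."""
    if now - last_heartbeat > GRACE_SECONDS:
        return "power lost?"  # time to start graceful shutdown procedures
    return "ok"

state = power_state(last_heartbeat=1000.0, now=1200.0)  # 200s of silence
```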

Temperature

High temperatures can quickly lead to data centre downtime, and in many instances are caused by HVAC failure. In fact, the most common reason a new customer contacts us is downtime caused by high temperatures when their air conditioning has failed. By that point, hardware and data are almost always lost, and recovery is long and costly. Power loss can also cause high temperatures, since the HVAC likely turns off when power is lost, while computers and servers continue running and generating heat on UPS power. Installing temperature sensors at various heights and locations within your racks and cabinets will allow you to monitor temperature in multiple locations within your data centre or server room. If temperatures begin to rise, instant alerts will let you know that you are potentially on your way to a major problem. Being able to proactively address a failing HVAC will allow you to prevent downtime and damaged hardware, saving you thousands in damages, downtime and lost productivity. Rising temperatures could be an early indication that an HVAC needs service, or is starting to fail. It could also be caused by increased heat load following

new equipment installations as usage increases. Don’t let rising temperatures cause downtime in your data centre. If you are colocating in an outside data centre, installing dedicated temperature monitoring in your rack or cabinet will remove your reliance on your provider’s monitoring, and give you instant alerts if temperatures become too high.

Humidity

Humidity is sometimes an afterthought when it comes to environmental factors to monitor; however, it can be just as damaging as high temperatures. Too much humidity in your data centre and condensation will form on sensitive electronic devices. Too little humidity can lead to the risk of static discharge, which can be just as damaging to hard drives and circuit boards. Monitoring for humidity fluctuation will alert you when readings exceed the thresholds you determine. Excess moisture within your rack will ultimately lead to server and appliance failure, costing your organisation an untold amount in lost revenue and productivity.
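The alerting logic above – readings compared against thresholds you determine – can be sketched for both temperature and humidity. The ranges here are illustrative examples roughly in line with common guidance, not vendor defaults; tune them to your own facility:

```python
# Illustrative sketch: checking sensor readings against configured
# environmental thresholds and raising an alert string on breach.

THRESHOLDS = {
    "temperature_c": (18.0, 27.0),  # heat risks hardware; overcooling wastes energy
    "humidity_pct": (40.0, 60.0),   # high -> condensation, low -> static discharge
}

def check_reading(metric: str, value: float) -> str:
    low, high = THRESHOLDS[metric]
    if value < low:
        return f"ALERT: {metric} below {low} ({value})"
    if value > high:
        return f"ALERT: {metric} above {high} ({value})"
    return "ok"

status = check_reading("humidity_pct", 22.0)  # dry air: static-discharge risk
```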

Water or flood

Nothing can bring down a data centre faster than a water leak. Whether it's extreme weather, a leaking roof, a clogged pipe, a construction accident or any other unexpected issue, water coming into contact with servers and network appliances will bring your organisation down in an instant. Far too many organisations don't think they need to monitor for 'flood' conditions, believing that their distance from larger bodies of water keeps them safe from water damage. In fact, organisations are almost 10 times more likely to suffer from

“Power loss is one of the most critical factors that can lead to data centre downtime.”

water damage than they are from fire damage. Everyone has smoke and fire alarms, yet the percentage of organisations who proactively monitor for water leaks is shockingly small. Flood and water leak sensors can easily monitor multiple areas in your data centre and alert you to the presence of water as soon as it's detected. This inexpensive and reliable solution could save your data centre from outright disaster if an HVAC condensate pump fails, a pipe bursts, or an accident causes water or sewer backup to enter your facility. Pairing water detection with inexpensive yet effective water control products, such as quick berms or water gates around your racks and cabinets, can help keep water leaks at bay while you are being alerted to an issue.

Spot issues before they cause downtime

Data centre downtime isn't a matter of 'if', it's a matter of 'when'. Over 60% of data centres report downtime caused by environmental issues at least once in the last 12 months. Don't become another statistic. Installing dedicated, proactive environment monitors and sensors in your racks and cabinets will help alert you to problems before they lead to downtime. Avtech,

Cabling is Key Justin Ellis, senior data centre specialist at complete network supplier Comms Express, discusses why cabling is so crucial to future-proofing today’s data centres.


As we stand on the brink of a technological revolution which is changing the way we live our lives and how we work and communicate, the most successful enterprises will be those that take the lead in embracing digital transformation. The future-proofed data centre is rapidly becoming a reality for this industry, holding the key to harnessing greater flexibility, efficiency and scale. By 2020, there could be upwards of 20bn networked connections and devices. Data traffic will continue to explode; we need to rethink data cable designs and make sure


that our infrastructure is ready to meet this challenge head-on. Let’s take a look at some of the reasons why structured cabling gives a data centre a solid, futureproofed foundation.

Network infrastructure optimisation As industry leaders prepare to move their companies into this technologically advanced new world, the optimisation of data cables is a necessity rather than a luxury, regardless of whether they decide to upgrade their existing on-premises facilities

or look to cloud based solutions. The demand for streamlined, fast connectivity for apps, storage, desktops and servers is critical and relies upon optimised cabling. A popular approach to this involves the standardisation of data centre designs and components, including fibre and copper cables, cabinet racks and enclosures, to give an accessible and organised system. Customers expect a seamless, fast experience and the goal here is to create data centres which can accommodate the relentless march to higher and higher data transmission rates and capacity requirements.

Growth in the cloud and virtualisation services

With growing demand for agile IT and the widespread adoption of cloud based networking alternatives, business leaders are looking to invest in seamless cloud solutions. This requires the integration of state-of-the-art data cable infrastructure and a renewed focus on flexibility, security and higher transmission speeds. Virtualisation technology also continues to provide a cost efficient and scalable networking option. This requires a robust network of cables for uptime, routing and forwarding, which can be designed for maximum efficiency in the context of electricity usage, capacity and so forth. The four common types of virtualisation services are server, network, desktop and storage. New virtual networks can be created or added as an overlay to pre-existing network topology.

Shift in the network architecture of data centres

The evolution of networks which feed data to the data centres will facilitate new Internet of Things (IoT) applications, but at the same time bring fresh challenges. The next-generation (5G) mobile network is a hot topic at the moment. Data centres will be required to support the features of the 5G network which are optimised for mission-critical and IoT applications for an increasing number of customers demanding ever greater speeds. Data centres will be under greater pressure to deliver their services in real time and efficiently process and store data. IoT applications collect both low and high-bandwidth data from a very large physical area, and the need to access and process this mass of data is driving changes in the network's physical layer. Spine and leaf architecture is becoming important in reducing latency and improving scalability.

New data centres designed for modularisation

As new data centres are built to meet demand for superior network capabilities, there is a trend developing whereby organisations are getting involved in the construction phase. This allows them to have an input into key decisions regarding future-proofed modular network cable solutions. This early involvement also helps with the scalability of resources and data, the reduction in the total cost of new infrastructure and improvements in long-term profitability. Modular data centres that have been specifically designed in this way are portable and consist of unique pre-engineered components and modules, with basic cable infrastructure forming their foundations.

“The demand for streamlined, fast connectivity for apps, storage, desktops and servers is critical and relies upon optimised cabling.”

Energy consumption and efficiency

As energy costs increase and environmental sustainability becomes ever more important, it is essential that data centre operators and owners prioritise initiatives to improve their efficiency and energy consumption, and opt for renewable energy sources where possible. This will help preserve the resilience of the grid and the overall availability of energy. Structured cabling gives a data centre a solid, future-proofed foundation for a capable, scalable, efficient and cost effective network infrastructure that can support critical systems and adapt to new technologies. By planning early, a data centre can ready itself for optimum provisioning and high efficiency and utilisation. A future-proofed data centre of this type will be ready to meet key challenges including cooling, power, security risks, budget and physical space as we see a large-scale movement to cloud based centres and fully virtualised network architectures. By optimising their upfront investment, operators can design their facility to meet application demands and exceed customer expectations for around 25 years, thereby ensuring a bright future for their business. Comms Express,


Solid foundations James Green, senior sales engineer at global interconnection and data centre company Equinix, discusses why cabling in the data centre is the foundation of interconnected business.


The world is changing. The way we work, live, shop, consume and communicate is radically different from just a few years ago. Connectivity is exploding beyond our wildest imaginations – even conservative estimates suggest we will have up to 30bn 'things' connected online by 2020. For businesses also, there is a revolution in connectivity underway – not of devices, but of data. We are seeing a marked increase in enterprises connecting directly with each other in colocated data centre environments, allowing them to bypass the public internet and create digital ecosystems between


the world's leading companies and their customers, partners and supply chain. According to research carried out by Equinix last year, this interconnection between companies will grow to over 5,000Tbps by 2020, more than a four-fold increase since 2016. And it is unlikely to stop there, with continued appetite for connected devices, increased end-user demand for speed and the never-ending task of ensuring data remains safe and secure. Against this backdrop of rapid change and growth in the digital economy, it would be easy for the data centre industry

to forget about the basics. This huge increase in interconnection requires not only investment in new data centre space, but in the physical infrastructure within those data centres. Modern data centres are some of the most technologically advanced buildings in the world, with uninterruptible power sources, infallible failsafe systems, and security measures that rival even the most advanced military facilities. And with direct connections between companies housed in their facilities rising exponentially, cabling is going to become increasingly vital to the long-term success of our sector.


Design is key

Cable management is one of the most important aspects of data centre design and operation. If data centres are at the heart of the digital economy, cables are the arteries. A data centre's performance, reliability and flexibility are all influenced by how well the cable infrastructure is designed and executed, so it is not something to be taken lightly. As cabling demands increase, effectively planning pathways will be an incredibly important consideration. At this stage, it is worth noting that there is no one-size-fits-all solution. The best approach is to build flexibility into your design, with different systems that can be flexed and changed along with requirements. Including both a ladder system overhead and solid metal trays below a raised floor gives this flexibility and potential for growth. Even simpler than this, engineers need to ensure cables aren't stressed, twisted or stretched, as this will undermine longevity and performance. One way of encouraging this is by setting the standard for tidy racks and cable systems. It may seem inconsequential but very often, if one thing slips – such as using

a longer cable than necessary, or the wrong colour coating – the whole system quickly becomes more complicated. With the vast number of connections required to keep today's digital business afloat, cabling must be immaculate or problems will occur and undermine carefully designed infrastructure.

Maintaining uptime

Point-to-point cabling is hands-on by nature and, with 48% of data centre outages caused by human error, it is important to minimise risks. Mistakes can easily be made with multiple, unorganised cabling structures, leading to workflow disruptions and network downtime.

“If data centres are at the heart of the digital economy, cables are the arteries.”

To reduce the risk of errors, you need to enable effective maintenance. The system should be designed for easy access and the ability to scale, supporting future cabling and hardware installs. Another big challenge is airflow and cooling. Point-to-point cabling means lots of extra cable, which in turn encourages poor airflow, more heat, higher hardware failure rates and increased cooling costs. Structured cabling systems can reduce cable volumes by 75%, which helps by improving airflow – but you may also need to consider additional cooling in areas of dense cable networks.

False economy

Establishing a well-designed, structured cabling system in a data centre environment requires some up-front investment, which some companies aren’t willing to countenance. But opting for more basic solutions is a false economy – the time and money spent rectifying issues with inadequate cabling infrastructure far outstrips the initial investment required to do it right in the first place. Not only this, the remarkable growth in direct connection between enterprises in colocated environments means data centre companies cannot afford such short-term thinking. Interconnection is an essential business enabler, and the right interconnection strategy can help make even the boldest business ambitions a reality. Our industry needs to be prepared to seize the opportunity offered by the huge growth in data flow and interconnection – data centres will become increasingly vital to the global economy – and that means having the physical infrastructure in place to scale up rapidly and sustainably. Those that don’t will risk being left behind. Equinix, April 2018 | 27


Getting It Right

Networking racks are an integral part of the data centre and there is often more to them than meets the eye. Paul Mercina, head of innovation at Park Place Technologies, a specialist in support solutions, gives us some tips and tricks, as well as how to troubleshoot common problems.


Proper housing of IT equipment facilitates routine maintenance and contributes to overall uptime. Securing expensive server, storage and networking gear in racks helps protect it from physical harm, while enabling organised hardware placement and well-labelled cabling. Server enclosures tend to get all the attention, but network cabinets and racks are an integral part of the data centre or IT closet. These solutions house routers, switches, and other networking equipment and accessories. Compared with their server counterparts, they are shallower – usually 31in or less in depth – and often lack high-end cooling systems, as networking equipment doesn’t run as hot as other IT gear.


Networking racks may seem simplistic, but there are still challenges for IT pros in balancing cost savings with best practices in their networking backbone. Open-frame racks, second-hand purchasing and consideration of non-capital costs are all factors to be aware of when finding cost savings. Common issues, such as overheating, failure to pay attention to installation and grounding requirements, as well as ‘cabling spaghetti’, can all be overcome by paying attention to the details. Taking the next step to a full enclosure with roof, side panels, built-in cooling and other features can provide the right operating environment for delicate IT gear, especially in high-density installations. The following are some tips for getting this often-overlooked part of the data centre right.

Finding the cost savings

When provisioning new racks for the enterprise’s networking components, cost is an obvious consideration. Here are four ways to get the price down:
1. Shop around. It goes without saying, comparisons among vendors will be key in finding the best price. Be sure to ask about volume discounts or other deals, especially from suppliers for which you are a repeat customer.
2. Stay open. A typical open-frame rack is the most economical choice, at about one-third the cost of a rack enclosure. There are downsides, namely no rack-level security (locks), and less physical protection from moisture, dust and other debris. For a location



that is already access-protected and clean, however, open-frame racks are a fine – and much more affordable – choice.
3. Buy used. Most IT gear can enjoy a longer lifespan than many IT leaders expect, all the more so when it comes to networking racks and enclosures. Buying used is a great way to save money, or to get more tricked-out enclosures than the business might be able to afford brand new.
4. Consider non-capital costs. Don’t limit the calculations to the immediate price point. Sometimes a few extra pounds can lower TCO (total cost of ownership). Tool-less mounting can slash engineering time, while taller racks can make the most of limited floor space. It all adds up.
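To make the non-capital point concrete, here is a toy comparison in Python (our own illustration; all figures are invented, not vendor pricing) showing how a pricier tool-less rack can still win on total cost of ownership once engineering time is counted:

```python
# Toy TCO comparison: purchase price plus the cost of engineering time
# spent on installs over the rack's life. All figures are illustrative.
def tco(rack_price, installs_per_year, hours_per_install, hourly_rate, years=5):
    labour = installs_per_year * hours_per_install * hourly_rate * years
    return rack_price + labour

# A basic rack is cheaper up front but slower to work on...
basic = tco(rack_price=300, installs_per_year=12, hours_per_install=1.5, hourly_rate=60)
# ...while tool-less mounting slashes engineering time per install.
toolless = tco(rack_price=500, installs_per_year=12, hours_per_install=0.5, hourly_rate=60)

print(basic, toolless)  # 5700.0 2300.0
```

The exact numbers matter less than the shape of the calculation: labour dominates over a five-year horizon.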

Common problems

Problems with networking racks derive more from improper installation choices than from equipment choice. Once you have the racks or cabinets you need, attend to the details to ensure the equipment will remain safe and perform well. Common errors associated with networking racks include the following:
• Being closed-minded. Fully enclosed cabinets (without perforations) are meant for use where fresh air will be pumped directly into the enclosure. Even networking equipment can overheat if such cabinets are used with whole-room A/C only. For such applications, select enclosures with perforated doors or open racks.
• Getting tipsy. Different racks have different installation requirements. Some need special floor mounting. Wall-mounted racks must be adequately fastened. Failure to follow the instructions, or to ensure that the wall or floor can accommodate the weight or installation hardware, can result in equipment damage.
• Timber! Free-standing enclosures can tip over. Increase stability by installing heavier equipment towards the bottom of the rack and lighter gear up higher.
• Oops. Where did this go? Missing and loose screws put equipment in jeopardy of physical damage and poor connectivity.
• Cutting it close. Cisco recommends a minimum of six inches between chassis air vents and adjacent walls, and a minimum horizontal separation of 12 inches between chassis. Tighter spacing can cause heat-related problems, so give your networking equipment room to breathe.

“Server enclosures tend to get all the attention, but network cabinets and racks are an integral part of the data centre or IT closet.”

• Zap! All too frequently, racks are not connected to the building’s ground. This poses a serious risk of personal injury, as well as operational deficiencies and failures in networking equipment caused by stray current. Some systems also have additional grounding requirements, which should not be overlooked.
• Cabling spaghetti. This is likely to block air circulation and could mix network and power runs together, causing interference. To avoid these problems, plan the cabling as carefully as you do your data centre design. Bunch, tie and label cables appropriately. Neat cables aren’t just a benefit for the next person who has to trace a wire – they facilitate airflow and ensure performance, too.
• Paint problems. All paint used on networking racks should be non-conductive. Racks from any reputable manufacturer should be fine out of the box. But for spot repairs, be sure to use the right paint. Also do your due diligence when purchasing used equipment.
Once you have your networking cabinet or rack installed, don’t overlook basic maintenance tasks, such as inspecting and replacing surge protection equipment. To keep equipment operating at peak performance, it’s essential to have an established maintenance programme, including employee training and task-tracking components, as well as a reliable partner for spare parts and engineering expertise. Following these guidelines will help ensure that your networking rack/cabinet installation is safe and will promote a long life for the hardware it contains. Park Place Technologies, April 2018 | 29


The Polarity factor

Nicolas Roussel, technical manager at IT solutions company Siemon, takes a closer look at the role that polarity plays in MTP fibre technology.


When data centre managers consider the deployment of a new fibre optic cabling infrastructure, the choice often falls on pre-terminated structured cabling. Most of these installations use the MPO/MTP connector (commonly referred to as the MTP connector), which enables a quick and reliable installation whilst offering a ready-made upward migration path from 10 to


40 and 100Gbps Ethernet speeds. These increasingly high speeds are required within data centre switch-to-switch backbone links to handle complex data transfer. Network designers, however, continue to experience challenges with the MTP technology, especially when migrating to higher speeds, e.g. from 10 to 40 and 100Gbps. One of the challenges surrounds polarity and the issues caused when polarity is not correctly managed.

Polarity and polarity options – the basics

In fibre systems, each fibre link requires a transmitter at one end and a receiver at the other, and it is important that when both ends are connected, the fibre connectivity has the correct polarity (transmit to receive and vice versa). When MTP connectors are used in a fibre link, the TIA-568 standard has defined three methods – also referred to as


Figure 1

polarity types A, B and C – which handle the transition and polarity of the fibres from the transmitter to the receiver differently. Understanding these three different configurations requires some insight into the MTP connector itself. Each connector has a key on one side; the industry refers to the ‘key-up’ connector position when the key is on the top. Respectively, when the key faces down, this is referred to as the ‘key-down’ connector position. In the ‘key-up’ position, the fibres in the connector are numbered from left to right, P1 through P12.

“Network designers continue to experience challenges with the MTP technology, especially when migrating to higher speeds.”

As shown in figure one, polarity A (type A) defines a fibre cable using an MTP connector in ‘key-up’ position on one side and an MTP connector in ‘key-down’ position on the other side. The fibres arrive in the same positions (P1 will arrive in P1, etc). Polarity B (type B) defines a straight cable, but using ‘key-up’ to ‘key-up’ MTP connectors on each side. The fibres meet in the reverse order, with position P1 arriving in position P12, and P2 arriving in P11, etc. Polarity C (type C) defines a cable using ‘key-up’ to ‘key-down’ MTP connectors. The fibres here are flipped by pair.
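These three methods amount to simple fibre-position mappings. As an illustration (our own Python sketch, not part of the TIA standard or any Siemon product), here is where each fibre position arrives at the far end of a 12-fibre MTP cable:

```python
FIBRES = 12  # positions P1..P12, numbered in the 'key-up' orientation

def polarity_a(p):
    """Type A (key-up to key-down): straight through, P1 arrives at P1."""
    return p

def polarity_b(p):
    """Type B (key-up to key-up): fully reversed, P1 arrives at P12."""
    return FIBRES + 1 - p

def polarity_c(p):
    """Type C (key-up to key-down): flipped pairwise, P1 and P2 swap."""
    return p + 1 if p % 2 == 1 else p - 1

# Fibre P1 under each method: type A -> P1, type B -> P12, type C -> P2.
print([f(1) for f in (polarity_a, polarity_b, polarity_c)])  # [1, 12, 2]
```

The pairwise flip of type C is what makes it convenient for duplex links, as the article goes on to explain.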

Knowing what configuration to go for

For duplex applications, such as 10Gbps links, the easiest and standards-compliant way to design a duplex link using MTP arrays is to use a C polarity fibre link in combination with MTP/LC modules. This enables a simple configuration utilising the correct polarity and avoids the need to cross fibres on the patch cords to maintain the correct polarity, as the cable itself ensures the flip per pair.

Parallel optic applications, e.g. 40Gbps or 100Gbps links, require eight fibres (four fibres to transmit and four fibres to receive). For these applications, MTP array links are used. As before, the transmit fibres must reach the receiver, but across eight fibres. To connect directly to a switch using the appropriate polarity for MTP links, a polarity B patch cord is required. Using a B polarity component, the link will always be correct, as a polarity B configuration maintains the right positioning of each strand. Links, however, are not composed of only one cord. In many cases, structured cabling is deployed and the links are composed of two patch cords and one trunk fixed in panels. If all components use polarity B, everything works just as it does for a duplex configuration. Some designs including A or C polarity require careful advance planning, as those configuration options make polarity maintenance more complex. In those cases, different MTP patch cords are needed at each end, and more product references must be maintained. With B polarity, only one reference is required, making it simple and straightforward. Siemon, April 2018 | 31


The compliance conundrum

With the GDPR deadline next month, Mark Baker, field product manager at Canonical, the open source software company, discusses why it’s okay for businesses to trust public cloud services with their compliance needs.


With the General Data Protection Regulation (GDPR) on the horizon, businesses that wish to operate in the European Union are having to spend more time than ever thinking about compliance. Not only does all personally identifiable customer data need to be accounted for – a task that is easier said than done for many organisations – but internal processes also have to be updated and employees educated to ensure the compliance deadline of May 25 2018 is met. Of course, GDPR is just one legislative challenge facing


businesses. Financial services firms, for example, have a revamped version of the Markets in Financial Instruments Directive (also known as MiFID II) to respond to, while the UK telco industry is facing the prospect of new legislation being enforced after Brexit. And as falling foul of industry regulations has the potential to result in massive financial penalties, as well as the threats of reputational damage and a loss of customers, organisations simply can’t afford to be complacent. However, fear of the complexity of managing compliance in new infrastructure, alongside the effort already involved in ensuring existing

systems are ready to go, is prompting many businesses to shy away from cloud, despite the many benefits such services offer. Concerns are primarily due to a misconception that cloud platforms, with data held by third parties on shared systems, will be a more difficult undertaking than traditional in-house systems, and potentially less secure; but the truth is very different. Public cloud services can be extremely secure, and can often be a more secure option than in-house systems. So, what exactly is behind this misconception, and why should businesses be trusting public cloud services with their compliance needs?


Privacy please

On the face of things, it’s easy to see why many people would assume on-premise infrastructure is more secure and easier to manage. In theory, businesses know exactly where their data is being stored and who has access to it, both of which provide comfort for organisations. They can also design the architecture to suit their own specific needs and preferences, as well as reducing the risk of data loss if a public cloud provider goes out of business. One could argue that such a setup would be particularly appealing to businesses operating in highly regulated industries, such as healthcare and financial services, which need to have greater visibility and control over how their data is managed. However, firms would be wise to remember that operating their own private cloud places the responsibility for security and compliance squarely on their shoulders. Businesses are at the mercy of the whims of nature and the resilience of their local power grid, potentially leaving them helpless if something goes wrong. It also leaves them vulnerable to disgruntled employees and internal data theft. Employees may have easy access to confidential data, sometimes with very little to stop them from stealing corporate information, simply by pulling a disk from a server and leaving the building with it. Often employees can also connect USB drives which have been used in home systems and may contain malware or viruses. Huge faith is placed in the firewall as an effective means of keeping intruders out, yet backdoors may well exist in the form of legacy and unsecured modem connections, as well as poor access control processes that leave user credentials in place long after the relevant employee has left the company.

So just because infrastructure is in your data centre doesn’t mean it is inherently more secure, resilient or suitable to meet the needs of regulatory compliance than public cloud.

“Firms would be wise to remember that operating their own private cloud places the responsibility for security and compliance squarely on their shoulders.”

Going public

While some businesses may feel more comfortable knowing their data is being stored within their own walls, data location is only one small aspect of security and compliance. Along with the provision of innovative new services to enable business growth, it is the job of public cloud providers to protect their customers’ data. A central component of their value proposition, therefore, is the delivery of systems, tools and continuity plans that make their cloud infrastructure safe and secure. This applies to both virtual and physical means of protection. Corporate data will be stored in a secure facility with multiple layers of physical security that are often not present if businesses opt to run their cloud infrastructure in-house. And, with competition in the market continuing to increase at a rapid rate, ensuring compliance is not only a valuable competitive advantage for those businesses offering public cloud services, but also essential to gaining customer trust and, in turn, loyalty. In this respect, smart cloud providers such as City Cloud are leading the way with a value proposition focused very much around regulatory compliance. Public cloud providers are also likely to carry out software patching on a more regular basis, which is essential to managing compliance. Businesses running their own private clouds will generally be slower to patch security gaps, leaving themselves exposed to potential data

breaches and compliance holes. The recent Spectre and Meltdown vulnerabilities are a great example of this, with Google, Microsoft and Amazon all patching their systems quickly after the problems became public. Meanwhile, many businesses will still be trying to determine what systems they need to patch and how to go about doing it. Furthermore, public cloud providers tend to have highly skilled and experienced IT teams, which isn’t something that can be said for all businesses. The skills gap issue is an extremely prevalent one in the cloud world, and businesses are finding it harder than ever to attract talented developers. This is causing problems when it comes to addressing the more technical compliance challenges, which could be solved using third-party infrastructure. Add in the fact that businesses will not be alone when defending against attacks, and the skills argument provides compelling support for the merits of using third-party providers to ensure legislative compliance. The combination of these factors means that in many cases, public cloud can actually be a better option than a private cloud for systems with high security and compliance requirements. It can certainly be a less complicated option for businesses and help to give them peace of mind amidst shifting regulatory landscapes. As end users become far more sensitive to the security of their personal data and initiatives like Open Banking come into effect, the challenges are only going to grow. That’s why organisations today, rather than shying away from public infrastructure, should be embracing it as part of a hybrid cloud offering on their journey to compliance. Canonical, April 2018 | 33

data growth

Drowning in data?

Is your business being swallowed by data? Jeff Fried, director of product management, data platforms at InterSystems, explains how to turn your data from challenge to opportunity.


Traditional database architectures are struggling to cope in today’s data-driven world. Data volumes are escalating and data coming into organisations often lacks a clearly defined structure. Some is event-driven from operational systems, and some transactional from back office systems. Most businesses also have large volumes of historical or


reference data, often stored in silos and held away from current or newly-ingested stores. Much is also retained across multiple different applications - from marketing mailing lists to accounting payroll programmes. Sometimes data governance can be restrictive and so it is difficult for organisations to get a comprehensive picture of what they hold.

Coupled with all this, system configurations are becoming increasingly complex, as organisations try to shoehorn traditional architectures into these dynamic and demanding situations. The resulting cost and complexity is an inhibitor to businesses making good use of their data. This makes it imperative that organisations deploy the right kinds of data platform architectures going forward.


Surveys highlight data challenges

As part of a recent research study commissioned by InterSystems, analyst firm Enterprise Strategy Group (ESG) surveyed more than 350 IT and business professionals across enterprise and mid-market organisations – all familiar with their organisation’s current database environment – and asked them about the top challenges they face with their current database deployments and infrastructure. The two most referenced challenges were managing data growth and database size (48%), and meeting database performance requirements (35%).

Adding to the complexity is the sheer number of individual databases in place within many organisations. In fact, 38% of respondents to the ESG research reported that they had between 25 and 100 unique database instances, while another 20% had over 100. Businesses often also have databases of widely different ages. It is not unusual, for example, for them to have flat files that are 20 years old, relational databases that are 15 years old and document databases created yesterday, all within the same environment. It is therefore not surprising that organisations may be struggling to access and process data in real time to drive strategic business decisions. More than 75% of executives responding to a recent survey conducted by IDC, in partnership with InterSystems, agree that untimely data has inhibited business opportunities. This lag is also slowing the pace of business, the survey finds, with 54% stating it limits operational efficiency, and 27% saying it has negatively affected productivity and agility. Of course, all these challenges come at a time when businesses are digitalising their operations, wanting to transition to the cloud and to harness the latest innovations, including business analytics, machine learning and artificial intelligence (AI).

“For businesses to make good use of their data it is imperative that they deploy the right kinds of data platform architectures.”

Scoping out a solution

So how can organisations address these issues? The challenge typically starts with data ingest. Organisations need to find data platforms that can ingest data from real-time activity, transactional activity and from document databases. Platforms must also be able to take on data of different types, from different environments and of different ages, to normalise and

make sense of it. Interoperability is key here. Any chosen solution needs to be able to ‘touch’ those disparate databases and silos, bring information back and make sense of it in real time. Data platforms must also be agile. Organisations need to store data where it is needed and make certain it remains accessible. Businesses must always ensure they can separate out the data they want from any application from the data they don’t need. It is also the case that as businesses move systems and applications into the cloud, they are starting to use software to ‘containerise’ their applications and modules. Once containers have been set up in the cloud, they are then reusable by other applications within the suite. It is an increasingly straightforward process, but to deliver true business advantage, there must also be a focus on implementation speed. Data platforms must deliver high-quality business analytics to drive competitive edge. They need to integrate business intelligence, predictive analytics, distributed big data processing, real-time analytics and machine learning. They must analyse real-time and batch data simultaneously at scale, allowing developers to embed analytic processing into business processes and transactional applications, enabling programmatic decisions based on real-time analysis. The challenges around data growth are clear. In addressing these, organisations should look for unified data platforms that can integrate different types of data from multiple sources with real-time and batch analytics, capable of driving enhanced business insight and decision-making. In the new data-driven age, we are seeing the dawning of a new era of the agile, flexible data platform. InterSystems, April 2018 | 35

serverless computing

Serverless solution

John Jainschigg, technical content manager at Opsview – an IT monitoring company – discusses the new paradigm that is ‘serverless computing’, outlining the pros, cons and what actually works when it comes to monitoring these systems.


For most organisations that use cloud computing today, the base shippable unit of code (as Keith Horwood of StdLib astutely calls it) is the application. Cloud applications are now being gradually modernised on several fronts. Asynchronous frameworks like Node.js provide proven structural armatures and promote code re-use. Infrastructure-as-code discipline, continuous integration and deployment automation accelerate releases. Containerisation enables dependency isolation, component immutability, workload compactness, infrastructure streamlining and fast starts, while container orchestration provides high availability, autoscaling, infrastructure federation and other benefits. Application-centric cloud computing, however, suffers a potential downside – persistence. Apps instanced on fleets of VMs or containers consume resources directly, even when idle, or indirectly when IaaS, PaaS or container orchestration engines run in reserve to enable on-demand scaling. You can manage these costs by arbitraging different tiers of public cloud service and periodically rebalancing the ratio of public to on-premises cloud infrastructure. Many organisations do both on a regular basis. Recently, however, a new paradigm – serverless computing – has emerged with the potential to enable new kinds of optimisation, as well as introducing some


intriguing workflow advantages for a narrow, but valuable set of use cases. Along with the potential benefits, however, come challenges to monitoring and observability.

What is serverless computing?

In serverless computing – also called Functions as a Service (FaaS) – the base usable unit of code is the function: a piece of code written against a standard runtime and executed by the serverless framework in response to events. The framework handles registration of functions and namespace management, and supports web gateways, APIs and connectors, letting various entities authenticate to and call your functions. It also lets functions talk to databases and files, access remote resources, write to logs, and perform other tasks. Typical jobs for serverless functions include on-demand requests like streaming or real-time data processing, fielding signals from IoT devices, image processing, OCR, and similar event-driven tasks. In their most basic modes of operation, FaaS systems typically work on an on-demand basis. Typically, the first time a given event is processed, a function-bearing container is cold started: this can take a little time, usually from one to five seconds. Once the event has been processed, the function can remain ‘hot’ as long as events continue arriving for it, enabling real-time processing. The

function runtime can be scheduled to automatically shut down when eventually left idle for a period of time (e.g. 15 minutes). When called upon, it is readily started again. This ability to relinquish all resources associated with a function not in use, lets public cloud providers such as Amazon (Lambda), Google (Cloud Functions), and IBM (who also call their FaaS service Cloud Functions) rationalise generous free tiers of service for their offerings. For them, serverless computing is a multi-faceted opportunity: to attract a class of customer looking to produce business value quickly while leapfrogging the relatively high operational complexity and know-how requirements of IaaS, PaaS, and container orchestration; and to encourage use of serverless computing as a glue promoting consumption of additional add-on services (e.g. database, streams processing, etc). Meanwhile, premises-based FaaS systems such as OpenWhisk (the basis for IBM Cloud Functions on BlueMix), OpenFaaS (native, open FaaS for Docker and Kubernetes), and Platform9’s Fission, among others, offer an alternative to public cloud lock-in, and seem ideal to coexist with more conventional approaches to workload hosting on IaaS-based containers-as-a-service (CaaS) and stand-alone container orchestration platforms in private data centres. For organisations in financial services, healthcare, and other industries likely to find


use for highly-efficient, scalable, transactional/batch processing and already committed to building and maintaining large on-premises cloud estates, implementing FaaS in-house may offer cost benefits, as well as advantages in security, data sovereignty/governance, and compliance.
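To make the model above concrete: a FaaS function is typically just a stateless handler the framework invokes once per event. The sketch below is a generic Python illustration of that shape – the event/context signature mirrors the common convention, but it targets no specific provider’s API, and the field names are invented:

```python
import json

def handle_event(event, context=None):
    """Stateless event handler: receives an event payload, returns a result.
    No state survives between invocations; anything needed must arrive with
    the event or be fetched from external services."""
    # Events may arrive as JSON text or as an already-parsed mapping.
    payload = json.loads(event) if isinstance(event, str) else event
    return {
        "status": "ok",
        "source": payload.get("source", "unknown"),
    }

# The framework would call this once per arriving event, e.g. an IoT signal:
result = handle_event({"source": "iot-sensor-42", "reading": 21.5})
print(result["source"])  # iot-sensor-42
```

Because the handler holds no state, the framework is free to cold-start, duplicate, or discard the container running it, which is exactly what makes the resource model described above possible.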

What makes monitoring serverless so difficult?

Much has recently been written about the challenges of monitoring serverless computing. This is, of course, difficult to do directly, in real time, using polling methods, since functions are typically active only for very short periods of time – too short to register reliably at reasonable polling rates. There are potentially numerous other ways to observe and trace what serverless systems and functions are doing. Functions can be written to make use of logs or instrumented with APM telemetry. Making use of the monitoring and metric dashboards and visualisation tools built into public and private-cloud FaaS frameworks is another viable option, depending on the use case. On the other hand, it’s arguable that much of the information thus derived, while undoubtedly interesting, isn’t very useful. Since functions are simple, their failure modes tend to be simple as well. Often, simple testing and/or enterprise IT monitoring can help you figure out what’s going wrong with serverless (even, sometimes, on public cloud-based serverless platforms where you can’t monitor the infrastructure directly). From the perspective of IT monitoring, then, what can we know about a serverless platform and its workloads, particularly on private cloud implementations where the infrastructure is observable? At Opsview, our innovation team has been exploring a host of emergent

FaaS solutions for private clouds, all based on standard Docker container tech, with Kubernetes as an underlying orchestrator: an elegant and functional combination. Here are some simple tips that seem to be working pretty well for us, so far:

Monitor Kubernetes and the underlying physical or virtual machines

Due to cold starts, FaaS systems typically lag and make precipitous demands on underlying resources in response to bursts of events (i.e. when a lot of function containers need to start). However, they tend to manage gradual traffic ramps and sustained high levels of demand more gracefully (up to available capacity). As the number of different functions running on your platform will be limited, and because all functions of a given type are self-similar, you can, by careful observation of Kubernetes performance data (in our case, returned by an Opspack for Kubernetes, soon to be released from beta) and underlying VM performance stats (SNMP), see how load shapes perform, and infer when, for example, it makes sense to join a new Kubernetes worker to the cluster.
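As a rough illustration of that kind of inference (our own sketch, not the Opspack itself), one could watch sustained node utilisation and distinguish a short cold-start burst from a genuine ramp before joining a new worker:

```python
def needs_new_worker(cpu_samples, threshold=0.8, sustained=3):
    """Flag a scale-out only when utilisation stays above `threshold` for
    `sustained` consecutive samples, so brief cold-start spikes are ignored."""
    run = 0
    for sample in cpu_samples:
        run = run + 1 if sample > threshold else 0
        if run >= sustained:
            return True
    return False

# A lone cold-start spike should not trigger; a sustained ramp should.
print(needs_new_worker([0.3, 0.95, 0.4, 0.5]))   # False
print(needs_new_worker([0.6, 0.85, 0.9, 0.92]))  # True
```

The threshold and window lengths here are placeholders; in practice they would be tuned against the load shapes observed from the cluster.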

“For public cloud providers, serverless computing is a multifaceted opportunity.”

Complicating the picture somewhat: depending on architecture and configuration, some FaaS platforms can be configured to coldstart containers in anticipation of event arrival, and/or maintain a certain proportion of containers on hot standby based on recent traffic, enabling faster and more predictable response times.

Ask Kubernetes for help

The Kubernetes dashboard, easily run on your cluster, provides at-a-glance health info for nodes, replicasets, support services, and the containers providing your FaaS functionality. Likewise, most open source FaaS platforms provide some level of minimalistic but useful monitoring.

Monitor the database

Databases can be critical to serverless apps, and may be a point of failure, since (stateless, immutable) functions can’t cooperate to pool database connections. Note that Node.js and similar constructs can be used to cobble up functional database connection pooling. For more on this topic, check out Opsview’s recent white papers on serverless computing. Opsview,
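The pooling workaround mentioned above can be approximated in any runtime that keeps module-level state between warm invocations, not just Node.js. A minimal Python sketch (the `handler` signature and the use of SQLite as a stand-in database are assumptions for illustration):

```python
# Hedged sketch: reusing a database connection across warm invocations of
# a function container. Module-level state survives while the container
# stays warm, approximating a pool of one. The handler signature and the
# use of sqlite3 as a stand-in database are illustrative assumptions.
import sqlite3

_conn = None  # persists between invocations in the same warm container

def get_connection():
    global _conn
    if _conn is None:  # cold start: open the (expensive) connection once
        _conn = sqlite3.connect(":memory:")
    return _conn

def handler(event):
    # Each invocation reuses the warm connection instead of reopening it
    cur = get_connection().execute("SELECT 1")
    return cur.fetchone()[0]

handler({})  # cold invocation opens the connection
handler({})  # warm invocation reuses it
```

This only helps while a container stays warm; across many concurrently started containers, each still holds its own connection, which is why the article flags the database as a potential point of failure.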

April 2018 | 37


Remote control

Jixing Shen, smart panel technical expert at Schneider Electric, discusses the benefits of remote monitoring and why 24/7 vision over your assets is key to ensuring efficient maintenance and management.


Technological progress may seem like both a blessing and a curse for today’s facility managers. Advancing sensors and the emergence of the Internet of Things (IoT) have optimised certain parts of their roles, while also revealing new challenges and responsibilities. In addition to their established role in keeping their facilities operational, they are now expected to cut costs while growing revenues. This is possible by improving maintenance practices and efficiency, while saving energy across their sites, but these are no small tasks.

In the new energy age, standard approaches to energy and maintenance efficiency are untenable. Buildings are becoming smarter, but with more power-hungry connected devices comes greater demand on their power systems. There is, therefore, a clear need to proactively and effectively manage energy use. In facilities where uninterrupted operation is vital, standard reactive approaches to maintenance will inevitably fail. The status quo must not be accepted. Negating highly disruptive interruptions requires prevention – it is always better than cure. It is also the responsibility of building owners and managers to ensure healthy maintenance and management is prioritised.

To overcome such challenges, facility managers must have full visibility over their electrical assets at all times. While it is physically impossible to be in more than one place at a time, remote monitoring offers complete oversight and direct operation of all disciplines in the building. Facility managers should consider the benefits of an integrated approach to energy and asset management; one that leverages the latest technologies, from cost-effective modular solutions to comprehensive networked systems, for complete oversight and control, reducing energy consumption, maintenance costs and facility downtime.

Omniscient energy management

As their roles and working practices evolve, facility managers may find they are spending more time offsite, or working between different sites, as their responsibilities increase and the number of sites they control grows. This can mean the maintenance and management of numerous buildings, each with its own occupancy classifications, unique requirements and provisions. While there is no need for a manager to maintain close proximity to their assets, the reform of familiar yet inefficient practices is needed to cut costs and deliver optimal efficiency.

Fortunately, today’s managers are beginning to appreciate the benefits of the IoT. Connected building systems and devices, installed on critical assets throughout the facility, ensure constant and secure data flow, whilst keeping the cost of building management systems (BMS) down. This allows energy usage to be tracked down to the second, with fluctuations in consumption accurately traced to individual assets and parts of the facility.

Remote monitoring and advanced cloud-based analytics can take this to the next level of efficiency. They permit the facility manager to keep track of usage patterns at any time and any place, from the convenience of a mobile device such as a phone or tablet, even when they are outside of the facility. With a centralised building management system and control capabilities, managers can adjust environmental settings in an instant with nothing more than a smartphone. Crucial interventions to save power can be made faster: monitoring and powering down a facility’s heating system when climatic conditions no longer require it, for example, can result in lower energy costs. This is especially true in commercial buildings, which generate an immense amount of consumption data. This information is collected through various sensors – including power meters, breakers and temperature monitors – and communicated in real time at an individual building, portfolio, and even urban level.

Simple IoT-based solutions, such as Schneider Electric’s EcoStruxure Facility Expert, can enhance user experience by making building management systems much more efficient. EcoStruxure enables customers to aggregate usage information load by load and provides descriptive and predictive insights, for increased automation and the recommendation of human intervention where needed. Ultimately, this can have a large-scale impact on operations, with growing margins distinguishing the building types that offer more value creation and profitability.

Mobile maintenance

“Negating highly disruptive interruptions requires prevention – it is always better than cure.”

The electrical and physical infrastructure of a facility needs constant monitoring and efficient, well-planned maintenance. The failure of a single asset or piece of equipment can shut down operations or an entire facility, potentially costing the owner millions in lost work and repairs. To prevent this, favourable environmental conditions need to be maintained and services performed on the distribution network on a regular basis.

Facilities follow a wide range of day-to-day maintenance practices. Traditionally, this involved relying on regular, scheduled check-ups or corrective repairs once a breakdown had occurred. Recently, however, more effective ‘predictive’ approaches have emerged. These utilise connected sensors to detect faults, and use environmental data to predict when an asset is likely to fail. Corrective action is then organised automatically before, and not after, the damage is done. This prevents asset and facility downtime and can result in considerable savings and efficiencies.

Real-time, full visibility over a facility’s electrical assets is fast becoming the only sustainable way to perform maintenance in a modern facility. Yet even in this scenario, there are potential gaps. In small teams of building engineers, or in facilities with multiple, disparate sites, problems can still go unaddressed for periods of time when personnel are offsite or under heavy load. When a time-sensitive, potentially critical failure is identified, time is precious.
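To make the predictive idea concrete, here is a hedged sketch of one very simple approach: fit a linear trend to recent sensor samples and estimate when a reading (say, a breaker temperature) will cross a failure threshold, so corrective work can be scheduled before the damage is done. The threshold, the sample data and the least-squares method are illustrative; real systems use far richer models.

```python
# Illustrative predictive-maintenance check: estimate when a monitored
# reading (e.g. breaker temperature) will cross a failure threshold by
# fitting a linear trend to recent samples. Data and threshold are
# hypothetical example values.

def hours_until_threshold(samples, threshold):
    """samples: list of (hour, reading) pairs. Returns the estimated hours
    until the linear trend crosses `threshold`, or None if it never will."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    if slope <= 0:
        return None  # reading is flat or falling: no predicted failure
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope

# Temperature climbing ~0.5°C per hour; alert before it reaches 90°C
readings = [(0, 70.0), (1, 70.5), (2, 71.0), (3, 71.5)]
print(hours_until_threshold(readings, 90.0))  # -> 40.0
```

A monitoring platform would run a check like this continuously per asset and raise a work order once the predicted time-to-failure falls below a maintenance lead time.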

Facility managers must be able to access all the data they need to resolve an issue, but it is not always immediately accessible if they are outside of the facility network. Remote monitoring software, such as Schneider Electric’s EcoStruxure Facility Expert, eliminates this challenge. Mobile applications ensure that when an alert is recorded, it is sent straight to the devices of the facility manager and their team. If a team member is off-site, they can easily share information, coordinate with their on-site colleagues or even seek further insight from another expert through the app. When the asset management system is integrated with the cloud, it also ensures that all the facility data they may need is at their fingertips, ready to be deployed. With a few swipes on their phone, they can quickly see the systems affected, as well as the previous maintenance actions that resolved a similar issue in the past. When maintenance is predictive and integrated, it helps teams resolve issues quickly and reduce downtime.

Remote monitoring offers managers a bird’s eye view of their facilities and operations. The latest tools provide complete visibility and easy access to the data they need, enabling them to take a more proactive role in both conserving energy and coordinating the critical maintenance that keeps the facility operational, efficient and profitable. While adapting to the IoT has created a fair share of teething problems, it ultimately allows facilities managers and their teams to work to a high standard no matter where they may be. Schneider Electric,


It’s Tricky

The increasing complexity facing IT teams is having a significant impact on cybersecurity. Andrew Lintell, regional vice president Northern EMEA at Tufin – which helps over 2,000 companies streamline the management of their security policies – discusses some manageable solutions that won’t damage your defences.


Complexity has very much become the norm for today’s businesses. The rapid rise in the adoption of public, private and hybrid cloud platforms, combined with hugely intricate networks consisting of a growing number of network devices and the rules that govern them, means network architectures are constantly evolving. This rate of development presents a huge number of opportunities for businesses, including the ability to offer new, innovative services, work in more efficient ways and achieve greater business agility. However, it also results in significantly increased complexity for IT teams, which makes staying secure a real challenge.

Indeed, complexity is now viewed as one of the leading risk factors impacting cybersecurity. According to a recent report from the Ponemon Institute, 83% of respondents believe their organisation is at risk because of the intricacy of business and IT operations, highlighting just how prevalent the issue has become. And, with nearly three-quarters (74%) of respondents citing a need for a new IT security framework to improve their security posture, businesses need to find a way to deal with this complexity and the risks it presents. Ultimately, it comes down to efficiently managing a complex web of solutions, while also keeping cyber defences intact.


Patched up

When it comes to maintaining security, one of the biggest issues facing businesses today is best visualised through a ‘patchwork quilt’ analogy. Not only are networks increasing in size, firms are also faced with the challenge of patching together several different systems and services from a wide range of vendors, all of which have distinctive features and capabilities. The sheer quantity of tools and services being used across heterogeneous environments – multi-vendor and multi-technology platforms, physical networks and hybrid cloud – means a larger attack surface. As the attack surface grows, gaps can appear where attackers can find their way inside the network. And without true visibility across the entire architecture, and a clear view of each piece of technology, it’s difficult to find and close those gaps.

The services and applications in these various systems will also likely require different security policies, further adding to the complexity. For example, changing one security policy could have implications elsewhere, and without proper visibility, IT teams aren’t always aware of how one change impacts the entire network. Not only can this have security repercussions, but it can also have a negative impact on business continuity. It’s not just the technical side of things that businesses should be concerned with, however. The human factor of security must also be addressed.

People problems

It has become clear that the complexity issue is further heightened by the fact that today’s IT security teams are often understaffed and don’t have the required levels of expertise to effectively deal with cyber threats.


“83% believe their organisation is at risk because of the intricacy of business and IT operations.”

The so-called ‘skills gap’ has been a widely discussed topic in cybersecurity; one that is becoming more pronounced as cybercriminals expand their capabilities and corporate environments become more intricate. As a result, many businesses lack the skilled information security personnel needed to securely manage their complex networks. Human error and misconfiguration risks are also more prevalent than ever. The likes of security lapses, improper firewall management and overlooked vulnerabilities are all very real concerns that, due to the complexity of modern networks, can become commonplace.

Embracing automation

To address these challenges, businesses need to be able to streamline the management of security policies. By using a centralised policy management tool – one that looks across the entire network and automatically flags policy violations – the task for IT teams is significantly simplified, giving them greater levels of visibility and control. Furthermore, policy-driven automation can be used to ensure a company’s security strategy is consistent across the whole organisation, while also identifying high-risk or redundant rules with a greater degree of accuracy than manual efforts allow. This way, businesses can continue to develop their infrastructures and grow without having to worry about opening themselves up to security risks.

From a people point of view, reviewing existing rules and policies manually is a tedious and time-consuming task, one that can easily result in mistakes. An automated tool removes the threat of human error. It can also complete the job in a fraction of the time, making IT teams more efficient and freeing them up to perform higher-level functions that increase the business’s overall security.

Coping with complexity is a very real problem for IT security teams, but it is one that can be overcome. By embracing automation, organisations can be sure that nothing will fall through the cracks and, even when a new piece of software is introduced, the overall system will remain as secure and agile as possible. Businesses that address both the technical complexity and the human factor of corporate networks can continue to grow and add new services, safe in the knowledge that their defences are stronger than ever. Tufin,
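As an illustration of the kind of check such a tool automates, the sketch below flags rules that are fully covered by an earlier rule in an ordered rule list, and are therefore redundant or shadowed. The rule format and example data are hypothetical, not Tufin’s actual model.

```python
# Hedged sketch: flag firewall rules made redundant by an earlier,
# broader rule on the same port. Rule tuples and example data are
# illustrative, not any vendor's real policy format.
from ipaddress import ip_network

def shadowed_rules(rules):
    """rules: ordered list of (name, source_cidr, port, action).
    Returns names of rules whose source is fully covered by an
    earlier rule on the same port."""
    flagged = []
    for i, (name, src, port, _) in enumerate(rules):
        for _, esrc, eport, _ in rules[:i]:
            if eport == port and ip_network(src).subnet_of(ip_network(esrc)):
                flagged.append(name)
                break
    return flagged

rules = [
    ("allow-lan",  "10.0.0.0/8",     443, "allow"),
    ("allow-team", "10.1.2.0/24",    443, "allow"),  # covered by allow-lan
    ("allow-dmz",  "192.168.0.0/16",  22, "allow"),
]
print(shadowed_rules(rules))  # -> ['allow-team']
```

Running such checks automatically across thousands of rules is exactly where policy-driven automation beats manual review, both on accuracy and on time spent.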

Projects & Agreements

Epsilon and WIOCC partner to deliver on-demand connectivity between Africa and the rest of the world

Epsilon has partnered with Africa’s carriers’ carrier WIOCC to deploy its Infiny by Epsilon on-demand connectivity platform. The partnership provides WIOCC with on-demand connectivity to major global financial and communications hubs, and extends the reach of the Infiny platform into sub-Saharan Africa for the first time. Using Infiny, customers are able to access any of Epsilon’s 90+ PoPs globally, and can gain direct connectivity to world-leading cloud and internet exchange providers. The partnership supports the growing adoption of cloud-based applications and services across the African continent, enabling service providers to quickly and seamlessly establish connectivity between sub-Saharan Africa, global communications hubs and peering points. WIOCC’s unique, diversity-rich, high-redundancy network – which brings together 55,000km of terrestrial fibre in Africa and investments in more than 60,000km of international submarine cable – interconnects over 500 locations across more than 30 African countries. This infrastructure underpins WIOCC’s ability to connect its customers to key financial and commercial centres within Africa and around the world. “Partnering with Epsilon further expands our capabilities, offering our customers more flexibility in accessing global hubs. Epsilon’s Infiny platform makes it simple to connect, grow our presence globally, and improves our customers’ ability to quickly roll out new applications and services in Africa,” says Chris Wood, CEO at WIOCC. Epsilon, WIOCC,


Virtual Power Systems joins SAP and Schneider Electric to prove software-defined power optimises energy utilisation

Virtual Power Systems, creators of Software-Defined Power, is working with SAP’s multicloud computing team and the SAP Co-Innovation Lab in Silicon Valley to validate the use of VPS’ Intelligent Control of Energy (ICE) platform for optimising power delivery. With VPS ICE, SAP will test the ability to track, monitor and manage power usage within the data centre while automatically reallocating power distribution based on capacity and availability demands. “We enable on-demand power delivery by dynamically allocating capacity to data centre servers, racks and systems, as needed,” says Steve Houck, CEO of Virtual Power Systems. “With ICE, next-generation data centre and cloud providers can increase power capacity and resiliency within existing IT footprints to improve revenues while reducing capital and operating expenses. Enterprise customers also benefit from reduced power infrastructure wait times and costs, empowering them to invest more in IT initiatives that drive business innovation.” Typically, power and cooling cost more than the IT equipment they support, which pressures data centre and cloud operators to find ways to drive energy efficiencies without compromising system availability or reliability. Virtual Power Systems conquers this challenge by using software-defined power to abstract power controls through a layer of software virtualisation. By applying machine learning and data analytics, Virtual Power Systems enables better management of data centre growth while relieving power-capacity constraints. Virtual Power Systems,

UiPath opens first US development centre in Bellevue, Washington

UiPath, an enterprise Robotic Process Automation (RPA) software company, has announced the opening of a new product development centre in Bellevue, Washington. Param Kahlon, who previously held senior managerial roles at Microsoft (MSFT) and SAP Labs (SAP), has been appointed as chief product officer and will lead the product teams in Bellevue, Bangalore and Bucharest, as they chart the course for better, automation-enabled business processes. Param most recently ran the development teams for Microsoft Dynamics 365 sales and customer service applications. There, he played a pivotal role in building the cloud business for Dynamics 365, growing its revenue multiple times over the last few years. Prior to Microsoft, Param held leadership roles in product management at SAP and Siebel/Oracle. He possesses deep domain knowledge of enterprise business applications software in areas such as contact centre, sales, order management, asset management and analytics. “I am passionate about helping enterprises execute core operational processes as seamlessly and as efficiently as possible,” says Param. “Automation presents a huge opportunity for businesses to scale operations and deliver best-in-class results across departments spanning finance and accounting, human resources, customer service, and sales and marketing.” The Bellevue office will focus on adding AI capabilities to the UiPath platform. Because the UiPath platform is built on Microsoft .NET components, the centre’s proximity to the Microsoft headquarters and Microsoft’s development community is a critical advantage – particularly as Microsoft continues to concentrate on AI. UiPath,

Projects & Agreements

Tintri and Commvault automate data protection for customer applications

Tintri, a provider of enterprise cloud platforms, has announced its integration with Commvault software to allow joint customers to automate more aspects of their data protection processes for virtual machines (VMs). Through the integration between Tintri storage and Commvault IntelliSnap, a technology solution that makes snapshots more valuable and effective at protecting and recovering data, joint customers gain a protection and recovery solution that reduces backup overhead and simplifies operations. Commvault IntelliSnap technology tightly integrates Tintri snapshots into VM-aware, data-focused protection and recovery operations. By leveraging Tintri per-VM snapshots and Commvault intelligent policy-based snapshot management, the solution drives highly parallel snapshot and backup operations, while eliminating manual scripting and management processes, shrinking the cost and complexity of data management. “Commvault IntelliSnap technology and Tintri have a strong synergy around the virtual machine,” says Jonathan Howard, director, technical alliances, office of the CTO, Commvault. “With this integration, VM awareness spans from Tintri storage into hardware snapshots through Commvault backup, recovery, and VM lifecycle management, which offers our joint customers more visibility into their end-to-end operations and more control of data protection and recovery.” Tintri,

Cinia partners with Megaport to deliver direct cloud connectivity to the Nordics

Megaport, a global Network as a Service (NaaS) provider, and Cinia Oy, creator of intelligent connectivity solutions, have announced a partnership to provide enterprises with direct access to leading Cloud Service Providers (CSPs) and extend the reach of Cinia’s network services. “The core tenet of Megaport is making connectivity easy,” says Vincent English, chief executive officer, Megaport. “We overcome barriers to entry for cloud adoption by making it easy to get connected directly to cloud, managed, and network service providers. This enables scalability, consistent performance, and bandwidth optimisation that helps reduce total cost of ownership.” “Our software-defined network empowers enterprises to architect multicloud and hybrid cloud connectivity. For maximum ease of use, customers can order services in real time via our Portal or through our API. Our partnership with Cinia gives Finnish enterprises direct cloud access via Cinia’s robust national and international network. Cinia is an agile, industry-leading service provider and we are proud to partner with them to elevate connectivity options in Finland.” “We partnered with Megaport because connectivity is a key factor when accessing public cloud services and their solution, combined with our footprint in the Nordics, addresses a clear demand,” comments Taneli Vuorinen, SVP, Cinia. Megaport,


Projects & Agreements

WANdisco gains co-sell status through Microsoft One Commercial Partner Programme

WANdisco, the live data company, has announced that it has achieved co-sell status through the Microsoft One Commercial Partner Programme. The new status was achieved after meeting rigorous criteria to become co-sell ready, including integration of WANdisco Fusion with Microsoft Azure Data Box and Azure HDInsight. As a result, WANdisco can now take the WANdisco Fusion live data platform to market as a packaged offering with Microsoft Azure. WANdisco’s dedicated partner managers will work directly with the Microsoft field sales team on hybrid cloud customer opportunities and related account planning activities. WANdisco Fusion provides continuous replication of selected data at scale between multiple big data and cloud environments. With guaranteed data consistency and continuous availability, Microsoft Azure customers will now have easy access to the cost-saving benefits of WANdisco Fusion’s hybrid architecture for on-demand data analytics and offsite disaster recovery. “We are seeing significant demand from customers who want to expand their on-premises data lake into Microsoft Azure,” says John ‘JG’ Chirapurath, general manager of data platform marketing, Microsoft Corporation. “WANdisco Fusion enables continuous data replication for Microsoft Azure and HDInsight customers who also want to maintain on-premises consistency at scale.” WANdisco,


Intelligence Partner brings RingCentral cloud communications solutions to businesses in Spain

RingCentral, a provider of enterprise cloud communications and collaboration solutions, has announced that Intelligence Partner, a consulting leader in cloud computing solutions, has selected RingCentral as its exclusive cloud communications and collaboration solutions provider. With RingCentral, Intelligence Partner will bring the industry’s leading cloud communications solutions to businesses across Spain. This will enable companies of all sizes to more effectively collaborate and connect with their customers, partners, and employees. The rapid adoption of smartphones in the consumer realm has caused a shift in the way people work. Today’s modern workforce demands solutions that enable them to work from anywhere, anytime, and on any device. This flexibility can only be accomplished with mobility-driven cloud solutions. RingCentral provides mobile-first voice, video conferencing, web meetings, team messaging, and contact centre capabilities as a complete cloud solution. In addition, RingCentral’s open platform integrates with the industry’s leading cloud business applications, including Google, Salesforce, and Zendesk, to power greater business efficiencies. Intelligence Partner is a Google Premier Partner, and these product integrations between RingCentral and Google solutions are key to empowering workforces to be more productive. “Through this alliance, Intelligence Partner will further expand its range of solutions complementary to G Suite, helping Spanish businesses take full advantage of what a cloud communications solution can offer,” says Ignacio Bañó, general director of Intelligence Partner. RingCentral,

KKR to invest $172m in Cherwell Software

Cherwell Software, an enterprise service management company, has announced that global investment firm KKR will take a larger stake in the company through its next-generation Technology Fund, which focuses on investments in software, security, internet, digital media, and information services. This latest investment of $172m will be in addition to KKR’s initial $50m investment made in Cherwell in February 2017. Cherwell offers software that has enabled thousands of organisations to modernise their business operations by automating services digitally. After establishing itself as a leader in the IT Service Management (ITSM) space, Cherwell is poised to expand into the service management market, which is estimated to top $30bn by 2020. As organisations of all sizes seek to connect disparate digital services and data silos in order to gain more insight, efficiency and productivity, Cherwell’s platform unifies the tools that ensure their businesses are as efficient as possible. Companies, schools, hospitals, and government agencies all over the world depend on Cherwell’s solutions to manage their IT operations and, increasingly, achieve digital transformation across their organisations. Cherwell,

Projects & Agreements

Aircraft expert GDC selects Volta as its data centre partner in Europe

GDC Technics, an aircraft modification centre providing maintenance on Boeing and Airbus aircraft, has announced Volta Data Centres as its data centre partner of choice for Europe. GDC’s server infrastructure was previously hosted in Munich, at its German office. Following a streamlining of its engineering operations, the company wanted to move its IT infrastructure to the UK, with London being a prime location. GDC’s relocation project faced two challenges: the servers needed to be relocated between different countries, and the entire project needed to be completed within 10 days to avoid any disruption to the business. After contacting nearly 100 colocation providers and carefully comparing them, GDC decided to select Volta Data Centres. With the support of Volta Data Centres’ operations team, GDC was able to meet its deadlines and complete the entire relocation within four days, with no downtime affecting its multiple ongoing projects. Volta successfully equipped the allocated space and provided the engineering support to meet GDC’s requirements. GDC experienced 100% uptime, leading to a smooth transition and no negative impact on the business. The aerospace specialist now benefits from a state-of-the-art data centre in central London, allowing it to provide outstanding maintenance on Boeing and Airbus aircraft. Volta Data Centres,

Cisco and Portugal’s government announce collaboration to accelerate country digitisation

The Government of Portugal and Cisco have signed a memorandum of understanding (MoU) to accelerate country digitisation. The signing followed a meeting between Portugal’s Prime Minister António Costa, Cisco chairman and CEO Chuck Robbins, and Sofia Tenreiro, general manager of Cisco Portugal. Over the next two years, Portugal and Cisco will cooperate to capture opportunities presented by a digital economy, with the aim of positively impacting economic growth, education, innovation and competitiveness, as well as social inclusion and quality of life. The MoU will focus on several key points that are part of Portugal’s National Reforms Programme. These include support for entrepreneurship and business innovation, with a specific focus on start-ups; increasing and improving digital skills; the application of innovative digital technologies across public sector services, education, industry 4.0 and mobility; and cybersecurity. António Costa, Prime Minister of Portugal, says, “By accelerating the national digitisation agenda, Portugal can increase GDP growth, create jobs, and improve digital inclusion for our people and businesses. We strongly welcome Cisco’s contribution to create a sustainable innovation ecosystem that will enable our country to better compete in the global digital economy.” Cisco,

Projects & Agreements

UK colocation startup IP House deploys Schneider Electric EcoStruxure IT

IP House, a London-based data centre startup and new entrant to the UK colocation market, has selected EcoStruxure IT, the next-generation cloud-based DCIM platform, to provide 24/7 monitoring of its ISO-accredited facility. “During the planning stages we chose to utilise software from an industry-leading vendor; one that had a reputation for innovation and a focus on continual improvement,” says Vinny Vaghani, operations and commercial manager at IP House. “One of the biggest drivers for selecting EcoStruxure IT was its vendor-neutrality and ability to integrate with different products to provide detailed data in a single dashboard. As a colocation provider we have to adhere to the highest standards of uptime and resiliency; monitoring and management is therefore an absolute necessity for our customers.” IP House’s carrier-neutral data centre has been built to Tier III standards on the edge of London’s financial district. It contains 14,000ft² of white space across two technical suites and will be operational later this month. “Our clients depend on both uptime and 24/7 connectivity to business-critical applications hosted within the data centre,” says Sean Hilliar, data centre manager at IP House. “Having the ability to proactively monitor all elements of the infrastructure with an advanced software solution like EcoStruxure IT will reassure customers that we’re providing them with a secure, competitive and resilient colocation service that safeguards them against downtime.” IP House,

DigiPlex collaborates with NexGen Networks to deliver increased level of connectivity DigiPlex, a Nordic data centre operator, and NexGen Networks, an integrated telecommunications carrier, have announced a collaboration that extends NexGen’s global footprint to DigiPlex’s Stockholm data centre. The agreement will enable DigiPlex customers to connect to locations around the world through NexGen’s carrier network. NexGen customers will also be able to expand their infrastructure into DigiPlex’s award winning, carrier-neutral data centre. “We are delighted to have NexGen on board as we expand our campus facility in Stockholm. In today’s digital world, the data centre is becoming an interconnected business ecosystem for critical digital operations, and DigiPlex customers can now take advantage of the increased level of connectivity services

46 | April 2018

that NexGen brings,” says DigiPlex CEO Gisle M. Eckhoff. “We continue to enhance connectivity in all our data centres as a key part of our client offering.” NexGen Networks is also announcing the availability of its TaaS (Time as a Service) offering, NPLTime, in DigiPlex’s Stockholm data centre. “NexGen’s pan-European cybersecure footprint, in partnership with NPL to deliver NPLTime, will not only enable financial enterprises in Stockholm, but also DigiPlex customers impacted by ESMA’s ruling, to de-risk all compliance ambiguity in technically addressing MiFID II RTS 25,” says Janesh Mistry, senior sales director, NexGen Networks EMEA. DigiPlex,

Projects & Agreements

NFLEX joint solution from Fujitsu and NetApp now available

The NFLEX Converged Infrastructure solution, developed and sold jointly by Fujitsu and NetApp, is now available to order in Europe, the Middle East and Africa (EMEA). Designed to take the complexity out of implementing and operating virtualised application environments in the data centre, NFLEX provides a simple, ready-to-run infrastructure solution for medium and large enterprises.

The NFLEX Converged Infrastructure solution delivers modular system sizing, single-call support from Fujitsu and NetApp, and integrated system management. The complete, pre-configured solution enables organisations to reduce implementation and operational costs, and supports business growth by flexibly scaling compute and storage capacity. Thanks to flexible pre-configured expansion packs, NFLEX can be extended as workloads increase.

At the core of NFLEX are the latest Fujitsu PRIMERGY CX400 M4 servers and NetApp hybrid- and all-flash storage systems. The single-rack compute and storage solution provides enhanced service delivery and performance that can be tuned to business needs. Whether on-premises, in the cloud or in hybrid IT environments, NFLEX boosts data centre performance and productivity by delivering higher-value IT services at reduced costs. Fujitsu, NetApp,

FatPipe Networks and Lepton Global Solutions collaborate to optimise SD-WAN over satellite

FatPipe Networks, inventor of software-defined networks for wide area connectivity and hybrid WANs, has announced a collaboration with Lepton Global Solutions to optimise SD-WAN over satellite for VSAT customers. Lepton Global Solutions is a provider of customised, cost-effective, end-to-end satellite communications solutions.

Lepton recently launched Lepton BOOST, a secure SD-WAN solution designed to optimise satellite link traffic, powered by FatPipe Networks. BOOST is designed to facilitate the secure and timely delivery of data to customer locations from VSAT-based endpoints. Integrated into the company’s ground infrastructure, Lepton’s BOOST virtual network uses five ‘engines’ to provide WAN optimisation: security, efficiency, Quality of Service (QoS), congestion control and virtualised routing. BOOST provides WAN network management specifically designed to address the high-latency, low-bandwidth environment of satellite networking, with features including caching, compression, acceleration, IPsec VPN and global single IP addressing.

“Lepton’s FatPipe-powered BOOST SD-WAN solution has the potential to transform satellite connections by introducing the enormous benefits of virtualising and overlaying various satellite network links,” says Sanch Datta, CTO, FatPipe Networks. “Lepton BOOST customers can now rely on secure and cost-effective routing of data and applications between headquarters and remote locations without concern for network performance or data breaches.” FatPipe Networks,

Telefónica creates a suite of big data products for enterprises in partnership with Huawei

Telefónica and Huawei have announced a partnership to launch a set of big data products aimed at enterprises that want to develop their own internal use cases, as well as to sell data applications to their clients. These Big Data as a Service (BDaaS) products will be marketed under the brand of LUCA, Telefónica’s data unit.

Initially, the agreement provides customers with four core services: Hortonworks data platform as a service, to help store, process and analyse data; big data integration, to migrate data to the cloud; data governance, to control the quality and security of data; and data miner, to help create insights from data through analytical algorithms and complex structured and unstructured data integration.

Each of the services can be self-deployed for use by the customer on demand. They are infrastructure-agnostic, meaning they can be deployed over a variety of Infrastructure-as-a-Service environments. This method of providing services on demand helps customers avoid long hardware procurement cycles and large investments, as well as reduce time-to-market. The partnership with Huawei will allow Telefónica to widen its portfolio of big data and data analytics services under the LUCA brand, helping customers to accelerate the digitalisation of their businesses and be more competitive. Telefónica, Huawei,



Geist future-proofs data centre power management with next-generation rack PDU platform

Geist, a division of Vertiv and provider of intelligent power and management solutions for data centres, has announced the release of Geist GU2, its highly anticipated next generation of upgradeable rack PDUs. The Geist GU2 family establishes a robust power infrastructure foundation for mission-critical facilities and introduces an evolutionary design platform that simplifies upgrades to new technology as business needs evolve. Geist GU2 is available immediately in all regions where Geist products are sold.

The new line of Geist intelligent, upgradeable rack PDUs improves the user experience by allowing customers to adopt new technology more easily, by hot-swapping modules that update the intelligence within previously installed rack PDUs. These updates can occur on the fly, without the need for downtime. This upgrade capability future-proofs critical power distribution infrastructure while reducing the risk of product obsolescence or under-performing power components as data centres change. In combination with expanded power monitoring and control features, the highly flexible Geist GU2 family represents the highest-value, lowest cost of ownership rack PDU on the market.

Geist GU2 rack PDUs are available in outlet-level monitored, switched, and switched with outlet-level monitoring configurations, all of which can be customised. All models offer the following standard features: high-density outlet options; smart latching relays (on switchable rack PDUs), which significantly reduce power consumption and increase fault tolerance; fault-tolerant daisy chaining; certified high-temperature grade (60°C); wireless options; a slimmer PDU profile; and visible light communication (VLC) technology, which enables the quick scanning and tracking of power metrics from a mobile phone or tablet. All Geist rack PDUs provide reliable rack-mount (zero U) locking receptacles and single- or three-phase power distribution options, and can be customised for specific requirements and configurations with the industry’s fastest design-to-delivery turnaround. Geist,

The SDP0240T023G6 SIDACtor protection thyristor – a class H xDSL line driver protection solution

The SDP0240T023G6 SIDACtor protection thyristor from Littelfuse is engineered for tertiary overvoltage protection in applications such as the VDSL2, ADSL2 and ADSL2+ cards commonly used in broadband communications. This robust, solid-state component provides a surge capability of 30 A (based on the IEC 61000-4-5 1.2/50–8/20 μs waveform) in a compact, surface-mount SOT23-6 package (see Figure 2);

Figure 1


most components of similar size offer far less surge-handling capability. Because it is designed to have minimal impact on data signalling, the SDP0240T023G6 will not negatively affect the application’s data rate or reach. The component was created using a new silicon process that provides a high-speed crowbarring response to surges, ensuring lower overshoot voltages and improved ESD, lightning and power fault protection. This process also ensures the component’s low off-state capacitance will vary by less than 1 pF over the range from 0 V to its standoff voltage, without external biasing (see Figure 1). The relatively flat curves shown in Figure 1 demonstrate the SDP0240T023G6’s minimal variation in capacitance as the line voltage approaches the standoff voltage. The red curve indicates the longitudinal

Figure 2

(from each line to the ground reference of the chipset) capacitance values; the blue curve shows the differential (or transverse) values. The SDP0240T023G6 provides a balanced broadband protection solution because it will not convert a longitudinal surge event into a differential event. This UL-recognised component is compatible with DSL bandplans up to and including the 30a bandplan (30 MHz), with turn-on response times of less than 500 ns. Littelfuse,


Fast, simple, effective: Rittal Edge Data Centre for innovative IoT solutions

Companies that employ machine-to-machine communication to streamline manufacturing require real-time capabilities. IT resources deployed in close geographical proximity ensure that latency is low and data readily available. The Rittal Edge Data Centre provides an effective answer to this need. It is a turnkey, pre-configured solution based on standardised infrastructure. It can be implemented rapidly and cost-efficiently, paving the way for Industry 4.0 applications.

The sensors and actuators deployed in smart production systems continuously relay information on the status of processes and infrastructure. This forms the basis for innovative services – such as alerts, predictive maintenance and machine self-optimisation – delivered by the company’s IT department in real time. To

make this possible, and to rapidly respond to events and anomalies, low latency between production and IT infrastructure is critical.

Fast, simple, effective

A remote cloud data centre is unable to support these scenarios. The solution is edge computing, i.e. computing resources at the perimeter of a given network. With this in mind, Rittal has introduced a new edge data centre: an end-to-end product with standardised, pre-configured IT infrastructure. The Rittal Edge Data Centre comprises two Rittal TS IT racks, plus corresponding modules for climate control, power distribution, UPS, fire suppression, monitoring and secure access. These units are available in various output classes and can be easily combined for rapid deployment.

Moreover, to safeguard critical components from heat, dust and dirt in industrial environments, the Rittal Edge Data Centre can be implemented in a self-contained high-availability room. It can be extended two racks at a time, and the modular approach provides customers with diverse options, allowing the solution to accommodate a variety of scenarios – for example, installation in an IT security room or in a container, to be located wherever it is required. Rittal,

Centiel increases power density of its 4th generation modular UPS system: CumulusPower

Centiel Ltd has launched new 25kW and 60kW UPS modules for its pioneering 4th generation modular UPS system, CumulusPower. This three-phase, modular system offers 99.9999999% (‘9 nines’) system availability with low total cost of ownership, and now 20% more power density. The new modules complete a family that also includes 10kW, 20kW and 50kW options.

“Real estate is expensive and so data centres need to save space where they can,” says Mike Elms, sales and marketing director, Centiel Ltd. “In addition, more space for racks in a facility can, in turn, potentially generate revenue for the operation. Therefore, a UPS which provides the most power in the smallest possible footprint is a valuable asset.

“The new 25kW and 60kW versions of CumulusPower have all the benefits

of our existing industry-leading modular UPS, but now offer yet more power in the same footprint. Unlike traditional multi-module systems, the CumulusPower technology combines a unique Intelligent Module Technology (IMT) with a fault-tolerant, parallel Distributed Active Redundant Architecture (DARA) to remove single points of failure and offer industry-leading availability.

“‘9 nines’ system availability is achieved through fully independent and self-isolating intelligent UPS modules, each with individual rectifiers, inverters, static bypass, CPU, and communications logic and display,” explains Mike. “In the unlikely event of a module failure, the module can be quickly and safely hot-swapped without transferring the load to bypass and raw mains.

“In addition, CumulusPower has been designed to reduce the total cost of ownership through low losses,” Mike confirms. “The high double-conversion efficiency of 97% at the module level means it is currently the best solution available to protect data centre infrastructure, as its configuration also reduces downtime risk, avoiding costly errors as well as increasing energy efficiency.” Centiel,
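To put the ‘9 nines’ claim in perspective, a short back-of-envelope calculation (ours, not Centiel’s) converts an availability percentage into expected annual downtime:

```python
# Back-of-envelope conversion from an availability percentage to expected
# downtime per year. Illustrative only: the '9 nines' figure is the vendor's
# quoted availability; the arithmetic below is a generic calculation.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600 seconds

def downtime_per_year_seconds(availability_pct: float) -> float:
    """Expected annual downtime in seconds for a given availability (%)."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

# '9 nines' (99.9999999%) allows roughly 32 milliseconds of downtime a year;
# by comparison, 'five nines' (99.999%) allows about 5.3 minutes a year.
nine_nines = downtime_per_year_seconds(99.9999999)   # ~0.03 s
five_nines = downtime_per_year_seconds(99.999) / 60  # ~5.3 minutes
```

The contrast between milliseconds and minutes of annual downtime is why each additional ‘nine’ of availability is treated as significant in UPS specifications.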


final thought

Hybrid Hype

As IT requirements evolve, businesses require an infrastructure that promotes agility, security, speed and flexibility. Guy England, director at data centre solutions provider Lenovo DCG, explains why this need has given rise to the hybrid enterprise.


Technology is the enabler of the digital future, and being fluid, agile and capable of evolving in real time, as required, is where businesses can differentiate themselves from the competition. Additionally, it is where businesses can deliver brilliant customer service in a constantly changing, fast-paced business environment.


More organisations are committing to a mix of efficiency, performance and cloud-readiness. Driving this is the requirement for some workloads to remain on-premise. Moving everything to the cloud is therefore not a realistic option for those who, for example, need to run their workloads on-premise for regulatory or cost reasons. Hybrid is the answer.

The hybrid key drivers

Businesses want to be able to respond to evolving customer requirements quickly, meet the increasing demands of hyperscale and operate faster at the ‘edge’. This means their infrastructure must also evolve to support these ambitions and provide the agility and speed the modern world demands.

This is why we see many turning to a hybrid IT model. Operating a hybrid IT environment also means businesses don’t have to put all their eggs into one cloud basket. The cloud is not an all-or-nothing proposition; rather, businesses are able to maintain an ‘open’ approach, without migrating their entire portfolio to the cloud and without vendor lock-in concerns. Going hybrid also gives each business the opportunity to revolutionise its current IT set-up and enhance its existing systems in the cloud. Essentially, this means they don’t have to scrap their entire infrastructure to take advantage of the benefits associated with cloud computing. Businesses want to be able to operate cost-effectively, without limitations, and a hybrid IT environment is a powerful enabler.

Combining the best of both worlds

A hybrid IT environment offers the best of the private and public cloud worlds, helping to tackle the reservations many businesses have about moving their entire IT infrastructure to the cloud. Perhaps the most significant benefit is that innovative technologies can be integrated as and when required, helping to scale business and IT operations. The right workloads can be chosen and moved depending on business needs and new technologies. The ability to scale workloads on demand helps handle the massive amounts of data being created today. Even more importantly, it means emerging technologies, such as AI and machine learning, become more accessible – which matters all the more in today’s quickly changing landscape, marked by a number of emerging technologies and fluctuating digital requirements.

Not only will a hybrid approach enable greater flexibility, but it will also result in better total cost of ownership (TCO), agility, security and customer experience – all vital aspects that modern-day businesses are striving to achieve. Given the associated benefits, it’s not surprising that by 2020, 90% of organisations are expected to adopt hybrid infrastructure management. A hybrid infrastructure helps businesses move towards digital success and opens the doors to both edge computing and hyperscale data centres – both of which are crucial to handle the demands of the connected world we live in. Essentially, the greater agility you have, the more possibilities you will have at your disposal.

Boosted data privacy

A hybrid infrastructure can enhance compliance, as businesses have the flexibility to choose where to host and process their data. This becomes particularly pertinent with new data legislation such as the General Data Protection Regulation (GDPR), which is to be enforced from May 2018. Additionally, a hybrid infrastructure means critical, business-sensitive data can still be stored on-premise to improve governance, while less sensitive data can be sent to and stored in the cloud. However, data compliance must start with sound data governance and policy; technology plays its part by providing some of the elements needed to achieve regulatory compliance.

Ready for the evolving world

Given the performance and governance benefits of a hybrid IT environment, it’s not surprising that European IT decision-makers recently voted readying their data centre infrastructure for hybrid cloud deployments as their top investment priority. Considering the rapid evolution of cloud technology, the one thing modern-day businesses can be sure of is continued change – making it all the more important for businesses to operate with greater flexibility and agility to meet the demands of both the market and their customers. Lenovo,

Data Centre News is a digital news-based title for data centre managers and IT professionals. In this rapidly evolving sector it’s vital that data centre professionals keep on top of the latest news, trends and solutions – from cooling to cloud computing, security to storage, DCN covers every aspect of the modern data centre. The next issue will include a special feature examining design and facilities management in the data centre environment. The issue will also feature the latest news stories from around the world, plus high-profile case studies and comment from industry experts. REGISTER NOW to have your free edition delivered straight to your inbox each month, or read the latest edition online now at
