

data centre news

December 2017

Roars when you need it, sleeps when you don’t.

inside... Special Feature UPS

Meet Me Room

Eric Schwartz of Equinix on current industry pressures, challenges and changes.

Final Thought A look ahead to industry trends for 2018.

Powerful Distinctive Unique Geist locally manufactures reliable intelligent PDUs, that arrive when you need them and are 100% tested before leaving our facility. See how we do it


in this issue…

December 2017

4 Welcome

6 Industry News
Why do only one in 10 businesses see the UK as a digital leader?

14 Centre of Attention
When it comes to storage, Gary Quinn of FalconStor gives us his insight into bridging the generation gap.

16 Meet Me Room
Eric Schwartz of Equinix discusses how far the industry has come and how great it'd be to fully function without sleep.

Special Feature: UPS

18 Michael Brooks of MPower UPS walks us through the benefits of a modular UPS system and the steps you can take to ensure your system's availability and reliability.

20 Dr. Alex Mardapittas of Powerstar discusses the importance of data storage and UPS to your data centre.

23 Giovanni Zanei of Vertiv discusses whether lithium-ion batteries are a cost effective way of supporting your UPS.

26 Cloud
To buy or to build? That is the question. Mark Baker of Canonical is here to unmuddy the waters.

28 DCIM
Nick Claxson of Comtec Enterprises explains why DCIM will always maintain its value.

30 Disaster Recovery
Don't look back in anger. Are availability zones a feasible disaster recovery solution? Richard Stinton of iland gives us his insight.

32 DC Future
David Trossell of Bridgeworks discusses what is needed to achieve the data centres of the future.

34 Projects and Agreements
Find out how Snowflake is helping empower data scientists.

40 Company Showcase
MPower UPS and FEC Heliports Equipment work together to promote helipad safety.

42 Final Thought
Giordano Albertazzi of Vertiv looks ahead to 2018 industry trends.

December 2017 | 3

data centre news

Editor Claire Fletcher

Group Advertisement Manager Kelly Byne – 01634 673163

Image courtesy of Blue Coat Photos.

Studio Manager Ben Bristow – 01634 673163

Designer Jon Appleton

Business Support Administrator Carol Gylby – 01634 673163

Managing Director David Kitchener – 01634 673163

Accounts 01634 673163

Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173

The editor and publishers do not necessarily agree with the views expressed by contributors nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2017. All rights reserved.



Claire Fletcher, editor

It would seem that these days, security breaches are just part of the landscape, and 2017 proved to be a less than successful year for various major companies when it came to high-profile hacks. But why is this? Do organisations just not know how to protect themselves? Or is it down to the IT skills gap we've been hearing so much about? Do they even care? The general attitude seems to be, 'it won't happen to me.' If the events of 2017 have told us anything, it's that no one is safe.

And it's not looking to get any easier. A major trend I've seen for 2018 is set to be the increasing shift to mixed cloud environments, which, despite its advantages, poses its own problems. Securing your data becomes infinitely more difficult when you have to duplicate efforts across multiple cloud infrastructures, putting a major dent in the control you have over your environment. And remember that skills gap? It was hard enough finding skilled cloud developers in the first place – those guys are like gold dust. Trying to find developers who have a vast knowledge of multiple clouds? Even harder.

And I haven't even got to the biggest change set to rock the boat in 2018: GDPR. These new rules apply to any company, anywhere, that collects sensitive data about European customers or employees, and they carry ludicrously high penalties for those failing to comply. Once they kick in, data centre operators may become prime targets for regulators' enforcement efforts. But I've saved panicking about that for the January issue – it is Christmas, after all.

That said, on behalf of everyone at DCN, I'd like to thank all who have contributed and helped make this year a success, and I wish you all a smashing Christmas and a very merry new year. Should you have any questions or opinions on the topics discussed, please write to: Claire.fletcher@


industry news

BMC survey: New management approaches required for multi-cloud environments

Rapid adoption of multiple public and private cloud services and infrastructure has created complexity that is amplifying the traditional pressures placed on digital businesses, according to new research from BMC. As a result, organisations are realising that current strategies and methods of managing multi-cloud environments need rethinking, and are turning to artificial intelligence (AI) as an emerging solution.

BMC announced the survey of more than 1,000 IT decision-makers in 11 countries, conducted by a research firm. The survey examines how businesses are investing in multiple public cloud solutions from several vendors to optimise costs, stay agile and mitigate risks. These distributed, multi-cloud environments are also creating a broader attack surface and potentially increasing costs. According to the survey, 40% of IT leaders do not know how much their businesses are spending on cloud services in total, rising to 53% of IT leaders in the UK.

Recognising the complexity of managing multiple cloud environments, 80% of global respondents agree that new approaches and tools are required. Among the new approaches being considered is AI, with 78% of IT decision-makers indicating that their companies are looking for ways to apply artificial intelligence as part of their multi-cloud management strategies. However, the UK appears to be behind the curve; just 64% of IT decision-makers are considering AI, which puts the UK behind all 10 other countries surveyed by BMC. BMC,


Telecoms and banking sectors most advanced in digital transformation

Ovum's latest ICT Enterprise Insights programme has found that, across 14 industries, the telecoms and banking sectors are the most mature in the drive for digital transformation. The programme was based on primary interviews with over 6,300 enterprises globally, asking senior ICT executives to rate their organisations against nine steps identified by Ovum as key to digital transformation.

While the telecoms sector was most advanced, the overall level of digital maturity is low, with its index score currently at 43.9%, and banking next at 42%; many providers in both sectors are still at early or mid-stream stages across most of the steps. Indeed, across the 6,300+ enterprises surveyed, only 8% consider themselves to have achieved transformation, and only a fraction over 16% believe they are well-advanced. Almost a quarter (23%) rate themselves as still at the early stages.

The ICT Enterprise Insights programme also examined the role of enabling digital technologies in the path to digital transformation, including Big Data, blockchain, IoT, platform architecture/APIs, artificial intelligence (AI) and microservices. Here, enterprises are most advanced in the adoption of Big Data and API-based architectures, with close to 40% of enterprises actively trialling or deploying these, and just under a further 50% planning or considering doing so in the future. Microservices and IoT are currently less advanced in terms of active deployment, but over 50% are planning or considering them for the future. In contrast, while AI and blockchain are much discussed, actual traction among enterprises using such technologies for digital transformation is much less developed. Ovum,


Chancellor expected to inject £75m into AI, as only one in 10 business leaders see UK as a digital leader

Only one in ten business leaders see the UK as a leader in digital, according to the latest survey by Deloitte. Although 85% plan to invest in AI and the Internet of Things (IoT), only 22% of businesses have done so to date. The chancellor's new budget is expected to allocate £75 million to AI, and £76 million to digital and construction skills.

Encouraging investment in technologies like AI and the IoT is massively important to UK productivity. But to make the most of the hype, AI and other new technologies need to be implemented with clear and specific business objectives in mind, says World Wide Technology.

Ben Boswell, VP Europe of World Wide Technology, commented, "AI and IoT technologies have the capacity to offer real productivity improvements to businesses. We've recently seen a manufacturer in the US on track to achieve $1 billion in savings by the end of the year using predictive analytics, which rely on these capabilities.

"The UK stands to benefit too, if it can get the investment into new innovations right. With new challenges to global competitiveness following Brexit, this kind of advantage could be crucial. Spend on AI and digital technologies is currently only a very small percentage of overall IT budgets. Digital technologies hold massive potential, but they need to be planned properly." Deloitte,

Tintri research reveals guesswork is behind a quarter of all storage capacity planning decisions According to research conducted by Tintri at IPExpo in London in October 2017, nearly a quarter (24%) of storage capacity planning decisions are made by IT leaders on the basis of a ‘best guess’ or no planning at all. In addition, three quarters of the respondents in the study operate with up to 20% of their storage capacity unused, keeping it in place as a buffer to maintain predictable performance. When planning storage capacity needs, 42% of the respondents rely on previous experience, but only 12% run a simulation project to help accurately guide decisions. Just under 10% of the respondents carry out no planning at all for future needs, and only one fifth (22%) rely on input from technology vendors or partners.

The research also revealed that accurate storage capacity planning is a challenge for most buyers, with 70% of the respondents reporting that they have underestimated (43%) or overestimated (27%) their needs. In addition, 77% of the respondents reported that up to a fifth of their storage capacity is unused, except as a buffer to maintain performance. At the extremes, 37% revealed that more than a fifth of their storage capacity is unused, with only 10% able to report that they operate with 5% unused storage capacity or less.

Scott Buchanan, chief marketing officer at Tintri, commented, "Guesswork leads to performance issues or overprovisioning and, with the analytics tools available today, it's simply unnecessary." Tintri,


London remains first choice for data centres in Europe despite Brexit headwinds

According to the Data Centre Cost Index 2017 from Turner & Townsend, demand for data centres in London is outstripping supply from contractors and has driven up construction costs by 4.1%, even though London already ranks as the third most expensive city in the world in which to build data centres, after New York and San Francisco.

The Index analyses input costs – including labour and materials – and compares the cost of both new build and technical fit-out projects across 18 established data centre markets worldwide. Foreign investment and the growth of cloud services providers are key factors fuelling the growth of data centres in London. In the rival city of Frankfurt, the construction market for data centres is classified as 'overheating' due to a high number of projects and intense competition for physical resources and labour driving prices up.

The Index reveals the significant construction disparities that exist across Europe. Amsterdam, Paris and Stockholm have new build data centre construction costs up to 10% lower than London's. A key construction trend throughout Europe – and one set to continue in 2018 – is the prominence of UK and Ireland-based contractors securing projects across mainland Europe, particularly where local data centre expertise is not available on a large scale. Contractors delivering new build facilities in Frankfurt can expect to command margins of up to 15%, while average margins for new builds in London are 6.5%. Turner & Townsend,

Cloud computing leaders announce new Cloud Analytics Academy

AWS, Looker, Talend and WhereScape have teamed up with Snowflake to launch the Cloud Analytics Academy, a training and certification programme for data professionals who want to advance their skills for the technology and business demands of today's data analytics. The academy provides online training courses for technical experts, business leaders, analysts and business intelligence professionals, who can receive up to three Cloud Analytics Academy certifications upon completion. Anyone who completes all three tracks will achieve Academy Master certification.

The adoption of cloud computing continues to evolve at a rapid pace. Gartner expects the worldwide software-as-a-service (SaaS) market to grow more than 63% by 2020. The Cloud Analytics Academy is designed to provide data professionals with the most up-to-date information, training and best practices for cloud analytics.

Snowflake's chief technical evangelist and Cloud Analytics Academy curriculum designer, Kent Graziano, said, "The demand for cloud data analytics professionals continues to grow at an astounding rate as organisations migrate to the cloud and look for new ways to gain insights from their data. The Cloud Analytics Academy is the perfect environment for high achievers to sharpen their skills, expand their knowledge and become go-to experts in building data solutions for the cloud."

The Academy curriculum features experts from Snowflake, Amazon Web Services, Looker, Talend, WhereScape, Duo Security, Age of Learning, Sharethrough and YellowHammer. Cloud Analytics Academy,


New data centre white paper examines the global trend of 'hyperscale' data centres

As data becomes an organisation's most important asset and the world enters the era of 'hyperscale' data centres, a new white paper, The Future Data Centre, has been published. Commissioned by Enterprise Ireland, the Irish Government agency, and written by Nick Parfitt, senior global research analyst at the Data Centre Dynamics Group, the white paper analyses the evolving data centre landscape, examines trends in data centre procurement, design and construction, and explores some of the important questions emerging for the data centre market, both globally and in the UK.

The white paper forms part of an international campaign highlighting the advantages and capabilities of Irish companies to international partners and clients. Ireland's strength as a location for data centres has led to the development of a world-class cluster of companies with an unparalleled competency in data centre design, build and fit-out.

Nick Parfitt, the author of the white paper, said, "Data centres represent the foundation of the digitalised world. The processes of initial design and construction are key to maximising the opportunities and minimising the risks associated with data centre investment, as well as building in the technologies that will deliver on key requirements such as resilience, efficiency, scalability, flexibility and security. The quality of what is designed and built today will impact the future scope of the digitalised era." Enterprise Ireland,

ULTIMATE FLEXIBILITY FOR CRITICAL POWER NEEDS

Protectplus M400 – the latest addition to the AEG Power Solutions range of modular uninterruptible power supplies – is based around a 10 kVA power module and can be scaled up to 40 kVA. Up to four frames can be operated in parallel, achieving a power capacity of 160 kVA. Protectplus M400 has one of the lowest Total Cost of Ownership (TCO) factors in its class.

You can trust AEG PS to protect your mission critical business, IT and data center systems. Our UPS systems are world class and backed by engineering experience developed over the last 100 years. +44 1992 719200

DCW Frankfurt, 28 – 29 November, Booth 570 – see the newest additions to our range



Nimans and Greenlee inspire fibre knowledge boost

Nimans' Trafford Park, Manchester trade counter has hosted a successful focus day on the growing influence of fibre connectivity in the data infrastructure arena. Held in partnership with fibre specialist Greenlee Communications, the event saw industry experts on hand to demonstrate the latest equipment, including fusion splicers, OTDR meters, fibre inspection, fibre tooling, visual fault locators, light sources and USB power meters.

Greenlee Communications regional sales manager, Nicholas Coyle, said the day was a resounding success, as it reinforced the high-profile role fibre now plays in the data infrastructure sector as it rapidly becomes the connectivity method of choice.

The initiative attracted a steady stream of visitors, including a Worsley-based reseller who commented, "I've never dealt with fibre before but as an engineer it's definitely something we are looking to embrace as the whole market is moving down that road. An event of this nature is invaluable in boosting knowledge and opening our eyes about the type of products and options available. I've also been pleasantly surprised about how cost effective the Greenlee solutions are, whilst the local support of Nimans is second to none." Nimans,

Digital Realty tops Cloudscene’s Fast 50 most resilient data centre operators worldwide Cloudscene has revealed 2017’s Most Resilient Data Centre Operators in its latest Fast 50 announcement. Cloudscene analysed the resilience of 150 global data centre operators against Tier Certification data available through Uptime Institute, specifically in the areas of design, constructed facility and operational sustainability, to shortlist the top 50 service providers applying Cloudscene’s own weighted formula.

Using this methodology, Digital Realty secured the number one position with a small lead over second-placed Cyxtera. ViaWest, Metronode and Telefonica secured the third, fourth and fifth positions respectively.

"We are very pleased to have been recognised for the results we have achieved with regard to ensuring uptime for our customers," said Danny Lane, senior vice president of Global Operations at Digital Realty. "Resiliency is at the core of our business, and reflects the superiority of the locations we select and the data centres we construct. Most importantly, however, it depends on the people we bring in to ensure high quality of the design, maintenance and ongoing improvement of the facilities and their operations."

The top ten companies in 2017's Fast 50 Most Resilient Data Centre Operators were: Digital Realty; Cyxtera; ViaWest; Metronode; Telefonica; Ascenty; NEXTDC; Equinix; Fujitsu and KioNetworks. Cloudscene,


Reduce Operational Costs, Improve Capacity Utilization, and Lower Power Usage Effectiveness (PUE)* Driven by explosive data processing growth, Data Centre Managers face multiple, competing demands: reducing operational costs, improving energy efficiency, and optimizing available capacity, while sustaining a low total cost of ownership. To meet these demands while minimizing the risk to service levels, the available data centre space is often underutilized while being overprovisioned with excess power and cooling capacity regardless of actual IT equipment and space utilization. Today, a typical data centre consumes about 3-5kW per cabinet due to power and cooling concerns, while the available cabinet space can accommodate 15kW or more per cabinet if managed effectively.

As energy and construction costs continue to rise, over-provisioning and under-utilization are no longer sustainable. Energy costs related to cooling account for approximately 37% of the overall data centre power consumption and are one of the fastest rising data centre operational costs. RWL Advanced Solutions and Panduit offer a best of breed data centre infrastructure which includes energy efficient industry leading cabinet and connectivity solutions.
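PUE compares total facility power draw with the power that actually reaches IT equipment; a value of 1.0 would mean zero overhead. As a rough illustrative sketch of the arithmetic (the figures below are hypothetical, loosely inspired by the cooling share quoted above, and are not RWL's or Panduit's numbers):

```python
# Illustrative PUE (Power Usage Effectiveness) arithmetic.
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The example figures are hypothetical, not vendor data.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Ratio of total facility power to IT load."""
    return total_facility_kw / it_equipment_kw

# Example: a 100 kW facility draw of which 55 kW reaches IT equipment,
# with 37 kW going to cooling and the rest to distribution and lighting.
print(round(pue(100.0, 55.0), 2))  # prints 1.82
```

Lowering the cooling share directly lowers the ratio, which is why cooling is singled out as one of the fastest-rising operational costs.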

Visit data-center.html for further information

RWL Advanced Solutions UK 9 Devonshire Square, London, EC2M 4YF. +44 (0)207 084 6219

on the cover

Staying Power Metartec has launched the MEGA M – giving your data centre nine 9s of availability.


Metartec has expanded its offering by launching a new modular UPS system, the MEGA M, designed to provide a fully modular solution with the highest power availability of any UPS on the market to date. The MEGA M is designed predominantly for the data centre market and is available from 10kVA to 3000kVA/kW.

Using decentralised active redundant architecture (DARA™), the MEGA M reduces the potential for human error and downtime, providing greater confidence in supporting customers' loads in data centres. With DARA™ technology, the MEGA M responds to the most demanding requirements of the data centre market by increasing efficiency and MTBF and, most importantly, removing single points of failure, whilst still using proven designs and technology.

The MEGA M range delivers reduced total cost of ownership (TCO) with:

Reduced upfront investment
• Pay as you grow (modular flexibility)
• Hot-swappable UPS modules

Minimised operational costs
• 97% efficiency
• Reduced cooling costs
• Maximum efficiency management (MEM)

Minimised lifetime maintenance costs
• DC caps designed for 10 years
• Easy and fast AC caps and fan substitution


These features further enhance a fully modular solution and increase power availability to nine 9s, whilst retaining a versatile approach to configuration and scalability. The MEGA M range continues Metartec's commitment to high quality products and affordable cost of ownership, which is why there has been a focus on site modification and the examination of customers' power needs, both in the present and for the future.

Metartec's services include UPS, generators and switchgear, and over the last 10 years the company has become a specialist, trusted supplier to some of the largest companies, including the world's largest research-based pharmaceutical company. Metartec's aim is for each customer to have the ultimate experience from start to finish. Over many years Metartec has delivered high quality bespoke products, designed to meet and exceed the demands of an ever-changing critical power market. This flexible approach to project delivery, in line with a differentiated service offer, delivers customers an excellent technical solution and personalised support.

Metartec, T: (GB) 0845 504 0444 (ROI) 1 890 818 188 E:
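"Nine 9s" means an availability of 99.9999999%. As a rough illustration of what a figure like that implies (this arithmetic is ours, not Metartec's published methodology), the permitted downtime per year falls by a factor of ten with each extra nine:

```python
# Hedged sketch: downtime budget implied by "n nines" of availability.
# Availability of n nines = 1 - 10**-n, so the unavailable fraction
# of a year is simply 10**-n.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_seconds_per_year(nines: int) -> float:
    """Expected unavailable seconds per year at 99.9...% (n nines) uptime."""
    return SECONDS_PER_YEAR * 10 ** -nines

# Five nines is the classic "about five minutes a year";
# nine nines shrinks the budget to tens of milliseconds.
print(f"{downtime_seconds_per_year(9) * 1000:.1f} ms")  # prints "31.6 ms"
```

At that level, the downtime budget is smaller than a single mains cycle's worth of interruptions per year, which is why the claim rests on removing single points of failure rather than on any individual module's reliability.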


Meet the ultimate beast of power provision

DESIGNED TO DRIVE DATA CENTRES | UK 0845 50 40 444 | ROI - 1890 818188

centre of attention

Bridging The Gap Gary Quinn, CEO at FalconStor, gives us his insight into bridging the generation gap when it comes to storage.


Keeping up with new developments is vital in many spheres, and technology is no exception. If anything, innovation in the technology world is supercharged. For example, it's only 16 years since the iPod was launched. Now, despite the huge shocks and changes it wrought on our music listening and purchasing habits (and the incredible disruption it caused to the music industry), the MP3 player's time has come and gone. It's barely 10 years since the smartphone finally started to achieve its full potential with the launch of the iPhone, and just over seven since the iPad made tablets a reality – and both products helped take mobility, data creation and data access to a different level.


During those 16 years or so, baby boomers have had to learn to adapt to significant upheavals in technology and how it affects their lives. The process has been easier for Generation X, but even they have witnessed huge changes. By contrast, millennials have been frontline adopters of so many of these innovations because they have little, if any, memory of the technologies they replaced. The same is true for storage. Baby boomers have had to slowly accept that legacy storage is not up to the task in today's world as they witness major upheavals in how data is created, stored, accessed and protected. They have experienced the shift from monolithic physical IT infrastructures to mixed physical and virtual environments. Now,

they have to contend with the emergence of cloud and the prospect of a mixture of private and public cloud, on-premise and hybrid environments. For Generation X, the changes have been less profound as they adapt to the new waves of innovation, because they have been more involved in the development of modern technologies that aim to help organisations, not hinder them. As for millennials, they have taken it a step further by driving changes and advancements in technologies such as self-service and automation.

The challenge for IT administrators is to implement innovative solutions that can satisfy all three generations. From a storage point of view, this means being able to manage and protect mixed environments composed of physical and virtual assets and equipment from multiple vendors. In terms of data protection and recovery, technology needs to be optimised for lightning-fast recovery in heterogeneous environments and to deliver failover and failback between dissimilar hardware. Modern solutions should also enable organisations to be more flexible with their data storage and recovery. IT administrators need to adapt to a technology environment that has moved beyond the traditional approach to storage infrastructure and provide features that can intelligently move, store, protect and analyse data without burdening production resources.

In today's corporate environment, all three generations require their organisations to implement simple but powerful data mobility solutions that enable business agility and maximum uptime with minimum effort. To satisfy those requirements, administrators need to keep

IT services up and running by moving, storing and protecting vital business information while enabling data movement and multiple uses of stored data. Storage virtualisation, data protection and recovery solutions have to be flexible and scalable; they need to be adaptable to any size of data centre, using any combination of physical and virtual servers and a wide variety of storage hardware.

The business landscape consists of a mixture of baby boomers, Generation X and millennials, and the technology that underpins it reflects that. Different generations of technology innovation are mixed and blended together to deliver the best solution for business. IT's job is to ensure those different technologies can store, protect, recover and deliver the data that is the lifeblood of any organisation, to the satisfaction of everyone involved. Of course, there will always be a gap of some description between the generations, but data shouldn't be part of it.

“The challenge for IT administrators is to implement innovative solutions that can satisfy all three generations.”


meet me room

Eric Schwartz – Equinix

Eric Schwartz joined Equinix in 2006 and has held various senior-level management roles at the company. He spearheaded Equinix's expansion into Europe, including the 2007 acquisition of IXEurope and the ongoing integration of Equinix's European operations into its global business. As president of Equinix Europe, Eric oversees the management, strategy and growth of Equinix in Europe.

What are the biggest changes you have seen in the data centre industry?
In less than 20 years, the data centre industry has gone from comprising dozens of niche providers to being dominated by a small number of major global players. The dynamics of the relationship between data centre providers and their customers have also changed significantly. Enterprises are looking for far more than data hosting – they want relationships with a partner that gives them the edge over their competitors.

The industry culture has also changed. Various support elements that didn't exist 10 years ago – specialised financiers, recruiters and engineers – are now part of the landscape. We even have publications and magazines like DCN dedicated solely to data centres! These resources have helped the industry's players take on large-scale projects, while also bringing the sector, and the positive impact it makes, to the general public's attention. Just 10 years ago, government was paying very little attention to the industry – now we see governments actively courting data centre developments and installations because they understand just how much they contribute to the digital economy.

What is the main motivation in the work that you do?
There are two things for me. The first is the strategic challenge of building a business in a rapidly changing digital environment. With so many factors affecting us – from constant regulatory changes to technological innovation leading to increased data consumption – helping to build one of the most successful companies in the industry is a huge accomplishment that gives me a great sense of pride. The second is building a team of committed and energised people who are excited about what they're doing. We've done a great job in this respect so far, but


it can become more challenging as a business grows – it's my job to make sure we don't lose that excited, entrepreneurial spirit, no matter how large we get.

In addition to earning a living, how else has your career created value in your life?
Not only has Equinix given me a sense of purpose and a huge drive to succeed, it has broadened my horizons and given me the opportunity to live and work in Europe. One of my favourite things to do in my spare time is studying anthropology and history, and I find the cultural complexity and historical richness of Europe fascinating. Living in The Netherlands has given me the opportunity to experience life in a different part of the world and to learn from the many different cultures across Europe. It's been tough at times, being so far from home – but living here has been an extraordinary experience that has changed my perspective on the world.

What part of your job do you find the most challenging?
The sheer speed of change is always challenging; there's so much happening at such a fast pace. There is perhaps no other industry that's so young, yet plays such a huge role in the global economy, as the data centre industry. As president of EMEA, it's my job to provide a sense of direction to the company's operations in the region. With so many angles to consider and so many factors at play, it can be challenging to filter out the noise and focus on what's most important – but it's also what makes the job so exciting!

What are the biggest pressures involved in your job?
One of the biggest pressures is staying connected with everyone as the company gets even larger. Equinix, like any company, is built on personal relationships, people working together, and being committed to a common goal. As the company expands, it becomes more and more important for me to stay connected with everyone across the business and make sure we remain aligned and focused on the job at hand.

What is the toughest lesson you have ever been taught in your career?
I've always been the sort of person who wants to do everything myself, so one of the hardest lessons through my career has been learning to let go. But you soon realise that you can achieve so much more as a team than as an individual – and to be honest, when you have a team around you as talented, ambitious and experienced as the one we have here, it's something you learn very quickly.

What gives you the greatest sense of achievement? When I inspire someone to pursue a new idea. There’s nothing like the feeling when a member of the team comes to me with a new idea that they’ve thought of as a result of my work with them, then gone away and figured out how to implement it. It’s those moments that really get you out of bed in the morning. If you could possess one superhuman power, what would it be and why? Definitely the ability to function without sleep! I’ve pulled many an all-nighter in my time, but it would be so much better to not have to sleep at all. Imagine all the things you could get done!

“There is perhaps no other industry that’s so young, yet plays such a huge role in the global economy, as the data centre industry.”

Image courtesy of Ken Ratcliff

The east coast of Georgia, one of Eric’s favourite holiday destinations

What’s your biggest pet peeve? People with poor presentation skills! You spend so much of your time in meetings and presentations in business – listening to people reading verbatim off a 60-slide deck really does drive me crazy. I feel like everyone should have proper presentation training and be able to deliver a presentation that engages their audience. If you’re going to ask people to pay attention to you for any considerable period of time, you need to keep them engaged. Where is your favourite holiday destination and why? I may have moved across the pond, but I have to say that one of my favourite holiday spots is still the southern United States, specifically the east coast of Georgia. There’s a group of barrier islands right by Savannah, which have the most beautiful beaches and marshes. Savannah’s a charming town that’s full of history and southern culture, but the islands are the perfect destination to get away from the hustle and bustle of the city. Savannah is also (supposedly) one of the most haunted cities in the U.S., which just adds to the excitement! December 2017 | 17


Are You Available? Michael Brooks, managing director at MPower UPS, walks us through the benefits of a modular UPS system and the steps you can take to ensure your system’s availability and reliability.


Data centres purchase uninterruptible power supply (UPS) systems because they have a critical load that requires clean, continuous power. Although reliability is a key factor in power protection system design, the main purpose of the UPS is to maintain availability. Power protection systems must be available every second of every day, therefore maximising system availability must be the overriding objective of any system installation. Various elements of system design can affect availability, including the technology and configuration employed. Whilst transformer-less technology has become mainstream today, the


market is increasingly moving towards modular UPS design. Mid-range three-phase modular systems are the fastest-growing segment of the market. This is because properly configured modular systems simultaneously maximise load availability and system efficiency. In the future, we anticipate flexible, modular systems will increasingly replace traditional stand-alone and parallel systems, due to the drive for high availability, fast repair and commonality of parts, as well as reduced system footprint. The fact is that a UPS system can be extremely reliable, but when a fault eventually does occur, the system can fail completely and lose load power or transfer to

bypass, leaving the critical load vulnerable on raw mains. A simple power cut could then compromise availability, leaving the data centre without critical power.

A different approach Modular systems offer an alternative approach. They have a single frame, containing a number (N) of power modules, all operating together and sharing the load equally between all modules. By utilising a true N+1 configuration, a failure in one module simply results in that module being isolated, leaving the remaining modules supporting the load and maintaining the all-important availability. In other words:


customers are not affected, staff can keep working. We have recently introduced the new Centiel CumulusPower UPS, incorporating Distributed Active Redundant Architecture which provides a vast improvement over previous system designs. Each module contains all the power elements of a UPS – rectifier, inverter, static switch, display – and critically – all control and monitoring circuitry, unlike other current designs that have a separate, single static switch assembly and separate intelligence modules. The single, separate static switch module as used in some of the most common modular systems, is of most concern as all load power must pass through it, whether the system is on inverter or on static bypass – it becomes a single point of failure. A further issue with some existing modular designs is that the synchronisation, current sharing and control communication between the different power modules, intelligence modules and static switch modules are at risk of disruption by a failure in any one of many components within the communication loop.
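The availability claim behind a true N+1 configuration can be illustrated with a simple binomial model. This is an idealised sketch, not from the article: it assumes module failures are independent, ignores the static switch and control paths discussed above, and the per-module availability figure is invented purely for illustration.

```python
from math import comb

def redundant_availability(n_required: int, n_installed: int, module_avail: float) -> float:
    """Probability that at least n_required of n_installed independent
    modules are healthy (simple binomial model)."""
    return sum(
        comb(n_installed, k) * module_avail**k * (1 - module_avail)**(n_installed - k)
        for k in range(n_required, n_installed + 1)
    )

module_avail = 0.999                  # illustrative per-module availability
standalone = module_avail             # a single monolithic UPS: one fault drops the load
n_plus_1 = redundant_availability(4, 5, module_avail)  # four modules carry the load, five installed

print(f"Stand-alone UPS: {standalone:.6f}")
print(f"N+1 (4+1):       {n_plus_1:.6f}")
```

Even with these toy numbers, the N+1 arrangement survives any single module failure, which is why its load availability comes out far closer to 1 than the stand-alone case.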

Maintenance Regardless of the UPS purchased, the system must always be in peak operational condition. Because all systems contain both electrical and mechanical components which degrade over time, it is essential that they receive routine preventative maintenance inspections. Some components (e.g. cooling fans, capacitors and batteries) have a finite working life and will require proactive replacement if a UPS system’s availability and reliability are to be maximised. For example, if sites do not have regular maintenance checks, capacitor failure can be

a very real risk. In years gone by, you would see a capacitor fail catastrophically a few times a year. I’m pleased to say we have all but eradicated this amongst our customers, as we encourage changing capacitors every five to eight years (depending on manufacturer and site conditions). Customers often ask why we are so keen to do this. I recently conducted a controlled experiment in a safe, cordoned-off outdoor space to show what happened as we slowly applied a DC voltage to an old capacitor. In this way, we were able to replicate what happens when a system is starting up. At a value well below the normal operating voltage, the results were quite dramatic. Not something you want to happen on a customer site. The good news is that when it comes to modular systems such as CumulusPower, maintenance is made much simpler. A module can easily be removed from the UPS frame, while leaving the remainder to support the load. This not only eliminates the risk to the critical load of being on raw mains, but also eliminates the risk of human error while carrying out any switching procedures between UPS and external bypass.

“Power protection systems must be available every second of every day.”

In our fast-moving world, future-proofing systems is one of the greatest challenges faced by systems designers. A further advantage of modular UPS systems is that they can be quickly and easily reconfigured to adapt to changes in load requirements over time. This not only ensures the highest efficiency is maintained, but more importantly it guarantees availability of power protection whatever the future holds.
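The five-to-eight-year capacitor interval discussed above lends itself to a simple proactive-replacement check. The component names, service lives and dates below are illustrative assumptions, not figures from MPower:

```python
from datetime import date

# Illustrative service lives in years; real intervals depend on the
# manufacturer and site conditions, as the article notes.
SERVICE_LIFE_YEARS = {"capacitor": 5, "cooling_fan": 7, "battery": 4}

def due_for_replacement(component: str, installed: date, today: date) -> bool:
    """True once a wear component has exceeded its proactive-replacement interval."""
    age_years = (today - installed).days / 365.25
    return age_years >= SERVICE_LIFE_YEARS[component]

fleet = [("capacitor", date(2011, 6, 1)), ("cooling_fan", date(2014, 3, 15))]
today = date(2017, 12, 1)
for name, installed in fleet:
    if due_for_replacement(name, installed, today):
        print(f"{name} installed {installed} is due for proactive replacement")
```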

Various elements of system design can affect availability, including the technology and configuration employed.



Haven’t got the energy? With the introduction of GDPR looming, much of the discussion has been about data breaches and the subsequent punishments for businesses. However, what’s not often considered are other critical factors of data maintenance and protection such as UPS and energy storage. Dr. Alex Mardapittas, CEO of Powerstar, explains.


With increasing amounts of data, and an additional focus on data protection and preventing data breaches, security is becoming an ever more important consideration for data centre operators. Under the GDPR, companies could face huge fines of €20 million or 4% of global turnover, whichever is the greater, should any form of information breach occur. The legislation will also remain in place when Britain leaves the European Union. Whilst the spotlight of such breaches is often focused on security, there is another critical threat to data centres – power. According to a recent study from Emerson and Ponemon Institute, the average cost of an unplanned data centre outage – including equipment repair or replacement,


IT and end-user productivity loss, third parties (such as consultants), lost revenues and the overall disruption to core business processes – steadily increased from £384,000 in 2010 to £562,000 in 2015. On average, each incident cost nearly £6,800 per minute. Perhaps more concerning for data centre operators, the scale and regularity of supply issues are increasing, as is evident from recent news coverage. A recent major global computer system failure triggered by a power failure, for instance, caused cancellations and delays for thousands of passengers of a leading airline – whilst also putting critical data at risk. With data centre energy consumption set to rise dramatically over the next decade – with experts predicting that such facilities will require almost three times as much energy

over that period – companies simply cannot run the risk of any unnecessary downtime.
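Taken together, the study’s figures imply both how long a typical outage lasts and how fast the per-incident cost has grown. A quick back-of-the-envelope check, using only the numbers quoted above:

```python
# Figures quoted above from the Emerson / Ponemon Institute study
cost_2010 = 384_000        # average cost per unplanned outage (GBP, 2010)
cost_2015 = 562_000        # average cost per unplanned outage (GBP, 2015)
cost_per_minute = 6_800    # average cost per minute of downtime (GBP)

# Implied average outage duration in 2015
avg_minutes = cost_2015 / cost_per_minute
print(f"Implied average outage length: {avg_minutes:.0f} minutes")

# Compound annual growth of the per-incident cost, 2010-2015
cagr = (cost_2015 / cost_2010) ** (1 / 5) - 1
print(f"Cost growth per year: {cagr:.1%}")
```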

A powerful solution It has, therefore, become increasingly clear that data centres must have measures in place to secure their facilities. A bespoke-engineered smart solution that can supply a stable level of electricity whilst reducing electricity consumption and CO2 emissions will effectively mitigate any risk and ensure a sustainable future for businesses. To avoid power quality issues such as blackouts, brownouts, voltage spikes and dips, critical facilities such as data centres must also have secure, responsive and reliable backup power or Uninterruptible Power Supply (UPS) functionality.


Whilst serving a critical purpose for data centres when outages strike, traditional technologies such as combined heat and power (CHP) units or generators are ageing systems in a modern connected world. Even though they offer sufficient backup power, such technologies are failing to meet the requirements of the digital era. In a world where messages can be transferred almost instantaneously, CHP units and generators cannot provide power to a facility such as a data centre instantly when required.

A new solution A more modern solution for providing UPS has come to the forefront – battery-based energy storage technology. Already in operation in many data centres around the world, the technology provides a host of UPS benefits. In essence, the solution is an up-scaled version of what you would find in any rechargeable electrical device, with the charge coming from the National Grid or, ideally, renewable sources. It can then discharge energy to a facility almost immediately when required, providing a vital electricity supply for a period of up to two hours. For data centre managers, the benefits of energy storage are abundant. By continuously measuring and monitoring the electrical supply to a facility, the integrated solution recognises when support is required, such as when blackouts or brownouts occur, and provides additional energy to ensure no outage occurs. This is particularly important for data centre sites that require a constant energy supply to critical equipment, such as computer systems, servers and storage subsystems.

“The cost of the average unplanned data centre outage is almost £6,800 per minute.”

The complexity and technological demands of the modern-day data centre require a fresh approach to UPS, especially given the increasing use of connected Internet of Things (IoT) IT equipment and the rise in the volume of data stored. Energy storage is making significant steps to respond to these industry requirements by providing a ‘future-proof’ solution that can be commissioned and engineered for any data centre. The flexibility of the technology ensures it can meet the fluctuating needs of any site, with scale increased by additional batteries if required. Energy storage is not simply a scalable UPS solution; the technology also has further energy-saving benefits, which have been realised by a host of high-energy-use sites, negating rising energy costs and making grid contracts readily available for users.
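The “continuously measuring and monitoring” behaviour described above amounts to a simple source-selection rule. The sketch below is a hypothetical illustration; the voltage thresholds and reserve margin are assumptions, not Powerstar parameters:

```python
NOMINAL_V = 230.0        # assumed nominal supply voltage
SAG_THRESHOLD = 0.90     # assumed brownout trigger: 10% below nominal

def choose_source(mains_voltage: float, battery_charge: float) -> str:
    """Pick the supply for the critical load: stay on mains while it is
    healthy, discharge the battery during a sag or blackout."""
    if mains_voltage >= NOMINAL_V * SAG_THRESHOLD:
        return "mains"
    if battery_charge > 0.05:    # keep a small reserve margin
        return "battery"
    return "alarm"               # no healthy source left

print(choose_source(231.0, 0.80))   # healthy mains
print(choose_source(195.0, 0.80))   # brownout, so discharge the battery
```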

DUoS and Triad tariffs An upsurge in the National Grid’s DUoS (Distribution Use of System) and Triad tariffs has proven a major contributor to the increasing energy storage requirements of data centres. Businesses are rightly concerned about consuming energy at periods of high demand throughout the day, given that such tariffs can potentially account for up to 24% of a company’s energy bill. In the past, faced with such charges, some sites simply switched off energy use during these peak times. However, it is not always possible to take such a dramatic step, especially in today’s

technologically advanced world, and particularly within the data centre industry, where power is required to protect critical data. By using energy storage technology, data centres can instead switch to a cheaper stored energy supply when necessary. A similar process can also be carried out to counter Triad charges. Even though Triad periods are not known in advance, unlike DUoS, the large amount of historical data available, together with other techniques, makes it possible to accurately predict the events and utilise stored energy during intervals of highest demand. Using energy storage, data centres can also take part in Demand Side Response (DSR) contracts, in which the National Grid pays end-users for reducing electricity demand, alongside providing support to the grid. As storage technology is connected to the National Grid, batteries can provide instant discharge when the grid requires support. As a result, a bespoke-engineered solution will ensure businesses successfully respond to the majority of DSR events. As data centres become increasingly important, it is vital that these facilities have bespoke-engineered energy solutions that can both provide UPS functionality and assist businesses in avoiding ever-rising energy costs. Battery-based energy storage technology provides a broad range of benefits to data centres, including scalability, future security and a host of financial benefits.
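The DUoS avoidance strategy described above can be sketched as a tiny scheduling rule: serve the load from stored energy during the peak band, and from the grid otherwise. The band hours and charge threshold below are hypothetical, not actual DUoS values:

```python
RED_BAND = range(16, 19)   # hypothetical 16:00-19:00 weekday peak band

def supply_for_hour(hour: int, battery_charge: float) -> str:
    """Serve red-band demand from the battery when charge allows;
    otherwise (and outside the peak band) draw from the grid."""
    if hour in RED_BAND and battery_charge > 0.2:
        return "battery"
    return "grid"

# Illustrative afternoon-to-evening schedule with a well-charged battery
schedule = [(h, supply_for_hour(h, battery_charge=0.9)) for h in range(14, 20)]
print(schedule)
```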

For data centre managers, the benefits of energy storage are abundant.


next issue

storage, servers & hardware


Next Time… As well as its regular range of features and news items, the January issue of Data Centre News will contain major features on storage, servers and hardware, and GDPR. To make sure you don’t miss the opportunity to advertise your products to this exclusive readership, call Kelly on 01634 673163 or email

data centre news



Battery Power Can lithium-ion batteries support your Uninterruptible Power Supply? Vertiv’s Giovanni Zanei, director of product marketing, AC Power, EMEA, is here to answer that very question.


Traditionally, the most common battery type for Uninterruptible Power Supply (UPS) systems has been the standard Valve-Regulated Lead-Acid (VRLA) solution. For some time VRLA has been the go-to solution in the industry, largely because it is easily adaptable to many applications for both technical and cost reasons. Until recently, the adoption of lithium-ion batteries in the UPS market was restricted to very specific installations due to their high price. But today, the price of lithium-ion, compared to other battery solutions, has fallen significantly, making it an increasingly popular option for data centre operators.

What sets lithium-ion batteries apart from VRLA types? Essentially, lithium-ion batteries are fast-recharging, low-maintenance batteries which have become increasingly popular in the last few years. Compared with other batteries available on the market today, lithium-ion batteries have a number of advantages. Firstly, they are 70% smaller and considerably lighter than their VRLA counterparts, which means more space can be allocated to servers. Lithium-ion also has a longer battery life, which makes it an attractive solution to adopt over VRLA. These batteries have a higher number of charge and discharge cycles, and, depending on chemistry and usage,

the product lifespan of lithium-ion batteries can be up to three times longer than VRLA. But that’s not all. Lithium-ion has the added bonus of being able to recharge quicker and hold its charge for longer than VRLA solutions. These advantages, coupled with now-competitive pricing, have drawn businesses towards its adoption.

Making its mark in consumer products While the benefits of lithium-ion batteries in the UPS market have only recently been recognised, lithium-ion is not a new technology. Over the last decade, the technology has been a widely used solution in


the consumer electronics market, particularly within mobile phones, tablets and laptops. The batteries offer a compact, lightweight, long-lasting energy storage solution – all of which has contributed to their widespread adoption in mobile devices. In very recent years, the same characteristics have expanded the application of the technology to electric cars and aviation.

Lithium-ion in the critical infrastructure industries Looking at the critical infrastructure industries, telecoms and renewable energy are the markets using lithium-ion batteries the most. In these sectors batteries are often used to protect critical equipment from power loss or power instability, as they can provide an instant power supply to whatever device they are connected to.

Lithium-ion batteries are available for new builds, for replacing existing battery types, or in modular constructions. They can significantly reduce the footprint of the battery cabinets and, in some cases, enable in-row battery storage. To give an example, among our most recent projects in EMEA, we configured two 800kVA Liebert Trinergy Cube UPS units with eight strings of lithium-ion batteries. The system, designed for a leading global fashion company based in Europe, can be easily scaled to 2.4MW to keep up with growing data centre needs.

What to consider when making the switch When looking to implement a new battery solution, cost is invariably a consideration. However, the industry should factor in Total Cost of Ownership (TCO), not just Capital Expenditure (CAPEX).

“Over a ten-year life span, lithium-ion batteries allow up to 40% savings in OPEX compared to VRLA ones.”

While the initial CAPEX is higher than for VRLA batteries, the reduced costs of running and maintaining lithium-ion batteries make them an attractive option for IT managers looking to install future-proofed infrastructure. In fact, over a ten-year life span, lithium-ion batteries allow up to 40% savings in Operating Expenditure (OPEX) compared to VRLA ones. In addition, the requirements of each data centre will impact the decision-making process. As lithium-ion batteries have a higher power density, they occupy much less space than their lead counterparts. Thus, the reduced footprint and mass mean the technology can suit buildings with multiple floors, and mitigate space or weight capacity issues. Finally, the decision to deploy any new battery type should involve a conversation with your supplier. Our recommendation to customers is to discuss application requirements, safety concerns and regular maintenance with their providers, and ensure that expertise is delivered through routine servicing. Above all, if you are thinking of implementing lithium-ion – or any new battery type – always seek expert advice from suppliers or manufacturers. Different solutions will work for different requirements, and when it comes to choosing the right battery for your UPS system, there is no ‘one size fits all’ policy. Traditional VRLA types will not become obsolete in the short term, but the lower TCO of lithium-ion cannot be ignored. At Vertiv, we offer new lithium-ion energy storage solutions for several UPS systems across the globe. While standard batteries are still widely deployed, the benefits of lithium-ion are so great that we expect more businesses to take it up.
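The CAPEX-versus-OPEX trade-off described above is easy to make concrete. In the sketch below, only the 40% OPEX saving comes from the article; the capital and running costs are invented purely for illustration:

```python
def tco(capex: float, annual_opex: float, years: int = 10) -> float:
    """Simple undiscounted total cost of ownership over a given life span."""
    return capex + annual_opex * years

vrla_tco = tco(capex=50_000, annual_opex=10_000)        # illustrative VRLA costs
li_tco = tco(capex=80_000, annual_opex=10_000 * 0.60)   # higher CAPEX, 40% lower OPEX

print(f"VRLA 10-year TCO:        £{vrla_tco:,.0f}")
print(f"Lithium-ion 10-year TCO: £{li_tco:,.0f}")
```

With these made-up inputs the higher lithium-ion CAPEX is recovered well inside the ten-year window, which is the shape of the argument the TCO comparison rests on.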



hybrid cloud

Buy or Build? To buy or to build? That is the question. Mark Baker, field product manager at Canonical, is here to help un-muddy the waters.


Businesses today are constantly trying to keep the innovation tap flowing. Factors such as rapidly changing competitive landscapes and developing consumer demands mean those firms that fail to innovate are likely to fall behind. To stop this from happening, businesses in all industries are turning to cloud computing technologies, and most are now realising that a hybrid – or multi-cloud – approach is the best way forward.


A hybrid cloud strategy effectively gives organisations the ability to rapidly develop and launch new applications, adopt agile ways of working and run workloads in whichever way best meets their specific needs. For example, workloads that aren’t business-critical can be deployed externally, while those that hold sensitive customer data can continue to be run in-house in order to meet compliance requirements – an issue especially relevant for organisations in highly regulated sectors such as

telecommunications, healthcare and financial services. A hybrid cloud strategy also gives businesses the option to move workloads between environments based on cost or capacity requirements and application portability means they are not locked into one platform. However, not all firms are embracing third party providers. The desire to innovate as fast as possible means that, when it comes to the decision of going with a public and private cloud, many businesses are building


when they should be buying. But is that the right move?

Navigating cloudy skies There is little doubt that a hybrid approach will be the best bet for most organisations, but the right mix of public and on-premises cloud infrastructure depends on your business requirements. As previously mentioned, if a business is heavily bound by compliance and regulatory constraints – and if it has the capacity to manage and maintain its own data centres – then a lean towards private cloud is probably the right option. However, for the majority of organisations, the advantages offered by public cloud providers are simply too good to ignore. For example, many businesses tout the cost effectiveness and flexibility benefits that public cloud provides, while others want to be able to access new services, so are keen to tap into the fast pace of innovation that public cloud providers have become known for. Outsourcing certain workloads also means organisations don’t have to deal with technical issues that may arise with the underlying platform, instead handing off that responsibility to a team of experts and freeing up time for their own developers to focus on other things. Whilst the benefits of public cloud are clear, many businesses that still need private infrastructure are choosing to build rather than buy from the experts. A common misconception is that just because they can build, they should build. But building infrastructure that works and building infrastructure that really makes a difference to the business are two entirely different things. Developing, managing and operating bespoke cloud infrastructure is hugely complex, and a substantial proportion

of organisations simply don’t possess the technical skills to be able to achieve their lofty aims by themselves. Furthermore, in some cases it’s worth considering what you’re asking your developers to spend their time on. If you’re buying your cloud from someone else, the developers have the time to innovate higher up the stack and potentially make more meaningful contributions to the business. In comparison, if your developers are tasked with the main build they will likely spend most of their time on the base layers, and have less opportunity to add value by building new tools and services. This level of focus is unlikely to be attractive to potential hires. Experienced developers are not easy to come by and will be more attracted to companies that offer them opportunities to be creative and experiment with new services. Speed also needs to be a consideration. Public cloud providers are known for innovating rapidly and rolling out new features on a regular basis, a characteristic which is extremely hard to replicate to the same level of quality with an in-house developed platform.

A common misconception is that just because you can build, you should build.

“When it comes to the decision of going with a public and private cloud, many businesses are building when they should be buying.”

In or out? So, what’s the answer? Should businesses be looking to leverage the innovation and speed-to-market benefits offered by public cloud providers, or should they aim to keep things in-house and maintain control over their workloads? Unfortunately, the simple truth is that there is no one-size-fits-all approach. Organisations should consider what is the best option for them, depending on factors such as the type of data they hold and their performance needs. But one thing we can be certain of is that attempting to build cloud infrastructure without the appropriate levels of planning, investment and – most importantly – expertise will cause more harm than good in the long run. If an organisation can’t meet the demands, it makes more sense to outsource the job to third parties that have done this many times before using standardised tools and technologies. Very few businesses actually need to build and operate their own cloud infrastructure. Public cloud providers, and vendors of specialist on-premises cloud platforms, can help to ease the strain, providing the skills, experience, security and innovation to help businesses flourish.


Well-Aged Nick Claxson, managing director at Comtec Enterprises, explains how DCIM has matured, why we need it and why it will always maintain its value.


Like a bottle of fine wine, DCIM (Data Centre Infrastructure Management) has matured as a solution. And as successive evolutions of this technology have offered increased functionality, control and deployment flexibility, data centre managers have become progressively more sophisticated in how they leverage its value to greatest effect. The basic requirement for DCIM is an automated, intelligent means of keeping a watchful eye over a complex, sensitive data centre estate. The main benefits can be summarised as:


• Greater data centre availability, as risks to uptime are anticipated and flagged.
• Better energy efficiency, translating directly into minimised running costs and environmental impact.
• A centralised point of control for managing planned or unplanned equipment changes, cutting admin waste and freeing up more time for the data centre team to spend on strategic initiatives.
• Feeding in to better business decision-making via real-time, visualised data about the status of the data centre environment and its changing impact on resources.

But while the top-line benefits of DCIM are broadly understood, the motivations for implementing a DCIM strategy vary enormously. Greenfield data centres sporting the latest designs and technologies invariably come specced with DCIM as standard, which prevents data centre managers from having to put too much thought into the business case for using it. However, in other more mature scenarios, DC pros are seeking compelling reasons to either implement DCIM for the first time or – crucially – upgrade their legacy DCIM solutions to the latest version.


Multiple business drivers

CSR targets

Organisations adopt DCIM for a range of reasons; many of which reflect the unique circumstances of the data centre in question. However, in our experience, the following change events have appeared commonly in the reasons behind DCIM investment during 2017.

CSR (Corporate Social Responsibility) was a major issue in data centre thinking 10 years ago, when much of the industry chatter related to ‘green’ technology and the importance of minimising carbon footprints. Contrary to belief in some quarters, these issues have not gone away in this age of social media where an organisation’s reputation can be seriously tarnished at the mere suggestion of failing to safeguard the environment and reduce the harmful impact of energy waste. DCIM is central to not only achieving these objectives, but also in being able to prove it. Once again, the value of deploying DCIM is only partly in achieving an initial set of accomplishments. It makes its greatest difference when targeting the objective of continual improvement.

Cost reduction plans The classic decision criterion for DCIM is its value as part of wider cost reduction efforts that impact the IT and/or facilities department. Energy costs eat up a sizeable chunk of the data centre budget, so DCIM is an attractive means for CFOs and FDs to leverage marginal efficiency enhancements that may net many thousands of pounds a year in savings. This feeds into the well-trodden path of seeking PUE excellence; something that DCIM is key to facilitating. DC managers are cottoning on to the reality that PUE assessments shouldn’t just be done regularly, but on a continual basis. That’s where DCIM comes into its own.
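PUE (Power Usage Effectiveness) itself is a simple ratio, which is what makes continual rather than occasional assessment practical. A minimal sketch, with invented meter readings:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT
    equipment power. 1.0 is the theoretical ideal."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Continual assessment: compute PUE for each metering interval rather
# than as a one-off audit figure (readings are illustrative kW values)
samples = [(1500.0, 900.0), (1450.0, 910.0), (1600.0, 890.0)]
for total_kw, it_kw in samples:
    print(f"PUE: {pue(total_kw, it_kw):.2f}")
```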

Strategic risk evaluations Board-level business leaders are increasingly switched on about the importance of their data centre assets, and how much disruption could be caused if something bad happened to them. DCIM represents a convenient way of understanding and mitigating data centre-related risks, in the context of a fast-changing organisation. The other useful attribute of modern DCIM solutions is the active (rather than passive) role played in monitoring and finetuning key environmental metrics such as temperature and moisture, and the management of fire safety and physical security systems. Organisations without DCIM can’t manage these risks efficiently at any meaningful scale.

“DCIM represents a convenient way of understanding and mitigating data centre-related risks.”

Digital business transformation More organisations are pursuing a digital agenda; transforming

their internal and customer-facing processes to become more responsive and relevant to market needs. This manifests itself within the data centre as significantly more dense, virtualised and software-driven infrastructure, based on open standards and geared to immense gigabit speeds. The role of DCIM is to minimise the complexity of managing such a dynamic and organic data centre architecture, particularly with regard to ensuring maximum uptime and planning for sufficient on-demand capacity without disrupting services or attracting significant additional cost.

Governance and compliance needs Compliance and regulatory frameworks, especially within

highly regulated sectors such as banking and the public sector, increasingly require data management safeguards that specifically call upon aspects such as data centre resilience. DCIM has an important role to play in achieving compliance as part of an overall governance structure, giving complete visibility of management across the layers of IT, from applications and data at the top through to network-critical physical infrastructure at the bottom. Crucially, DCIM enables organisations to transparently demonstrate the effectiveness of their governance controls related to the data centre.

Greater business agility and performance

We are seeing increased interest among digitally-savvy organisations in wielding their data centre assets as a competitive weapon. By optimising data centre performance metrics, businesses can increase their agility and responsiveness, making them better equipped than their peers to take advantage of new market opportunities and answer competitive threats. DCIM forms an important part of this approach, highlighting where improvements can be made while ensuring unplanned downtime is kept to zero.

Conclusion

DCIM is adding value to more data centres as IT professionals gain a greater understanding of how to apply its intelligence and automation to business goals. Whether part of a new data centre build or expansion project, or a response to other business drivers, DCIM’s ability to deliver tangible improvements in availability, operating efficiency and management accountability is crucial to building a compelling business case.

December 2017 | 29

Disaster recovery

Prevention Over Cure

Are availability zones a feasible disaster recovery solution? Richard Stinton, enterprise solutions architect at iland, gives us his insight.


I recently read an article which began, “You can’t predict a disaster, but you can be prepared for one!” It got me thinking: I can hardly remember a time when disaster recovery was a bigger challenge for infrastructure managers than it is today. In fact, with ever-increasing threats to IT systems, a reliable disaster recovery strategy is now absolutely essential for any organisation, regardless of its vertical market. What does all this have to do with availability zones, I hear you cry? Furthermore, what is an availability zone, and is it a good


disaster recovery strategy? The purpose of availability zones is to provide better availability while protecting against failure of the underlying platform (the hypervisor, physical server, network, and storage). They give customers more options in the event of a localised data centre fault. Availability zones can also allow customers to use cloud services in two regions simultaneously if these regions are in the same geographic area. Let us begin our discussion about availability zones by looking at the core capabilities that

provide availability and resilience. A Distributed Resource Scheduler (DRS) handles Virtual Machine (VM) placement; that is, deciding which host should run a given VM. A DRS also moves VMs around a cluster based on usage in order to balance out the cluster. High Availability (HA) provides the capability to restart VMs on other hosts in a cluster when either a host fails or a VM crashes for any reason. Now, let us look at the advantages that availability zones offer, as well as areas where they may fall short of constituting an effective disaster recovery strategy.
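The split between placement and restart can be sketched in a few lines of Python. This is a toy model for illustration only, not VMware’s implementation; the class and function names are invented:

```python
# Simplified model of cluster availability mechanics (illustrative only).

class VM:
    def __init__(self, name, demand):
        self.name = name
        self.demand = demand          # resource units the VM needs

class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # resource units the host can supply
        self.vms = []
        self.alive = True

    def load(self):
        return sum(vm.demand for vm in self.vms)

def drs_place(hosts, vm):
    """DRS-style initial placement: run the VM on the least-loaded live host."""
    candidates = [h for h in hosts
                  if h.alive and h.capacity - h.load() >= vm.demand]
    target = min(candidates, key=lambda h: h.load())
    target.vms.append(vm)
    return target

def ha_restart(hosts, failed_host):
    """HA-style recovery: restart a failed host's VMs elsewhere.
    Note this is a brief outage and restart, not a live migration."""
    failed_host.alive = False
    orphans, failed_host.vms = failed_host.vms, []
    return [drs_place(hosts, vm) for vm in orphans]

# A host failure triggers HA restarts; DRS-style placement decides
# where each orphaned VM lands.
hosts = [Host("h1", 10), Host("h2", 10)]
drs_place(hosts, VM("web", 4))        # lands on the emptier host
drs_place(hosts, VM("db", 3))
ha_restart(hosts, hosts[0])           # h1 dies; its VMs restart on h2
```

The key distinction the article draws is visible here: `ha_restart` recovers VMs only by restarting them, whereas moving a running VM with no loss of service would require a live-migration capability that many providers do not offer.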


This analysis of availability zone effectiveness will be divided based on three key challenges that cloud providers face: Handling crashes or downtime, performing maintenance, and offering sufficient storage.

Crashes or downtime

It is not unusual for a cloud provider to offer only HA and not DRS. In this case, in the event of a host hypervisor crash or deliberate shutdown, VMs are restarted on other hosts because they have shared storage. This is done using an initial placement calculation. However, providers often do not have the ability to move a running VM between hosts in a cluster with no loss of service, and incorporating such a DRS capability would strengthen disaster recovery preparedness.

Maintenance

There is also a problem with this model around planned maintenance. When hosts are updated, it is not possible to move the VMs that are running on them without loss of service. Therefore, VMs occasionally have the rug pulled out from underneath them. With this in mind, many service providers talk about a ‘Design for Failure’ model when designing resilient services. In a nutshell, this means designing cloud infrastructure on the premise that parts of it will inevitably fail. Resiliency is provided at the application level. At the very least, this requires the doubling up of all applications, and for many deployments this necessitates additional licensing and additional costs for the VMs themselves.

Storage

Another crucial area to factor into this analysis is persistent storage. In the past, storage was protected using RAID techniques. Yet as we move to the public cloud, object storage has appeared as a popular way of storing data. This method uses the availability zone topology to protect data — but only if you choose it and pay for it. To protect against individual disk failure, three copies of the data are spread across the storage subsystems. For virtual machines requiring persistent storage, elastic block storage (EBS) is often used, and is replicated within the availability zone to protect against failure of the underlying storage platform. EBS storage is not always replicated to other regions. Regardless, having data replicated to another region does not mean that the VMs are available there. It only guarantees back-up storage. VMs would need to be created from the underlying replicated storage. It is also important to note that replicating storage to another availability zone or region only protects against storage subsystem failure. It does not protect against storage corruption, accidental deletion, or recent threats such as ransomware encrypting the files within the storage. To that extent, it is not a disaster recovery solution.
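The point that replication is not a backup can be seen in a toy model: a replica faithfully copies every write, including malicious overwrites and deletions. This is illustrative Python, not any provider’s API:

```python
# Toy model: synchronous replication copies *every* write, good or bad.

class ReplicatedStore:
    def __init__(self, replicas=3):
        # e.g. object storage keeping three copies across subsystems
        self.copies = [dict() for _ in range(replicas)]

    def put(self, key, value):
        for copy in self.copies:      # every write fans out to all replicas
            copy[key] = value

    def delete(self, key):
        for copy in self.copies:      # ...and so does every deletion
            copy.pop(key, None)

    def get(self, key, failed=()):
        # Read from any surviving replica that still holds the key.
        for i, copy in enumerate(self.copies):
            if i not in failed and key in copy:
                return copy[key]
        return None

store = ReplicatedStore()
store.put("invoice.doc", "plaintext")

# Replication survives a subsystem failure...
assert store.get("invoice.doc", failed={0}) == "plaintext"

# ...but a ransomware-style overwrite or a deletion replicates too.
store.put("invoice.doc", "ENCRYPTED")
assert store.get("invoice.doc") == "ENCRYPTED"
store.delete("invoice.doc")
assert store.get("invoice.doc") is None
```

Only a point-in-time copy held outside the replication path — a backup or DRaaS snapshot — lets you rewind past the bad write.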


“As we move to the public cloud, object storage has appeared as a popular way of storing data.”

A reliable disaster recovery strategy is now absolutely essential for an organisation.

So, we return to our original question: Can availability zones offer the resiliency needed for a good disaster recovery strategy? In the event of a host crash, HA can restart VMs on other hosts, and where DRS is available, running VMs can be moved between hosts in a cluster with no loss of service. However, when hosts are being updated, it is very difficult to move the VMs that are running on them without loss of service. As we have just discussed, redundant storage does not guarantee VM availability in other regions. Most importantly, these capabilities do not protect against data corruption or threats such as ransomware that encrypt data. Given this, a disaster recovery solution should be implemented in addition to the use of availability zones. Cloud-to-cloud Disaster Recovery as a Service (DRaaS) can be adopted between data centres. This means that you can recover data if it is lost or corrupted; for example, you can recover data from a ransomware attack. Self-service testing can also be carried out whenever required, while replication carries on in the background. As customers think about migrating their traditional virtualised services to the public cloud, they need to consider crashes, maintenance, storage, and also a disaster recovery strategy.


Tomorrow’s World David Trossell, CEO and CTO of Bridgeworks, discusses how we can achieve the data centre of the future and debunks the jargon.


With growing data volumes, data centres need to evolve. Data centres are known in some circles as the ‘digital core’, and as such they are having to adapt to an increasing array of different types of technology. Large technology vendors also tend to say they care about their customers, and perhaps they do, but they’re also very keen to sell their wares – from servers to network infrastructure. Add the discussions about being at ‘the edge’, and you will find yourself in quite a jungle that’s arguably becoming much harder to navigate. In fact, talk of edge data centres and the digital core is likely to confound most customers. The IT industry is not the only culprit; most industries like to create terms that


seem meaningless to everyone but those working with them. Yet, ironically, such terms are often created for marketing reasons to describe something that has existed for many years in another guise. So where does this leave you? It requires you either to hire the right expertise to help you cut through this jungle, or to learn how to work out what you really need. Data volumes are going to increase, and research suggests that digital transformation and the Internet of Things (IoT) are driving this trend. The list of potential data sources is constantly growing, and so you’ll need a data centre that can cope with this growth. The data centre of the future also needs to be able to manage the growing varieties of data.

Digital core banking Chris Skinner, co-founder of the Financial Services Club, writes in The Finanser that, “Banks without a digital core will fail.” But what does he mean? He says that data is the key to disruption, and most people will agree with his statement. However, he was asked how he defined the term. One reply was that there wasn’t one, but it seems that it refers to an idea whereby there is a central point of systems – such as a mainframe. He writes that the markets don’t operate in this way anymore, and so he thinks that systems should be spread across server farms in the cloud to avoid there being a single point of failure. However, he finds that many people are misinterpreting what the ‘digital core’ actually means.


That’s no surprise to me because new terms often mean different things to different people. He therefore describes the digital core as being, “the removal of all bank data into a single structured system in the cloud [where] the data is cleansed, integrated and provides a single, consistent view of the customer as a result.”

Empowering companies

In 2015, technology giant SAP described the digital core as follows: “A digital core empowers companies with real-time visibility into all mission critical business processes and processes around customers, suppliers, workforce, Big Data and the Internet of Things. This integrated system enables business leaders to predict, simulate, plan and even anticipate future business outcomes in the digital economy.” Amongst other examples of where a digital core can be used, the company added, “The same digital core can be used to optimise manufacturing, moving from batch orders to real-time manufacturing resource planning to always meet demand. Further, using information collected by assets and the Internet of Things, the assembly line and ERP can be synchronised for increased cost efficiency and asset utilisation.” SAP believes that, “Companies that lose the complexity that is weighing them down will be able to face the market disruption happening everywhere.” Being successful requires your firm to have real-time visibility and integration across all business processes. Visibility is about allowing your business to understand how information flows in and out of your company, or perhaps even in and out of your data centre.

“The list of potential data sources is constantly growing, and so you’ll need a data centre that can cope with this growth.”

Edge computing

In contrast, edge computing is defined by TechTarget in the following way: “Edge computing is a distributed information technology (IT) architecture in which client data is processed at the periphery of the network, as close to the originating source as possible. The move toward edge computing is driven by mobile computing, the decreasing cost of computer components and the sheer number of networked devices in the internet of things (IoT). Depending on the implementation, time-sensitive data in an edge computing architecture may be processed at the point of origin by an intelligent device or sent to an intermediary server located in close geographical proximity to the client. Data that is less time sensitive is sent to the cloud for historical analysis, big data analytics and long-term storage.” One of the arguments for edge data centres, and edge computing generally, is that they can help to reduce the impact of network latency. However, this is problematic: what if the data is too close to the source and a problem arises? Ideally, data should be stored and backed up in at least three different data centres or disaster recovery sites. Storing data too close to a circle of disruption can invite disaster, and mitigating the effects of latency does not require placing all of your eggs in one basket. With data acceleration solutions such as PORTrockIT, it becomes possible to mitigate the effects of data and network latency, speed up data flows and reduce packet loss over distance.

Question vendors

So, edge computing may not provide you with all of the answers when it comes to creating the data centre of the future. Vendors will also be happy to sell you WAN optimisation as the answer, but it cannot handle encrypted data in the way that a data acceleration tool can without compromising network efficiency. How? Data acceleration solutions use machine learning to increase throughput. The alternative of increasing your bandwidth won’t necessarily improve your network’s speed, but it could cost you the earth without addressing the limitations created by physics and the speed of light. It’s therefore important to consider what is really motivating a vendor: your interests, or theirs? A vendor with your interests at heart will be able to explain in plain English, and demonstrate the veracity of their claims, what really will work effectively for you. While the jargon might be sexy, it’s no good putting customers like yourself on edge. Vendors should explain and demonstrate the benefits of a given technology in a language their customers understand. That is crucial. If we can’t communicate properly with each other, then it’s not going to be possible to create the data centre of the future – whether a digital core, edge computing or another approach is used as part of it. So the key to creating the future data centre is for vendors to offer you, as a customer, what you really need today for tomorrow. This doesn’t require you to replace all of your legacy infrastructure with the latest technology. Much can be achieved with what you already have, and with data acceleration.
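The physics argument is easy to quantify: light in fibre travels at roughly two-thirds of its vacuum speed, so round-trip time grows with distance no matter how much bandwidth you buy. A back-of-the-envelope sketch, with approximate figures:

```python
# Back-of-the-envelope minimum round-trip time over fibre (approximate).

C = 299_792_458          # speed of light in a vacuum, m/s
FIBRE_FACTOR = 0.67      # light in glass travels at roughly 2/3 c

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time in ms, ignoring switching and queuing."""
    one_way_s = (distance_km * 1000) / (C * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

# London to New York is roughly 5,570 km as the crow flies;
# the physical floor is around 55 ms RTT before any equipment delay,
# and real routes are longer than great-circle distance.
print(round(min_rtt_ms(5570), 1))
```

Extra bandwidth widens the pipe but does not shorten this floor, which is why latency mitigation is a separate problem from capacity.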

Projects & Agreements

Overstock selects Snowflake’s cloud data warehouse to empower data scientists

Snowflake Computing has announced that Overstock has chosen Snowflake to scale and expedite its data science initiatives. With Snowflake, Overstock can fast-track highly impactful data science projects that allow it to deliver on its brand promise of using technology to help customers find what they want. Overstock joins fellow online retailers Rue La La and Rent the Runway in leveraging Snowflake’s modern cloud data warehouse solution. “A common meme in the data science world is that data scientists spend 80% of their time prepping data and 20% of their time building models, and we wanted to flip

that ratio,” said Joe Kambeitz, Overstock’s vice president of product and analytics. “A key part of that initiative was investing in the right tools for our data scientists, and Snowflake is that tool when it comes to allowing data scientists to rapidly scale and deploy their workloads.” Overstock uses Snowflake’s cloud-built data warehouse to easily load and integrate structured and semi-structured data in one place for quick and powerful analysis. Snowflake’s clone and time-travel capabilities allow Overstock’s

data scientists to independently and simultaneously work on their own golden data set. The platform’s elastic compute and linear scaling capabilities enable parallelism and performance, allowing Overstock’s data scientists to move faster than ever. Packaging complex features for new data science models now occurs within hours or days, compared to the weeks it could take before partnering with Snowflake. Snowflake,

Root Data Centre becomes first wholesale provider to implement AI and machine learning

Root Data Centre has announced that it is the first wholesale data centre in the world to use Artificial Intelligence (AI) and machine learning to reduce the risk of data centre downtime. Root Data Centre has partnered with AI and machine learning technology firm Litbit at Root’s Montréal-based facility. This unprecedented application of leading-edge technology enhances Root’s Data Centre Infrastructure Management (DCIM) strategy, utilising deep datasets to monitor and manage the overall health and status of critical systems within its data centre. In addition to the crucial and constant monitoring by technicians, AI and machine learning capabilities help Root Data Centre to better anticipate and manage potential failures before they occur. This helps Root staff to avoid, rather than recover from, operational outages. “Reliability and uptime are key considerations for any data centre user, ranging from cloud service providers, to hosting companies, video game developers and other large-scale IT organisations,” explained AJ Byers, president and CEO, Root Data Centre.


“We’ve made 100% uptime a top priority, and working with Litbit, we’ve pioneered the next wave of machine learning within data centre operations to get us closer. Today, we’re proud to say that Root is championing the use of cutting-edge AI and machine learning technology to reduce the risk of downtime for customers of all sizes.” Root Data Centre,


Aegis Data and vScaler expand partnership to deliver high performance cloud computing

Private cloud specialist vScaler is significantly expanding the scope of its partnership with colocation data centre provider Aegis Data, to offer its customers superior connectivity and processing power and enable them to access the full potential of the modern cloud solution. Aegis Data’s facility, Aegis One, will provide the cooling, power and necessary infrastructure to support vScaler’s advanced cloud technology. As part of this expansion, Aegis Data will be installing a secure private cage environment for vScaler within the Aegis One facility. In doing so, it reinforces Aegis Data’s position as a flexible data centre provider that has the capacity and scalability to respond to any IT requirement of its customers. Glenn Rosenberg, director of cloud and managed services at vScaler, commented, “Developing our partnership with Aegis Data is a hugely important step for our business. Our primary focus is on offering our clients the support that they need to run their critical business operations in the cloud, and giving them the flexibility and scalability, as well as the performance, to develop their IT infrastructure and benefit from technology such as deep learning analytics and artificial intelligence.” Aegis, vScaler,

Hetzner Online deploys ADVA FSP 3000 CloudConnect in response to huge growth in data demand

ADVA has announced that Hetzner Online has deployed its FSP 3000 CloudConnect with QuadFlex technology to meet soaring data demand from private customers and enterprise clients. Hetzner Online is using the data centre interconnect (DCI) solution to boost the capacity of its national backbone network to 400Gbit/s and beyond. The upgrade to Hetzner Online’s legacy infrastructure involved the deployment of pairs of 200Gbit/s connections operating at 16QAM. These dual wavelengths create 400Gbit/s links between hosting and colocation facilities in Frankfurt and Nuremberg, as well as the company’s data centre park in Falkenstein and other key sites. Installation was carried out by ADVA’s Elite partner Axians. “We’re seeing fierce data growth from private and business customers across the country. Deploying 400Gbit/s technology in our transport network is a major part of our response,” said Martin Fritzsche, head of network engineering, Hetzner Online. Using the ADVA FSP 3000 CloudConnect and its QuadFlex technology, the new infrastructure is now transporting 400Gbit/s data loads over distances stretching to 250km without the need for signal regeneration. To achieve this channel capacity, the network is configured with two 200Gbit/s wavelengths operating at 16QAM modulation within an optical super-channel. Multiple 400Gbit/s cards are also being run in parallel for even greater speeds.
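The arithmetic behind the dual-wavelength configuration is straightforward: 16QAM carries 4 bits per symbol, and dual-polarisation transmission doubles that per wavelength. A rough sketch follows; the ~32 Gbaud symbol rate and the framing of FEC overhead are illustrative assumptions, not figures from ADVA:

```python
# Rough capacity arithmetic for a 2 x 200G 16QAM super-channel (illustrative).

BITS_PER_SYMBOL_16QAM = 4      # 16QAM encodes log2(16) = 4 bits per symbol
POLARISATIONS = 2              # dual-polarisation doubles throughput
WAVELENGTHS = 2                # two 200G carriers in one super-channel

def raw_rate_gbps(baud_rate_gbaud):
    """Raw line rate per wavelength, before FEC and framing overhead."""
    return baud_rate_gbaud * BITS_PER_SYMBOL_16QAM * POLARISATIONS

# At an assumed ~32 Gbaud, each wavelength carries 256 Gbit/s raw,
# leaving headroom for FEC overhead around a 200 Gbit/s payload.
per_wavelength = raw_rate_gbps(32)
assert per_wavelength >= 200
assert WAVELENGTHS * 200 == 400   # net super-channel payload, Gbit/s
```

The trade-off behind this design is that higher-order modulation packs more bits per symbol but tolerates less noise, which is why reach tops out around the quoted 250km without regeneration.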

Agile acquires Gannett operations centre, data centre and PDNS

Agile has announced that it has completed the acquisition of Gannett’s primary data centre located in Silver Spring, Maryland; Gannett’s private digital network connecting this data centre and company headquarters in McLean, Virginia, to carrier-neutral facilities; and Private Digital Network Services (PDNS), LLC, the partner firm that assisted in building and maintaining Gannett’s private digital network. Agile has begun to integrate its legacy, multi-site data centre colocation business with these new private digital network capabilities. In addition, Agile now operates Gannett’s private digital network and provides data centre and network services to Gannett under a long-term master services agreement. “We look forward to continuing our working relationship based on a long track record of strong performance from what is now the Agile team,” commented Jack Mundie, VP enterprise computing and risk management, Gannett Technology. “We are honoured and proud to now serve as an important component of the backbone for Gannett’s online content distribution and look forward to expanding our offerings for current and future Agile clients,” commented Jeffrey Plank, president and CEO, Agile. Agile,




Megaport achieves AWS Networking Competency status

Megaport has announced that it has achieved Amazon Web Services (AWS) Networking Competency status. This designation recognises that Megaport provides industry-leading Software Defined Networking (SDN) technology across over 179 locations to help customers adopt, develop, and deploy networks on AWS. Achieving the AWS Networking Competency differentiates Megaport as an AWS Partner Network (APN) member that has demonstrated specialised technical proficiency and proven customer success, with a specific focus on networking based on scalable, secure, direct connectivity. To receive the designation, APN Partners must possess deep AWS expertise and deliver solutions seamlessly on AWS. Vincent English, chief executive officer of Megaport, said, “Our SDN helps enterprises rapidly connect to AWS Direct Connect. The vast reach of Megaport’s global network footprint increases accessibility to AWS services by enabling AWS Direct Connect in new regions and empowers enterprises with a consumption-based networking model. We’re honoured to have achieved AWS Networking Competency status as we continue to develop features to make it even easier to get our customers connected.”

With Megaport, enterprises can connect to multiple AWS regions all over the globe from a single interconnection point, meaning that they can leverage multi-region deployments at a fraction of the usual cost. Megaport,

EdgeConneX joins Dutch Data Centre Association

EdgeConneX, specialising in global data centre solutions at the edge of the network, has become a member of the Dutch Data Centre Association (DDA). The DDA is a trade organisation of leading data centres in the Netherlands whose mission is to strengthen the economic growth and enhance the profile of the data centre sector to government, media and society. “With the rise of the Internet of Things and 5G, EdgeConneX edge data centres will become more important than ever before,” commented Stijn Grove, director, Dutch Data Centre Association. “EdgeConneX is an international key player in this area, and I firmly believe it can strongly contribute to the Dutch data centre sector and our position as the Digital Gateway to Europe. Together, we can make the Netherlands an even more powerful data centre hub.” The Netherlands is home to one of the most advanced markets for data centre operations in Europe, and has


the second-highest penetration of household broadband connections in Europe. Approximately one-third of European data centres are located in the Amsterdam

metro area. In addition to future expansion in Amsterdam and Dublin, EdgeConneX has a further 10 regions under consideration across Europe. EdgeConneX,


Lenovo hardware chosen by ITPS to power the world’s first Microsoft Azure Stack deployment for true hybrid cloud

Lenovo has announced the deployment of the world’s first available Microsoft Azure Stack appliance by UK IT services firm ITPS, housed on Lenovo ThinkAgile hardware and deployed in ITPS’ flagship, state-of-the-art data centre. Azure Stack is the first hybrid cloud solution to act as an extension of the public cloud, giving end-users a single management interface across both on-premises and Azure public environments. The installation marks the global arrival of true hybrid cloud, being the only service of its kind currently on the market. The deployment will allow ITPS to offer cloud functionality within its data centre, while meeting the security and regulatory requirements that only on-premises deployments can provide. ITPS has deployed the solution as part of the Microsoft Early Adoption Initiative (EAI), choosing Lenovo’s ThinkAgile SX over two other Microsoft Azure hardware partners. ITPS will focus on meeting the demands of customers that wish to use an on-premises environment for critical applications, while using a public Azure option for small I/O workloads and other tasks. Paul Anderson, operations director at ITPS, commented, “A large proportion of IT managers are still reluctant to move to public cloud due to confusion over the location of their data. With GDPR set to come in next year, this concern is growing. Azure Stack overcomes the problem by providing a consistent experience between public and private for a true hybrid experience.” Lenovo,

Pulsant delivers private cloud solution to NEBOSH

Pulsant has announced a partnership with NEBOSH (the National Examination Board in Occupational Safety and Health). Due to the increasingly global nature of its business and the desire to encourage a modern, remote-working culture, NEBOSH needed to ensure its systems were available 24/7, and this partnership will allow it to do so. Formed in 1979, NEBOSH offers a comprehensive range of globally-recognised qualifications designed to meet the health, safety and environmental management needs of all places of work. “We have experienced sustained growth to become a business that now employs over 100 people,” explains Derek Eaton, head of IT at NEBOSH. “Today, we have people logging on to our systems from all over the world and at all times of the day. We therefore required a resilient infrastructure foundation to be able to handle these enquiries 24/7. We have a small IT team, and our time needs to be spent concentrating on the application side instead of the infrastructure side. This is why we needed the help of a business partner.” NEBOSH has moved its core infrastructure into a high-availability private cloud in one of Pulsant’s purpose-built data centres, which includes disaster recovery as a service (DRaaS). Moving to a private cloud-based solution not only ensures that NEBOSH’s remote workers are able to access all their files quickly, but also provides a highly flexible and scalable solution that enables it to add or remove infrastructure as the needs of the business change.

atom86 data centre and Asperitas pass the six-month mark of Immersed Computing collaboration

Last June, Asperitas announced the opening of a showcase location at the atom86 data centre in Schiphol-Rijk, close to Amsterdam Schiphol Airport. A single full-blown AIC24 system has been integrated with the facility’s existing power and cooling infrastructure, and has been used by the parties for piloting and showcasing for six months now. In that time, many interested parties have experienced the Immersed Computing module in this data centre. Feedback from the atom86 data centre operators has been very positive, and the data centre is fully open to offering the total liquid cooling based technology to customers. The showcase unit is also being used as a proof of concept (PoC) system for tailor-made hardware configurations for Asperitas customers. Currently, you can see an immersed cassette with Open Compute Project (OCP) servers in the AIC24 module at atom86. This OCP pilot unit, provided by hardware manufacturer Wiwynn through a partnership with Dutch OCP distributor Circle B, was first immersed during the international high performance computing conference ISC 2017 in Frankfurt. Asperitas, atom86,




Virtual Power Systems and Schneider Electric collaborate to improve data centre power system efficiency and operating ROI

Virtual Power Systems has announced that its Software Defined Power solution, ICE, has been implemented in the latest firmware update of Schneider Electric’s Galaxy VM UPS. Schneider Electric and Virtual Power Systems have collaborated to bring power efficiency improvements and to increase the density of IT systems in the data centre. Together, both organisations are improving the return on investment for data centre operators worldwide. Equipped with Virtual Power Systems’ ICE solution, the Schneider Galaxy VM UPS delivers a peak-load (peak shaving) capability to any data centre. Traditionally, data centre operators had to design a centre’s power infrastructure to accommodate peak load conditions, even though these peaks might only occur periodically. This resulted in very inefficient utilisation of the available energy. With peak shaving, power stored in the mostly unused batteries of the Galaxy VM UPS provides the power to meet peak load conditions. Using this approach, data centre operators can design their power infrastructure to meet normal operating loads, helping to achieve higher equipment density and capacity in addition to increasing revenues and ROI. Virtual Power Systems,
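The peak-shaving idea can be illustrated with a short sketch: when demand exceeds the provisioned mains feed, the shortfall is drawn from the UPS batteries. This is a simplified model with illustrative numbers, not a description of the Galaxy VM’s actual control logic:

```python
# Simplified peak-shaving model: mains covers the design load,
# batteries cover transient peaks above it (illustrative only).

def shave(load_kw, mains_capacity_kw):
    """Split an instantaneous load between the mains feed and the battery."""
    from_mains = min(load_kw, mains_capacity_kw)
    from_battery = max(0.0, load_kw - mains_capacity_kw)
    return from_mains, from_battery

# Provision the infrastructure for the normal 800 kW load instead of
# the occasional 1 MW peak; the battery absorbs the difference.
for load in (750, 800, 1000):
    mains, battery = shave(load, mains_capacity_kw=800)
    print(f"{load} kW load -> {mains} kW mains + {battery} kW battery")
```

The saving comes from sizing switchgear, feeds and UPS capacity for the normal load rather than the worst-case peak; the batteries must simply hold enough energy to ride out the peak’s duration.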

Computacenter supports Liberty Global in enriching customer experience with consolidated private cloud

Computacenter has announced that it has migrated Liberty Global’s operations onto a new private cloud environment using VMware technology. As one of the world’s largest international cable companies offering quad-play services, Liberty Global must constantly introduce new features to keep up with customer demand. With services delivered across 12 European countries, and a portfolio of brands including Virgin Media, Telenet and UPC, the company needed to unite its operations to maximise agility. A traditional data centre infrastructure no longer offered the flexibility Liberty Global needed. Computacenter’s tailor-made private cloud environment was chosen by Liberty Global to help improve customer experience, offer a greater competitive advantage, and reduce overall business costs, among other benefits. Liberty Global’s new cloud environment comprises around 1,000 virtual servers and supports business-critical systems, such as the company’s backend television platform and user identity solution for 25 million European customers. “The benefit of having a private cloud is we get to increase the capacity much more easily than ever before. Working with a flexible capacity model, we’re able to reduce the footprint inside the data centre through having composable architectures,” explained Colin Miles, European VP of data centre technology, Liberty Global. “We have been able to save a lot of cost, and have started to break down some of the historical barriers we had to delivering and moving products much quicker to market.” Computacenter,

Nlyte Software improves reseller network in EMEA with Improved IT

Nlyte Software has announced a partnership with Improved IT, the IT infrastructure management reseller. This means organisations throughout EMEA will have a direct line to what Nlyte describes as industry-leading data centre infrastructure management (DCIM). Headquartered in Rotterdam, Improved IT – part of the HTL group – provides trusted IT infrastructure management solutions, including security, network and data centre infrastructure, for the public, industrial, health, finance, transport and education sectors. The new global partnership will see Improved IT working with Nlyte Software to distribute Nlyte Software’s suite of DCIM tools. This opens up opportunities for the market to incorporate next-generation solutions into existing infrastructure that will improve operational efficiency, provide more accurate and timely information and identify potential cyber security vulnerabilities. With Improved IT and Nlyte Software, customers across the region will benefit from proven industry solutions that will give them the reliability and robustness they require to manage their assets at maximum operational efficiency. As a new partner in EMEA, Improved IT will provide not only Nlyte Software solutions to its wide customer base, but also strategy, advice and ongoing support following consultation and integration. Nlyte,

Projects & Agreements

Asteroid and Interxion collaborate to enable efficient, cost-effective interconnection to networks in Amsterdam Asteroid and Interxion have collaborated to enable efficient, cost-effective interconnection to networks in Amsterdam, the Netherlands. Asteroid operates the Asteroid IXP in the Science Park in Amsterdam, which is easily accessible by all Interxion customers in Amsterdam. The Asteroid IXP offers a modern, cost-effective way to peer with some of the largest networks in the world. The collaboration will not only enable effective and smooth connection of all Interxion Amsterdam customers to the Asteroid IXP, but also launches with a special offer. The Asteroid-sponsored offer allows anyone present at Interxion Science Park to obtain a free cross connect from Interxion Science Park (AMS9) to Asteroid IXP for a period of 12 months.

“We are very excited to work with Interxion to provide better interconnection to our customers in Amsterdam,” says Nurani Nimpuno, chief commercial officer, Asteroid. “Asteroid works with trusted parties in local markets to deliver the best possible local interconnection solution. Our collaboration with Interxion is a good example of this, which benefits all networks in Amsterdam.” Asteroid, Interxion,

Moneycorp chooses Datapipe to expand infrastructure globally Datapipe has been chosen by Moneycorp, the independent foreign currency exchange broker, to expand the company’s IT infrastructure to its newly acquired international offices. Moneycorp offers services for personal international transfers, corporate international transfers, and travel money. The currency brokerage completes more than 9.2 million customer transactions and trades £13 billion in foreign currency every year. Having originally partnered with Datapipe in 2011, the company found its infrastructure struggling to keep up with the business’s rapid growth. Datapipe provided Moneycorp with elastic expansion capabilities and transitioned the company from a capital expenditure (CapEx) to an operational expenditure (OpEx) model. By providing a transparent view of infrastructure performance and reducing overall IT risk and recovery time objective (RTO), Datapipe’s changes freed Moneycorp’s IT team from reactive daily infrastructure maintenance. Recently, Moneycorp’s IT infrastructure again needed to accommodate expansion as the company embarked on a number of foreign acquisitions. Moneycorp opened a number of new offices around the world, and Datapipe connected these remote offices directly to the Datapipe managed infrastructure. Moneycorp also commenced a strategic telephony project to implement a new unified communications system. This project leveraged Moneycorp’s infrastructure and supported its expansion. Kenneth Byrne, head of IT services at Moneycorp, said, “Having an international partner such as Datapipe has been a considerable benefit to us. Datapipe has allowed us to expand our infrastructure internationally and supported our growth beyond the UK.” Moneycorp, Datapipe,

December 2017 | 39


Verne Global: hpcDIRECT, an industrial-scale HPC-as-a-service platform Verne Global has announced the launch of its new hpcDIRECT service. The company describes hpcDIRECT as a powerful, agile and efficient HPC-as-a-service (HPCaaS) platform, purpose-built to address the intense compute requirements of today’s data-driven industries. hpcDIRECT provides a fully scalable, bare metal service with the ability to rapidly provision the full performance of HPC servers, uncontended and in a secure manner. “hpcDIRECT has been designed and built from the outset by HPC specialists for HPC applications. Whether deployed as stand-alone compute or used as a bare metal extension to existing in-house HPC infrastructure, hpcDIRECT provides an industry-leading solution that combines best-in-class design with our HPC optimised, low cost environment and location,” said Dominic Ward, managing director at Verne Global. hpcDIRECT is accessible via a range of options, from incremental additions to augment existing high performance computing, to supporting massive processing requirements with petaflops of compute. This flexibility makes it a solution for applications such as computer-aided engineering, genomic sequencing, molecular modelling, grid computing, artificial intelligence and machine learning. hpcDIRECT is available with no upfront charges and can be provisioned rapidly to the size and configuration needed. hpcDIRECT clusters are built using the latest architectures available, including Intel’s Xeon (Skylake) processors and fast inter-node connectivity using Mellanox InfiniBand and Ethernet networks, with storage and memory options to suit each customer’s needs. Verne Global began operations in early 2012, bringing new, innovative thinking and infrastructure to the data centre industry. hpcDIRECT is the latest product in this cycle, and is optimised for companies operating high performance and intensive computing across the world’s most advanced industries. Verne Global,

AEG Power Solutions announces Protect Flex, a new modular industrial-grade UPS AEG Power Solutions has announced Protect Flex, its industrial-grade, fully modular UPS. The system has a robust design suitable for demanding environments and is the only one in its category to be configurable to all electrical system schemes with the benefits of power modularity. Protect Flex features a wide range of configurable options and a high level of scalability and flexibility. It is designed to meet increasing demand from industry for secure power supply from equipment that has a compact footprint yet still offers a high level of reliability and withstands unprotected environments. AEG Power Solutions answers with a new concept of UPS system that combines a modular architecture, based on 10 and 15kVA/kW hot-swappable power modules, transformer-less and IGBT-based, with a customisable set of options, and provides N+1 inbuilt power redundancy to maximise reliability. Protect Flex is a simplified, flexible and cost-effective UPS which can cope effectively with harsh environmental conditions, with an IP43 protection rating. The system is designed to maximise savings in terms of footprint and installed power (kVA). Its electrical and mechanical design, cabling and protection devices are engineered to maximise security and simplify maintenance operations. A high-quality UPS benefiting from AEG PS’s long engineering experience with rugged systems, Protect Flex achieves high reliability with optimum availability. Uptime is maximised by high-quality equipment and design, with the Mean Time To Repair (MTTR) minimised thanks to the modular and hot-swappable architecture. The scalable architecture of the UPS reduces CAPEX and optimises OPEX. The power modules are based on the latest IGBT double-conversion technology, with low input THDi and an input power factor close to one, even when a low percentage of load is applied. AEG Power Solutions,
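The link the article draws between hot-swappable modules, low MTTR and availability can be illustrated with the standard steady-state availability formula A = MTBF / (MTBF + MTTR). The sketch below uses invented figures purely for illustration, not AEG's published data:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the system is operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures only, not vendor specifications.
# Compare a repair requiring an engineer call-out (say 24 h) with a
# hot-swappable module replacement (say 0.5 h), same assumed MTBF:
mtbf = 200_000  # hours

print(f"24 h repair:  {availability(mtbf, 24):.6f}")
print(f"0.5 h swap:   {availability(mtbf, 0.5):.6f}")
```

Even with an identical failure rate, shrinking MTTR by an order of magnitude moves availability correspondingly closer to one, which is the argument for modularity made above.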


MPower UPS and FEC Heliports Equipment work together to promote helipad safety MPower UPS, part of the Centiel Group, has announced it is working with FEC Heliports Equipment marketing a UPS system to help protect pilots. FEC Heliports Equipment is a complete one-stop-shop, supplying equipment for every type of helipad installation and now as a reseller, will offer the Flexipower single phase UPS 5KVA, designed and built to protect electronic equipment from power fluctuations. Fraser MacKay, commercial director, FEC Heliports Equipment commented, “Critical to the design of any helipad is the need for an aircraft to land and take off safely. The Civil Aviation Authority recommends that heliport lighting is fed from a UPS so that, in the event of a power failure, the lighting system continues to function without interruption. “An autonomous power supply also ensures that other essential systems,

such as VHF pilot activated lighting, remain online even if the mains power supply is compromised. By partnering with MPower and Centiel to become a reseller for the Flexipower single phase UPS, we are now able to offer our clients a best-in-class solution for power protection requirements.” Michael Brooks, managing director, MPower added, “FEC Heliports Equipment supplies a fully flexible helipad installation service, from

offshore oil and gas to onshore hospitals and heliports, plus portable solutions that can be in used in any location, working with both independent operators and some of the largest multinational companies in the world. “We are delighted to be working with FEC Heliports Equipment to supply UPS as an option to ensure a reliable, clean and stable power supply for helipads around the globe.” MPower UPS,

STULZ CyberCool Indoor generates a cooling capacity of up to 100kW with a very small footprint Stulz has added chillers with stepless capacity control (EC compressor) and an output from 20 to 100kW to its portfolio. Chillers from the CyberCool Indoor series have been designed for indoor installation. The new version was specially engineered to adapt precisely to low or fluctuating heat loads. With a wide temperature range of 4 to 18°C at the chilled water outlet, the CyberCool Indoor is suitable for a broad range of applications. In the data centre, medical technology or the process industry, it allows planners maximum freedom for individual project specifications and customer requirements. Chillers from Stulz’s CyberCool Indoor series are available with a choice of three systems: air-cooled with external condenser, water-cooled, or the TCO leader with indirect free cooling and intelligent mode switching (compressor cooling, free cooling or mixed mode). All three systems can be equipped as standard with on/off compressors or with the new energy-saving EC compressors. These EC versions contain two complete, redundant refrigerant circuits and ensure high efficiency at partial load. The system reacts quickly to dynamic load fluctuations, avoiding on/off cycling to guarantee an extended equipment life. Stulz,
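Stulz does not publish its switching logic, but the general principle behind indirect free cooling with mode switching can be sketched: when ambient air is sufficiently below the chilled-water setpoint, the load is met without compressors; in between, free cooling assists while compressors trim. A hypothetical illustration, with an invented approach-temperature threshold rather than Stulz parameters:

```python
def cooling_mode(ambient_c: float, setpoint_c: float, approach_c: float = 3.0) -> str:
    """Pick an operating mode for a chiller with indirect free cooling.

    approach_c models the temperature difference the free-cooling coil
    needs between ambient air and the chilled-water setpoint.
    All thresholds here are illustrative assumptions.
    """
    if ambient_c <= setpoint_c - approach_c:
        return "free cooling"        # ambient cold enough to carry the full load
    if ambient_c < setpoint_c:
        return "mixed mode"          # free cooling assists, compressors trim the rest
    return "compressor cooling"      # ambient too warm for any free cooling

for t in (-5, 12, 25):
    print(f"{t}°C ambient, 14°C setpoint -> {cooling_mode(t, setpoint_c=14)}")
```

The wide 4 to 18°C setpoint range matters here: the higher the chilled-water setpoint a facility can tolerate, the more hours of the year fall into the free-cooling band.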


final thought

What Lies Ahead Giordano Albertazzi, president of Vertiv in Europe, anticipates the advent of the Gen 4 data centre in a look ahead to 2018 trends.


The next-generation data centre will exist beyond walls, seamlessly integrating core facilities with a more intelligent, mission-critical edge of network. These Gen 4 data centres are emerging and will become the model for IT networks of the 2020s. The advent of this edge-dependent data centre is one of five 2018 data centre trends identified by a global panel of experts from Vertiv, formerly Emerson Network Power. “Rising data volumes, fuelled largely by connected devices, have caused businesses to re-evaluate their IT infrastructures to meet increasing consumer demands,”

said Giordano Albertazzi, president of Vertiv in Europe, Middle East and Africa. “Although there are a number of directions companies can take to support this rise, many IT leaders are opting to move their facilities closer to the end-user – or to the edge. Whatever approach businesses take, speed and consistency of service delivered throughout this phase will become the most attractive offering for consumers.” Previous Vertiv forecasts identified trends tied to the cloud, integrated systems, infrastructure security and more. Below are five trends expected to impact the data centre ecosystem in 2018:

Emergence of the Gen 4 data centre Whether traditional IT closets or 1,500ft² micro-data centres, organisations are increasingly relying on the edge. The Gen 4 data centre holistically and harmoniously integrates edge and core, elevating these new architectures beyond simple distributed networks. This is happening with innovative architectures delivering near real-time capacity in scalable, economical modules that leverage optimised thermal solutions, high-density power supplies, lithium-ion batteries, and advanced power distribution units. Advanced monitoring and management technologies pull it all together, allowing hundreds or even thousands of distributed IT nodes to operate in concert to reduce latency and up-front costs, increase utilisation rates, remove complexity, and allow organisations to add network-connected IT capacity when and where they need it.

Cloud providers go colo Cloud adoption is happening so fast that in many cases cloud providers can’t keep up with capacity demands. In reality, some would rather not try. They would prefer to focus on service delivery and other priorities over new data centre builds, and will turn to colocation providers to meet their capacity demands. With their focus on efficiency and scalability, colos can meet demand quickly while driving costs downward. The proliferation of colocation facilities also allows cloud providers to choose colo partners in locations that match end-user demand, where they can operate as edge facilities. Colos are responding by provisioning portions of their data centres for cloud services or providing entire build-to-suit facilities.

Reconfiguring the data centre’s middle class It’s no secret that the greatest areas of growth in the data centre market are in hyperscale facilities – typically cloud or colocation providers – and at the edge of the network. With the growth in colo and cloud resources, traditional data centre operators now have the opportunity to reimagine and reconfigure their facilities and resources that remain critical to local operations. Organisations with multiple data centres will continue to consolidate their internal IT resources, likely transitioning what they can to the cloud or colos while downsizing and leveraging rapid deployment configurations that can scale quickly. These new facilities will be smaller, but more efficient and secure, with high availability – consistent with the mission-critical nature of the data these organisations seek to protect. In parts of the world where cloud and colo adoption is slower, hybrid cloud architectures are the expected next step, marrying more secure owned IT resources with a private or public cloud in the interest of lowering costs and managing risk.

High-density (finally) arrives The data centre community has been predicting a spike in rack power densities for a decade, but those increases have been incremental at best. That’s changing. While densities under 10kW per rack remain the norm, deployments at 15kW are not uncommon in hyperscale facilities – and some are inching toward 25kW. Why now? The introduction and widespread adoption of hyperconverged computing systems is the chief driver. Colos, of course, put a premium on space in their facilities, and high rack densities can mean higher revenues. And the energy saving advances in server and chip technologies can only delay the inevitability of high-density for so long. There are reasons to believe, however, that a mainstream move toward higher densities may look more like a slow march than a sprint. Significantly higher densities can fundamentally change a data centre’s form factor – from the power infrastructure to the way organisations cool higher density environments. High-density is coming, but likely later in 2018 and beyond.
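The space-versus-heat trade-off driving this trend is easy to see with back-of-envelope arithmetic: at a fixed total IT load, higher per-rack density means fewer racks but far more heat concentrated in each one. A quick illustration using generic figures, not numbers from the article:

```python
import math

def racks_needed(total_it_load_kw: float, density_kw_per_rack: float) -> int:
    """Racks required to host a given IT load at a given per-rack density."""
    return math.ceil(total_it_load_kw / density_kw_per_rack)

# A hypothetical 1MW hall at the densities the article mentions:
for density in (10, 15, 25):
    n = racks_needed(1000, density)
    print(f"{density}kW/rack -> {n} racks, each rejecting {density}kW of heat")
```

Going from 10kW to 25kW per rack more than halves the rack count for the same load, but each rack now rejects two and a half times the heat, which is why the form-factor and cooling changes described above follow from density rather than precede it.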

The world reacts to the edge


As more and more businesses shift computing to the edge of their networks, critical evaluation of the facilities housing these edge resources and the security and ownership of the data contained there is needed. This includes the physical and mechanical design, construction and security of edge facilities as well as complicated questions related to data ownership. Governments and regulatory bodies around the world increasingly will be challenged to consider and act on these issues. Moving data around the world to the cloud or a core facility and back for analysis is too slow and cumbersome, so more and more data clusters and analytical capabilities sit on the edge – an edge that resides in different cities, states or countries than the home business. Who owns that data, and what are they allowed to do with it? Debate is ongoing, but 2018 will see those discussions advance toward action and answers.

Data Centre News is a new digital news-based title for data centre managers and IT professionals. In this rapidly evolving sector it’s vital that data centre professionals keep on top of the latest news, trends and solutions – from cooling to cloud computing, security to storage, DCN covers every aspect of the modern data centre. The next issue will include special features examining storage, servers and hardware, and GDPR in the data centre environment. The issue will also feature the latest news stories from around the world, plus high-profile case studies and comment from industry experts. REGISTER NOW to have your free edition delivered straight to your inbox each month, or read the latest edition online now at…

DCN December 2017  
