data centre news
Turn to page 12 for more information
Garner Mobile Destruction Unit with IRONCLAD Ensure no data is left behind by:
Maintaining chain-of-custody by rolling the data destruction cart to the media.
Erasing hard drives to NSA Standards with the TS-1E degausser.
Physically destroying hard drives to NSA Standards with the PD-5E hard disk destroyer.
Physically destroying solid-state memory with the SSD-1E.
Tracking, documenting and generating an IRONCLAD hard drive erasure log with images to withstand scrutiny of a security audit.
inside…
Special Feature: Software and Applications
Show Review: Highlights from Data Centre World 2018
Final Thought: Justin Day of 6point6 discusses the top five misconceptions when it comes to cloud migration
in this issue… March 2018
40 Data Centre Efficiency David Trossell of Bridgeworks examines the various metrics that should be considered when measuring data centre performance and efficiency.
07 Industry News A new study from PagerDuty reveals we’re working our IT professionals too hard.
14 Centre of Attention Dennis O’Sullivan of Eaton examines how to run smart cities sustainably.
16 Meet Me Room Stefan Hoelzl of Exasol discusses horses, childhood dreams, life lessons and, of course, gives us his industry insight.
18 Show Review Highlights from DCW 2018.
34 Data Centre Location Will Heigham of Bidwells reveals the top UK data centre hotspots and discusses why choosing the right location should remain a priority.
38 Modular Design Gordon Hutchison of Geist explains why modular design and pre-fabrication within the data centre don’t have to be separate entities.
42 UPS Data centre operators could be using their power infrastructure to generate revenue – without sacrificing disaster recovery processes, according to Richard Clifford of Keysource.
44 Projects and Agreements Pure Technology Group has been helping the RSPB to spread its wings.
50 Company Showcase Aermec’s new modular chiller offers the industry a new way to cool.
52 Final Thought Justin Day of 6point6 discusses the top five misconceptions when it comes to cloud migration.
SPECIAL FEATURE: software & applications
24 Dirk Marichal of Avi Networks explains why the modern digital business needs to become more application-centric in its approach to IT.
26 In today’s ‘right now’ economy, Martin James of DataStax discusses what to consider when building the next generation of applications.
28 Buying software and cloud services is complicated. Vincent Smyth of Flexera looks at how visibility into licensing can bring you more bang for your buck.
30 Zahl Limbuwala of Romonet discusses why data centres need smart analytics and accurate data to increase energy efficiency.
32 Vishal Rai of Acellere explains why perfecting software development is essential to our digital lives.
March 2018 | 3
Editor Claire Fletcher email@example.com
Sales Director Ian Kitchener – 01634 673163 firstname.lastname@example.org
Studio Manager Ben Bristow – 01634 673163 email@example.com
Designer Jon Appleton firstname.lastname@example.org
Business Support Administrator Carol Gylby – 01634 673163 email@example.com
Managing Director David Kitchener – 01634 673163 firstname.lastname@example.org
Accounts 01634 673163 email@example.com
Suite 14, 6-8 Revenge Road, Lordswood, Kent ME5 8UD T: +44 (0)1634 673163 F: +44 (0)1634 673173
The editor and publishers do not necessarily agree with the views expressed by contributors, nor do they accept responsibility for any errors in the transmission of the subject matter in this publication. In all matters the editor’s decision is final. Editorial contributions to DCN are welcomed, and the editor reserves the right to alter or abridge text prior to publication. © Copyright 2018. All rights reserved.
Good old British Summer Time.

The calendar on my computer tells me that British Summer Time began this month, and as I look out my window, laughing hysterically at the remnants of yet more snow in March, I cannot help but think we are being lied to. It seems this global warming business is doing quite the opposite, with the ‘Beast from the East’ bringing the UK to its proverbial knees just a few weeks ago. The icy conditions caused chaos, with travel disruptions, school and hospital closures, and even deaths occurring up and down the country. But if you have windows (or Facebook) you will be well aware of this.

Many of us were told to stay inside, and that’s exactly what we did. But if you were one of the heroes braving the storm to help keep things moving out there, we can’t thank you enough. Although the recent weather was nothing short of dire, I hasten to add that when I was a child, during adverse weather conditions, we were just told to put on our big coats. No ‘snow days’ for us in Newcastle, let me tell you.

Joking aside, the sad fact is that some of the blame falls on us. The population is growing at an exponential rate, and by 2050 it is predicted that 66% of the world’s population will be concentrated in urban areas. Given the environmental impact of cities in general, the way they are currently run just isn’t sustainable, and if something doesn’t change, things are only set to get worse. The future of our planet may well rely on advances in technology, and in this month’s Centre of Attention, Eaton discusses how we can implement new technologies and digital processes to create smart, sustainable cities.

In this issue, we’ve also gone back to basics, examining the fundamentals of how best to optimise your data centre. We have commentary from industry experts on data centre location, the best metrics for measuring efficiency, the benefits of pre-fabrication and modular design, as well as how you can generate revenue via your power infrastructure without affecting your disaster recovery processes.

Should you have any questions or opinions on the topics discussed, please write to: claire.fletcher@allthingsmedialtd.com.

Claire Fletcher, editor
21-22 March 2018, ExCeL London, at the VIP Lounge
The Global Leader in Technical Education for the Digital Infrastructure Industry Designing and delivering technical education programs for the data centre and network infrastructure sectors across the world
• 300 education programs delivered each year
• Education available in countries across the globe
• City locations each year
www.cnet-training.com UK Headquarters: +44 (0)1284 767100 firstname.lastname@example.org
1 - 2 May 2018
Emirates Old Trafford, Manchester
SAVE THE DATE
FREE TO ATTEND
Recognising the need – Reflecting the market This year we have had announcements of new builds in Birmingham, Hull, Manchester, Leeds and Liverpool… And this growth is set to continue. DataCentres North is the largest and most complete event outside London addressing the needs of all those involved in the ownership, design, build, management, operation and infrastructure needed to deliver effective datacentres, server and comms rooms.
Featuring the country’s leading suppliers this is your opportunity to see, discuss and source the latest in products, services, solutions that can benefit your business and assist you to achieve your goals.
The Conference - Content is Key
Sample of Speakers Mark Acton - Head of Datacentre Technical Consulting & BCS - DCSG, CBRE - DCS
DataCentres North Conference addresses both the Strategic and Operational issues affecting the region, its growth as well as the challenges facing those involved in operating effective, efficient, secure and resilient datacentres, server and comms rooms.
Professor Ian Bitterlin MD, Critical Facilities Consulting & BCS - DCSG, Council Member
The programme will address: • Datacentre Design • Energy & Sustainability • Direct Liquid Cooling • DCIM • Legislation • SLAs and Governance • Financing • Regional Developments • Standards • Power • Connectivity
Fergus Innes - MD, Ireland France Subsea Cable Steve Bowes-Phipps - Senior Data Centre Consultant, PTS Consulting Emma Fryer - Associate Director, techUK
Social Networking Dinner DataCentres North will once again hold a social networking dinner on the first evening of the event. All tickets include a drink on arrival, 3 course meal and half a bottle of wine. Tickets are priced at £55 + VAT per person. Table of 10 priced at £495 + VAT (which includes a 10% discount)
For the latest information visit www.datacentresnorth.com or contact the DataCentres Team: 01892 518877 or email: email@example.com
Register online now: www.datacentresnorth.com Supported By :
Organised in Association with:
Cloud traffic to represent 95% of total data centre traffic by 2021

Cisco has released the seventh annual Cisco Global Cloud Index (2016-2021). The updated report focuses on data centre virtualisation and cloud computing, which have become fundamental elements in transforming how many business and consumer network services are delivered.

Strong multi-cloud traffic growth projected
The study forecasts global cloud data centre traffic to reach 19.5 zettabytes (ZB) per year by 2021, up from 6.0ZB per year in 2016 (3.3-fold growth, or a 27% compound annual growth rate [CAGR], from 2016 to 2021). Globally, cloud data centre traffic will represent 95% of total data centre traffic by 2021, compared to 88% in 2016.

Improved security and IoT fuel cloud growth
Security innovations, coupled with tangible cloud computing benefits including scalability and economies of scale, play key roles in fuelling the cloud growth projected in the study. Additionally, the growth of Internet of Things (IoT) applications such as smart cars, smart cities, connected health and digital utilities requires scalable computing and storage solutions to accommodate new and expanding data centre demands. By 2021, Cisco expects IoT connections to reach 13.7 billion, up from 5.8 billion in 2016.

Hyperscale data centres doubling
The increasing need for data centre and cloud resources has led to the development of large-scale public cloud facilities known as hyperscale data centres. This year's forecast expects 628 hyperscale data centres globally by 2021, compared to 338 in 2016 – 1.9-fold growth, or a near doubling, over the forecast period.

Cisco, www.cisco.com

Businesses leaving themselves open to cyber vulnerabilities, with demand for IT security staff down by 5%

Demand for new IT security skills has dropped by 5% in the past year (from Q4 2016 to Q4 2017), according to the latest Tech Cities Job Watch report from Experis. The report showed that despite a 24% year-on-year increase in the demand for contractors, this was outweighed by a 10% decrease in demand for the larger market of permanent IT security staff during the same period.

The quarterly report tracks IT jobs and salaries advertised within five technology disciplines (Big Data, cloud, IT security, mobile, and web development) across 10 UK cities. Permanent IT security salaries rose by 4% this year, indicating that despite the drop in demand, businesses are still willing to pay a premium for more specialist security professionals. However, the average salary for a cyber security role (£60,004) remains much lower than that of the likes of a Big Data specialist (£70,945).

Martin Ewings, director of specialist markets, Experis UK & Ireland, commented, "These figures paint a complex picture of the cyber security landscape. While hacks are on the rise, the slowing demand for permanent IT security staff indicates that businesses are focusing on upskilling current employees to ensure that they have the skills needed. The Internet of Things (IoT) has transformed the way that companies across every industry work; and cyber security is now everyone's responsibility – not just the IT department's."

Experis, www.experis.co.uk
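The growth figures in the Cisco forecast above are internally consistent, and easy to verify: a 6.0ZB-to-19.5ZB rise over five years is roughly a 3.3-fold increase at a 27% compound annual growth rate. A quick sketch in Python, using only the figures quoted in the report:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

growth_multiple = 19.5 / 6.0      # 2016 -> 2021 cloud traffic, ZB per year
rate = cagr(6.0, 19.5, years=5)   # five-year forecast window, ~0.266

# ~3.25-fold growth at a ~27% CAGR, matching the report's rounded figures
```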
Security obsession risks GDPR compliance for UK business

Security concerns are twice as likely to drive cloud strategy as even a business' core objectives, according to Calligo. Even regulatory compliance and data privacy – the strategic themes of doing business in 2018 – receive a similarly low ranking. Whereas security is the chief driver behind cloud strategy for 34% of 200 UK IT decision-maker respondents, the business' core objectives, compliance and data privacy are each the top consideration for only 17%. This is despite the imminent implementation date of the European General Data Protection Regulation (GDPR): 25 May 2018.

"Driven by media-fuelled fears of severe fines and reputational damage, IT leaders have overcompensated in their cloud strategies and become almost myopically focused on security," said Julian Box, CEO, Calligo. "This is to the enormous detriment of more strategic aims such as supporting the business' objectives, and vital compliance with the GDPR's data privacy requirements."

"The great irony is that while these organisations fear and mitigate the consequences of a security breach, the consequences of regulatory non-compliance are identical – and yet they are not being defended against," Julian continued. "This probably stems from a mistaken belief within the IT industry that their role in GDPR adherence is centred on data security, leading organisations into compliance complacency and all kinds of non-compliant behaviour. They are effectively erecting walls around data they are not entitled to hold."

Calligo, https://calligo.cloud
63% of C-suite more concerned about costs of a cybersecurity breach than losing customers

For UK senior executives who admit their organisations have suffered at least one significant cybersecurity breach within the past two years, the associated costs of a breach are considered the most important consequence, according to a new study by Centrify. Nearly two-thirds (63%) of respondents in the UK believe investigation, remediation and legal costs are the most important consequence of a breach, followed by disruption to operations (47%) and loss of intellectual property (32%). They showed less concern for impact on brand, including loss of customers (16%) and damage to the company's reputation (11%).

The study of 800 senior-level executives, including CEOs, technical officers and CFOs in the UK and US, also indicates that there is confusion among the C-suite about what constitutes a cybersecurity risk and what needs to be done to prevent it. In the UK, malware is seen as the biggest threat to an organisation's success by 44% of respondents, compared to just 24% who point to default, weak or stolen passwords and 29% who blame privileged user identity attacks. However, according to the report, just 11% admit their breach was due to malware, while almost twice as many put it down to either a privileged user identity attack or stolen or weak passwords (both 21%). As a result, 63% admit that privileged identity and access management would most likely have prevented the breach.

Centrify, www.centrify.com
Leading data centre expert contributes to new parliamentary report on the future of energy

A University of East London (UEL) expert in software and energy distribution has contributed to a new report launched in Parliament, which calls on the government and the public sector to shore up the nation's expertise and lead the way on energy consumption as Britain prepares to leave the EU. Dr Rabih Bashroush joined MPs and experts for the launch of 'Is staying online costing the Earth?', produced by think tank Policy Connect and sponsored by Sony Interactive Entertainment Europe, which invited Dr Bashroush to contribute to the report's section on data centres. The report claims that data centres, networks and connected devices account for around 3.6% of global electricity use, and around 1.4% of global carbon emissions.

Dr Bashroush said, "In 2015 data centres are estimated to have corresponded to around 1% of global energy consumption, and their workload is predicted to triple by 2020, but the amount of energy needed is expected to increase by only around 3%, because of smart energy use measures and cooling equipment."

He continued, "In Northern Ireland, the Department of Finance spent £1.8 million to consolidate and virtualise their public sector data centres, saving about £500,000 a year, and reduced carbon emissions by nearly 637 tonnes.

"So far, most of the focus is on making large data centres more energy efficient. However, our research found that 80% of public sector data centres were small server rooms containing fewer than 25 racks. Hence, going forward, work should focus on helping consolidate these facilities in order to make them more efficient."

Policy Connect, www.policyconnect.org
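The Northern Ireland figures quoted above imply a short payback period for consolidation. A minimal sketch, using simple straight-line payback (ignoring discounting and any ongoing costs, which the report does not break out):

```python
upfront_cost = 1_800_000   # £ spent consolidating and virtualising (report figure)
annual_saving = 500_000    # £ saved per year (report figure)

# Straight-line payback: years until cumulative savings cover the spend
payback_years = upfront_cost / annual_saving   # 3.6 years
```

On these numbers the project pays for itself in under four years, before counting the 637 tonnes of avoided carbon emissions.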
451 Research: Hong Kong multi-tenant data centre market continues to edge out Singapore

451 Research has published multi-tenant data centre market reports on Hong Kong and Singapore, its fifth annual reports covering these key APAC markets. 451 Research predicts that Singapore's colocation and wholesale data centre market will see a CAGR of 8% and reach S$1.42 billion (US$1 billion) in revenue in 2021, up from S$1.06 billion (US$739 million) in 2017. In comparison, Hong Kong's market will grow at a CAGR of 4%, with revenue reaching HK$7.01 billion (US$900 million) in 2021, up from HK$5.8 billion (US$744 million) in 2017.

Hong Kong experienced another solid year of growth at nearly 16%, despite the lack of land available for building, the research finds. Several providers still have room for expansion, but other important players are near or at capacity, and only two plots of land are earmarked for data centre use. Analysts note that the industry will face challenges as it continues to grow, hence the reduced growth rate over the next three years.

"The Hong Kong data centre market continues to see impressive growth, and in doing so has managed to stay ahead of its closest rival, Singapore, for yet another year," said Dan Thompson, senior analyst at 451 Research and one of the report's authors. However, with analysts predicting an 8% CAGR for Singapore over the next few years, Singapore's data centre revenue is expected to surpass Hong Kong's by the end of 2019.

451 Research, www.451research.com
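Compounding the 2017 revenue bases forward at the quoted CAGRs roughly reproduces the 2021 projections. A quick sketch (note that the published CAGRs are rounded, so the results land near, rather than exactly on, 451 Research's figures):

```python
def project(base, rate, years):
    """Compound a base value forward at a fixed annual growth rate."""
    return base * (1 + rate) ** years

sg_2021 = project(1.06, 0.08, years=4)   # S$bn: ~1.44 vs the reported S$1.42bn
hk_2021 = project(5.8, 0.04, years=4)    # HK$bn: ~6.79 vs the reported HK$7.01bn
```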
Complex data landscape is posing new challenges for businesses

A new global data management report, commissioned by Experian, has revealed that organisations are struggling to implement data management strategies to harness the opportunities that today's vastly complex data landscape offers. The annual research, which surveyed 1,000 employees within global organisations, found that 83% of businesses see data as an integral part of forming a business strategy, yet they suspect 30% of their contact and prospect data may be inaccurate. With 'improving the customer experience' called out as a top priority for 2018, the research also reports that 69% believe inaccurate data is undermining their ability to provide this.

So, why are organisations struggling to grasp data management? With digital activity developing at an ever-increasing pace, and data security front of mind for consumers and businesses, 73% believe that it is often difficult to predict when and where the next data challenge will arise. Added to this is the growing expectation from consumers that organisations act responsibly when managing their data, yet just under half (48%) of respondents believe that their customers are fully aware of how they are using their data and trust them to use it responsibly.

For respondents in the UK, GDPR is also proving to be a challenge, with 64% believing that increasing volumes of data make it difficult to meet regulatory requirements. However, reassuringly, 71% of that same UK group believe the GDPR represents an opportunity to refine their data management strategy.

Experian, www.experianplc.com
Work-life balance of UK IT professionals lags behind US workers but equals Australians

The work-life balance of UK IT professionals is lagging behind that of their US counterparts, but matches that of their Australian peers, according to a new study from PagerDuty. The survey of over 800 IT professionals across the UK, the US and Australia found that more than twice as many US respondents (36%) said their work-life balance was excellent, versus just 15% of IT professionals in the UK and 16% in Australia.

UK IT pros say they are more capable of managing stress than their counterparts in the US and Australia, however. More than half (52%) of IT pros in the UK indicated that a fair or poor work-life balance affected their ability to manage stress, versus 68% of survey respondents in the US and 64% of respondents in Australia.

Steve Barrett, country manager for the UK and Ireland and head of EMEA, PagerDuty, said, "This always-on, always-available world has become the norm for IT professionals around the globe. But it's taking a toll on the employees who have to drop everything to address problems.

"Without a healthy work-life balance, organisations will have employees who are either unable to perform to the best of their ability or choose to walk away. It's time for companies to take more responsibility for the welfare of their technical and operational teams to help workers avoid burn-out."

PagerDuty, www.pagerduty.com
UK businesses could lose £205 million in revenue to IT downtime

New research from IT server relocation specialist Technimove reveals the real cost to UK businesses of poorly planned server migrations. The data reveals that just a 1% loss of service from such migrations could cost UK businesses as much as £205 million per year, while US-based businesses might be losing out on as much as £7 billion per year. If that 1% of lost service were applied to the big tech giants:

• Apple could lose £1.6 billion in revenue per year
• Amazon might see an annual loss of £1 billion
• Google could risk up to £660 million in lost revenue

Businesses large and small rely on servers to store data and run their operations. Just 1% of server downtime could mean the website, customer databases and e-commerce functions are down for an entire day, resulting in a salary loss per employee, a dip in revenue, frustrated customers and increased stress levels for employees, with a corresponding decline in productivity.

Technimove CEO, Ochea Ikpa, said, "Reducing downtime is a crucial goal for any business server migration, and executing on time and to budget is essential to the success of the project. While 1% may seem a relatively small percentage of business time, over a year this works out at more than three whole days of lost service due to server downtime."

Technimove, www.technimove.com
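The arithmetic behind these figures is straightforward: 1% downtime is 1% of a year, and each revenue loss is 1% of a company's annual revenue. A minimal sketch (the £160 billion annual revenue below is a hypothetical round number chosen to illustrate how the article's Apple estimate could be derived, not a figure from the research):

```python
DOWNTIME_FRACTION = 0.01   # 1% loss of service, as in the Technimove research

# 1% of a year: "more than three whole days of lost service"
days_lost = DOWNTIME_FRACTION * 365   # 3.65 days

# Hypothetical annual revenue in £, chosen so the result matches the article
assumed_annual_revenue_gbp = 160e9
revenue_at_risk = DOWNTIME_FRACTION * assumed_annual_revenue_gbp   # ~£1.6bn
```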
www.varese-secure.co.uk firstname.lastname@example.org Varese-secure Ltd, Lancaster Court, 8 Barnes Wallis Road, Fareham PO15 5TU. 01489 854131
on the cover
Data: How do you delete yours? Is your business properly disposing of old data? If not, your organisation could be at risk. DCN talks with Varese-secure Ltd to better understand why data destruction should be a priority and more importantly, how it should be done.
Very few items carry such high liability for their owners as tape and hard disk storage media. Whether it is company secrets, personal information, medical records, credit card transactions or financial data, information is a hot commodity, and the vast majority of it is stored on some type of magnetic media. Information critical to the operation of our infrastructure, such as power, traffic and phone systems, could be compromised by the simple theft of an inoperable hard drive. Stolen personal information for sale is already a billion-dollar industry. The loss of a hard drive could result in over £7 million in fines under a variety of privacy and ethics laws, particularly now GDPR is coming into force. Under GDPR, companies must provide a 'reasonable' approach to disposing of end-of-lifecycle media. But this raises the question: what is reasonable?
How would you do it?
The HD-2XE automatic degausser from Varese-secure Ltd is ideal for office environments, erasing data in less than 60 seconds.
If your job is to permanently dispose of data stored on tape or hard drives, how would you do it? Despite the rapid technological advances in disk and tape storage, many users still rely on outdated methods of data elimination, and urban myths and legends about data destruction abound. File deletion, disk reformatting, overwriting/Secure Erase and physical damage are the mainstays held over from the early days of the PC. Many see these methods as a secure way of destroying high-liability or sensitive data, but with the exception of overwriting and Secure Erase, this isn't the case. The first two, deletion and reformatting, may keep casual interlopers from viewing your data, but a moderately experienced hacker can use free software to access it easily. Overwriting and Secure Erase are better options, but not without built-in flaws.
Overwriting and Secure Erase Overwriting and Secure Erase are software based. Is there a line of code in the software that moves the supposed ‘overwritten’ information somewhere else? How much time and wealth is spent on battling software viruses? And could those viruses play a part in an over-write? Could there be back-door codes written into the hard drives for government forensics? The disk drive also has to be in perfect working order for overwriting or Secure Erase to
take place. Is the drive capable of being overwritten? These are the questions that must be asked and answered, and many of them can't be. Overwriting and Secure Erase work only on fully functioning drives, and only if one has the capability and is willing to spend the hours the process requires. Many businesses and government agencies still drill holes in hard drives or damage them with a hammer before disposing of them. In addition to being a hazard to the employees tasked with wielding the drill or hammer, the effectiveness of this approach in eliminating data is marginal in most cases.
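To see why whole-drive overwriting takes hours, consider the raw write time alone: every byte of the drive must be rewritten on each pass. A rough estimate in Python (the 4TB capacity, 150MB/s sustained write speed and three passes are illustrative assumptions, not figures from the article or any standard):

```python
def overwrite_hours(capacity_tb, write_mb_per_s, passes=3):
    """Rough wall-clock time to overwrite an entire drive, ignoring verification."""
    total_mb = capacity_tb * 1_000_000            # decimal TB -> MB
    seconds = passes * total_mb / write_mb_per_s  # every byte written on every pass
    return seconds / 3600

hours = overwrite_hours(4, 150)   # ~22 hours for a 4TB drive at 150MB/s
```

And that assumes the drive sustains its full write speed throughout, with no bad sectors; in practice the session runs longer still.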
“The loss of a hard drive could cause over £7 million in fines for a variety of privacy and ethics laws.”
Close-up of a hard drive sector using Magnetic Force Microscopy before (left) and after (right) degaussing
Incineration and degaussing: The only guarantees
Risk vs cost of preventing a data breach.
There are two fool-proof ways to eliminate hard disk and tape data: incineration and degaussing. Incineration subjects magnetic media to over 2,000°C, which returns the magnetic field to zero and vaporises much of the drive, leaving only raw metals. However, in the United States, heavy incineration for metals and smelting operations has almost ceased for environmental reasons.

The one true solution is to degauss. A degausser applies a magnetic force far greater than that of the read/write heads to eliminate magnetic data. The degausser is not a new tool; it has evolved and kept pace with the technological advances of magnetic media, and was used mostly in the early days to recycle expensive tape for re-use. The only non-destructive method for erasing 'Top Secret' data is degaussing. Because of this, it has also withstood the test of time forensically. A degausser with sufficient strength and field orientation will eliminate all of the magnetic domains
of the drive or tape, resulting in a disk platter void of even the deepest magnetic patterns, the servo tracks. However, not all degaussers are created equal: strength should be judged not on peak power, but on the power maintained consistently within the degaussing chamber. The orientation of the drive or tape in relation to the path of the magnetic erasure field is also important. A drive passing inside a directed magnetic field will see better erasure results than a drive passing over a magnetic field. The goal is to remove all of the data, and as much of the underlying structure as possible, from the hard drive or tape, not just part of it.

The one drawback to degaussing hard drives and some of the newer data tapes is that the drive or tape is not reusable. Even so, compared with hours-long overwrite or Secure Erase sessions to re-use older drives, degaussing and replacing with a new drive is still far more cost effective.

To conclude, the only sure way to avoid the liability of stored data, other than environmentally unfriendly incineration, is to eliminate it permanently by degaussing. All other forms of sanitisation have serious security drawbacks with today's technology.

Varese-secure Ltd supplies degaussers and other data destruction equipment across the UK and EMEA region. Whether you'd like to buy, rent, or simply have a query, get in touch. Varese-secure Ltd, +44 (0)1489 854131 varese-secure.co.uk
centre of attention
Smart Sustainability With the rise of smart cities, it’s time to think about ensuring that they’re sustainably run. Smart cities require smarter energy. Dennis O’Sullivan, EMEA data centre segment manager at Eaton, discusses one of the ways this could be achieved, through the role of the data centre and battery storage.
The future of our planet lies in smart cities. Given their environmental impact and the increasing global population, the way most cities are currently run is just not sustainable. The United Nations predicts that by 2050 around 66% of the world's population will be concentrated in urban areas, compared with 54% in 2014. This increase in urbanisation will generate a range of new issues which can only be solved by implementing technology and digital systems to create smart, sustainable cities.
The shift to smart cities This is already taking place around the world. Brand new cities are being built from the ground-up as countries look to future-proof their urban centres. As one example, the Internet of Things is being used in Spain to power public services such as parking, street lights, rubbish collection and transport, as well as tourism. Large waste bins are all equipped with sensors to report when they have reached capacity, enabling better waste and resource management. This technology is also being put to
use in car parks to handle congestion more efficiently, provide dynamic meter pricing and combat air pollution. It's a compelling way to start transitioning a city to a more connected, efficient way of working. One of the most ambitious smart city experiments to date is taking place in Asia. Songdo International Business District in South Korea is a $35 billion smart city with a potential population of two million people. Nearly half of the city is green space. It has integrated sensor networks, and has built in a revolutionary new underground system that
converts waste taken directly from the kitchen into clean energy. It’s been the subject of controversy as its plans far outweighed what’s been realistically achievable, but it’s created another benchmark for innovative urban development. While these increasingly connected cities enable new levels of efficiency, they will require a huge amount of power. This creates a fundamental issue – how will we be able to effectively power our smart cities of the future?
Plugging the smart city energy gap Every smartphone talking to a heating system, every overflowing bin sending data to a council, or every tracked journey on public transport is monitored, and then stored in a data centre. The world’s data centres currently consume approximately 3% of the global electricity supply – and this is set to increase two-fold every four years. Given the increasing demands on data centres and their power supply – which is only set to increase as smart cities evolve – both the energy sector and data centres will need to consider a solution. Energy use is one of the largest single cost items when managing and running a data centre. Power supply is often overbooked and under-used. Yet, if this were managed with industrial-grade storage, it would not only guarantee a consistent power supply for the data centre, but would also open up new possibilities for the energy sector. Unused electricity could be stored to be used as required in the data centre, as well as creating an opportunity to feed it back into the grid when demand is high. This flexibility would ensure fluctuations in supply and demand could be effectively managed, creating a more consistent and resilient national energy supply which would enable smart cities to function as planned.
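As a rough, illustrative projection – our arithmetic, not the article’s – the “doubling every four years” claim can be run forward from the ~3% starting figure, assuming all other electricity demand stays flat:

```python
# Illustrative only: projects "~3% of global electricity, doubling every
# four years" forward, holding all other electricity demand constant.

def projected_share(base_share: float, years: float, doubling_period: float = 4) -> float:
    """Share of global electricity after `years` of doubling growth."""
    return base_share * 2 ** (years / doubling_period)

for years in (4, 8, 12):
    print(f"In {years} years: ~{projected_share(0.03, years):.0%}")
```

On those assumptions the share reaches roughly 6% in four years and 12% in eight – a quick way to see why the trend is treated as unsustainable without a solution like battery storage.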
Ambitious experiment: Songdo International Business District in South Korea is a $35 billion smart city with a potential population of 2 million people.
Battery storage would shift the data centre away from being a costly, energy-guzzling resource, to instead become a powerful energy hub that can power nearby businesses, heat offices, and keep the lights on in local communities. The falling cost of battery storage creates a timely opportunity for data centres to invest in this technology and become the power sources of the future. However, data centre operators must be careful to invest in technology which can not only guarantee the grid and energy cycles are working efficiently, but also ensure that it is scalable for the future as energy demands evolve.
Clean energy sources of the future As society strives to lighten its carbon footprint, clean and carbon-neutral energy sources will become the obvious option to power sustainable smart cities. However, the electricity generated from solar and wind is typically hard to store, and is often generated at times when it may not be needed. Investing in industrial-grade storage will allow data centres to rely on these renewable sources, storing
“The world’s data centres currently consume approximately 3% of the global electricity supply – and this is set to increase twofold every four years.”
excess energy generated when it is not needed, for use later. One country currently paving the way for this type of change is Sweden. It has created revolutionary new green data centres where the excess energy generated is used to heat residential properties. That means a data centre with a 10MW capacity can heat 20,000 flats. This huge step forward in sustainable energy management is being welcomed by major brands. Companies like high street retailer H&M and telecoms giant Ericsson are amongst the many organisations moving their data centres to places like Stockholm Data Parks as a way to combat climate change and create more sustainable businesses. Learning lessons from current deployments, understanding the need for efficient energy storage, and keeping citizens at the heart of whatever is delivered will take smart cities from being a hypothetical concept to a realistic one. However, transforming large data centres with a capacity to store energy will be a key step if we are to create sustainable and energy-efficient urban centres to manage future urbanisation. Eaton, eaton.eu
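The Swedish figure quoted above can be sanity-checked with a little arithmetic (a rough sketch that assumes near-total heat recovery and continuous full-load operation):

```python
# Rough check of the figure above: a 10MW data centre heating 20,000 flats
# implies about 500W of continuously recovered heat per flat.
capacity_w = 10_000_000   # 10MW of IT load, assumed fully recoverable as heat
flats = 20_000
watts_per_flat = capacity_w / flats
print(watts_per_flat)  # 500.0
```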
meet me room
Stefan Hoelzl – Exasol Stefan Hoelzl, VP of international sales at Exasol, a specialist in in-memory analytic database technology, discusses childhood dreams, life lessons and, of course, gives us his industry insight. Looking back on your career so far, is there anything you might have done differently? With the benefit of hindsight, there are a lot of things that I might have done differently in my career. One decision I might have liked to have changed in the past was the decision to not move to America when the opportunity arose. I used to work at Symantec, and I had an opportunity to move to Silicon Valley and work there. Whilst it was a tempting proposition, I declined the opportunity as I didn’t want to uproot my family at the time. However, I wouldn’t be where I am now if I had made different decisions and I wouldn’t have the experience I have now. So I’m not upset with how things have turned out. What is the main motivation in the work that you do? The main motivation I have in my job is to see customers happy with our product and the services we provide them. Exasol’s database really does have the ability to not only transform a business’ analytics, but it can also help to change people’s lives by making their analytics easier, freeing them up with more time to be more productive elsewhere in the business. Something else that motivates me is helping my peers and other Exasol employees to become better. I enjoy watching my team learn, grow and succeed at work, and being able to help them not only motivates me, but it also helps to make our customers happy.
A man of many talents: In 2010 Stefan made the finals of the biggest horse reining show in the US
Hawaii is Stefan’s go-to holiday destination – and we don’t blame him

How would you encourage a school leaver to get involved in your industry? What are their options? The data analytics industry is incredibly exciting and, with data becoming more important to businesses than ever before, there are many opportunities for school leavers to develop a meaningful career within this sector. One option for them would be to join Exasol, as we are currently hiring. Exasol is the first company I have joined where school leavers are not just hired to be technical sales support, and there are plenty of roles throughout the business available at a junior level. The most important thing for school leavers to bear in mind, though, is to find the right company, big or small, that fits their character and that will be willing to help them learn on the job. What are your company’s aims for the next 12 months? Exasol is a high-growth organisation with ambitious plans and it is currently developing its business internationally. We currently have offices in the UK, Germany and France, and we are now looking to open offices in the United States as well within the next year. In addition to the expansion into the US, Exasol plans to consolidate its global growth by expanding in the regions in which it is currently located, through its partner network. By increasing the recruitment of new partners, this will help our business to extend its current global reach.

“You are always part of a company’s future, rather than its past, and so you always have to be ready to prove yourself.”
What is the toughest lesson you have ever been taught in your career? The toughest lesson I have ever learnt is that the technology industry is fast-paced and it can be fickle at times. Your achievements from yesterday do not count for your achievements of tomorrow. If you’re young and you do great for eight or nine years, then that’s fantastic. However, things can change within companies – management might leave, the business may be acquired by a rival – and circumstances can change. Then everything you’ve done in the past is not particularly relevant and you have to continue being fantastic to show a new boss or company what you can do. You are always part of a company’s future, rather than its past, and so you always have to be ready to prove yourself. What are your hobbies/interests outside work? I have several hobbies outside of work, but my main passion is the reining horse industry. I was first introduced to it years ago – I actually started riding horses when I was a child – and in 2010, I qualified for the biggest show in the horse reining industry, which is held in Oklahoma City in the United States. I actually made the finals that year! I’m still involved in the industry to this day, as I help with breeding, riding and training where I can. I currently have five horses as well so they certainly keep me busy. Where is your favourite holiday destination and why? My favourite destination has to be Hawaii because it has mountains, jungle, beaches, warm waters
– the list goes on. There may be somewhere better to find each of those things separately, but there’s nowhere else where you can enjoy all these things so close together. For example, where in the world can you drive 3,500m up a mountain for picturesque scenery, and then drive back down and go surfing in the warm water? In general, I’m not one for lying on a beach all day; I prefer to be doing things and exploring, so Hawaii is perfect for being active and checking out the local area. That’s why I’ve been back to visit several times. Can you remember what job you wanted when you were a child? Yes! When I was younger, I always dreamed of becoming a documentary producer, especially for shows that portray animals in the wild. I was probably seven or eight years old when I decided that was what I wanted to do as a career, because there was a German TV programme showing animals in the wild, including lions, tigers, elephants and giraffes. Seeing all these animals in such exotic locations made me want to experience seeing them in real life. What is the best piece of advice you have ever been given? The best piece of advice I have ever been given was from my grandma. She taught me the true importance of family. It really doesn’t matter how much money you have or what hobbies you have, because at the end of the day your family is the only thing that you have left when things go wrong or when you go through tough times. So it is important to take care of it and look after it. Of course, it is good to be successful and have money and a nice life, but these are just gifts, so they must be cherished and you should understand how lucky you are to have them, rather than take them for granted.
Show Time Last week DCN was at ExCeL London for Data Centre World 2018, as a media partner for the show’s tenth consecutive year. As always, the event was packed full of leading data centre and technology providers, showcasing new products and sharing their knowledge with the data-verse.
Amidst the buzz of busy stands, flashing arcade games, caricaturists, and popcorn carts, DCN sat down with a few companies to find out what they were up to.
What’s new? Infrastructure specialist Sudlows was at the show highlighting the newest addition to its data centre testing and commissioning toolset,
the A:LIST load banks, the core of the company’s Advanced Load Integrated Systems Test. The company explains that these specialised load banks have been developed specifically for testing critical data centre environments and aim to deliver an accurate replication of the loads which will be installed, giving all concerned critical insight before installation is complete. Sudlows says that the A:LIST system has been
developed to accurately represent modern IT, giving a closer representation of the heat load of a server in both air flow and power. Using a load bank matching the planned load, it provides data centre operators and technicians with the peace of mind and planning insight needed to run plant assets efficiently and effectively. The A:LIST system’s abilities have been recognised in recent times, picking up a highly
commended honour at the 2017 UK IT Industry Awards in the ‘Infrastructure Innovation of the Year’ category. In a category featuring organisations such as BT, Rolls Royce and IBM, the Sudlows A:LIST was given special recognition by the judging panel alongside the category winners Romonet. Also in 2017, A:LIST won the ‘Data Centre Facilities Management Product of the Year’ at the Data Centre Solutions Awards. On the ProLabs stand, DCN found out about its latest offering – the EON-121, a low-power, plug-and-play 100G DWDM transponder. The transponder converts a 100G QSFP28 client interface to coherent CFP for metro, long haul and DWDM applications. The client port supports all transceiver types – including DAC (Direct Attached Copper) cables for ultimate convenience and cost savings when connecting to host equipment. The line interface supports coherent C-Band tuneable DWDM CFPs with 50GHz spacing, allowing for metro and long-haul connectivity greater than 1,000km. Each EON-121 unit is a half width 1U module, meaning that two EON-121 modules can fit into a 1U EON-2 chassis system providing 200G of connectivity on two DWDM wavelengths. This transponder is set to become part of a full range of products as the concept develops and orders for the EON-121 can be placed now.
Rittal’s ‘Data Centre in a Box’ range
Think outside the box Or firmly inside it in Rittal’s case. ‘Data Centre in a Box’ is quite literally what it says on the tin. Designed to replicate all of the key data centre capabilities, but on a much smaller scale, the Data Centre in a Box (DCiB) concept enables equipment to be deployed in non-traditional data centre environments. The DCiB’s standardised infrastructure supports simple operation for IT staff, whilst its TSIT rack platform ensures speed of deployment for active and passive equipment. The rack itself is 19 inches, making it the ideal basis for all IT network and server rack technology requirements.
You won’t find any complex installation here, as this can be accomplished tool-free, with system accessory mounting using new snap-in technology. The side panel is also divided with quick-release fasteners, integral lock and internal latch. Intelligent cable management is utilised, with a multi-functional roof for side cable entry, for maximum user-friendliness and free air flow for active components. The compact nature of the design facilitates easy deployment and redeployment, and demand-orientated climate control ensures efficiency and low PUE. A fire
detection and suppression system comes as an optional extra, to protect critical infrastructure, and system monitoring enables instant messages to be sent, forewarning of upcoming issues. An optional back-up power supply also provides resilience in the event of poorly conditioned power/total failure. All round, Rittal’s DCiB is a complete solution, encompassing damage control and a user-friendly design, to ensure peace of mind from point of purchase, through installation and ongoing maintenance. With three different options in the range to choose from, you are bound to find something to suit your requirements.
A real asset DCN also talked real-time tech with RF Code, a specialist in this field. Focused on taking the manual approach out of asset stock assessment and management, the company’s products deliver real-time visibility into the
location and performance of data centre assets, allowing operators to run their plants more efficiently and effectively. Using ‘where are they and how are they’ information, RF Code argues companies can gain all sorts of vital efficiencies for in-house assets or the ability to deliver USPs over the competition when providing data services for others. The company was also highlighting its move a couple of years ago to a subscription model, meaning RF Code is able to provide complete and on-going access to its systems at a wide range of support levels, matching the needs of each company and building bespoke models for each contract. The RF Code team explained that selling their services begins with looking at how much it costs a company to manage its assets manually, coupled with how efficient it is, and then comparing that with what RF Code can deliver. RF Code challenges companies to look at what they are doing critically and examine what can be done better.
Entering DCW day two
Recent improvements to the system include a more intuitive and customisable dashboard, to see exactly what is going on across all assets. Updated sensors will also be able to perform tasks such as guiding technicians quickly and easily to exactly where an asset is located. When was the last time you checked your assets? And how do you know if things have changed since the last time you did an audit? In today’s changing data centre environments, critical assets can move (and get misplaced) several times during their lifecycle, and outdated manual processes for asset tracking can’t keep up. Peter Vancorenland, CTO for the company, further underlined some of the abilities of the system: “Because we collect data every 30 seconds, we can create more efficient systems. Data centres can run closer to their capacity because operators can have hands-on, real-time insight into what is going on with their assets and make adjustments where needed.” The company is focused on data centre delivery currently, but it’s pretty obvious this type of information gathering in real time could be spread out into other types of assets and buildings themselves, providing facilities managers with real insight into what is happening and where.
Colourful characters Lured in by seemingly all the colours of the rainbow, DCN popped over to the Geist stand to see what was on show. It turned out to be Geist’s Upgradeable power strips, which are designed to give data centre managers the flexibility to install the intelligence they require, with the option to upgrade technology as needs evolve. The aesthetically pleasing sockets we spotted are colour-coded for each circuit and are
lockable simply by pressing down to both secure and release. The power strips provide hot-swappable intelligence, allowing you to add, remove or switch Interchangeable Monitoring Devices without interrupting power to critical servers. Fault-tolerant daisy chaining simplifies intelligent PDU connectivity and ensures data is reported even with a break in the chain. Reading power consumption data has also never been easier. They say there’s an app for everything and Geist has an app for just that. The Geist mobile app provides full visibility of the PDU and its power consumption data, down to the phase and circuit levels, by simply scanning your phone over the PDU. Using Visible Light Communication technology, Geist’s Upgradeable line of PDUs optically transmit information to your device providing easy, secure, and instant access to data at the cabinet or rack. Needless to say, we were impressed.
What lies ahead
Upgradeable PDUs showcased on the Geist stand
DCN also had a very interesting chat with Jack Pouchet, vice president of business development for global solutions at Vertiv, one of the industry’s most vocal figures. First up: the often positive, but little discussed, economic impact of data centres on their respective locations. “Jobs created by data centres are often high skill and high-wage, bringing obvious benefits to a local economy,” argues Jack. But the positive impact does not end there, as the trickle-down effect on related services looks to pull in other specialised tech trades such as cooling or less specialised areas such as catering. Trends-wise, Jack believes that smaller edge-located data centres will definitely be a big feature of future development. As well as delivering the power closer to where it is needed, smaller stations of course also have the advantage of delivering efficiency, potentially in some cases being self-sustaining
with improved battery technology performance and other options, such as wind and solar, all combining to deliver a zero-impact energy footprint. This could be the case for stand-alone operations or those located within the building of the end-user company. Jack says that the need to deliver connectivity for the IoT (domestic and commercial) will also drive more edge compute, but to do it, more fibre will need to go into localised infrastructure. For larger concerns, grid-connected energy storage is also becoming not only more viable, but potentially profitable for big companies, as stored energy can be pushed back into the grid, particularly at peak load times when energy is most needed and most valuable. With many data centres being over-specified down the years due to fears over reliability, the potential is there to dip into the overhead and finally make it do something more useful than provide peace of mind, which, as efficiency and reliability increase, is less necessary. Jack adds, “It is cost that is driving the development of the hyperscale data centre; you build enough of these, efficiencies arise physically, but you get better at doing it, so design becomes streamlined.” However, big data scandals, like Facebook and Cambridge Analytica, will have an effect, says Jack, with companies perhaps being put off putting all their eggs in one big basket, effectively controlled by someone else. A more mixed model where cloud-only becomes less attractive may well emerge. Jack also had some thoughts on the impact of GDPR, “I would go long on storage, people will need and want to pull some of that back in-house, as colocation and cloud might not give them the control they want. We recently had a request for a large data centre build that was half storage.”
APC by Schneider is currently making some real noise in the industry
The power of three When we met up with Schneider Electric, it had a triple whammy of announcements. First off, we found out that IP House, a London-based data centre startup, has selected Schneider’s EcoStruxure IT, a next-gen cloud-based Data Centre Infrastructure Management (DCIM) platform, to provide 24/7 monitoring of its ISO-accredited facility. IP House’s clients depend on both uptime and 24/7 connectivity to business-critical applications hosted within the data centre. Therefore having the ability to proactively monitor all elements of the infrastructure is crucial. EcoStruxure IT is the industry’s first vendor neutral Data Centre Management as a Service (DMaaS) architecture, purpose-built for the hybrid IT and data centre environments. It provides global visibility from anywhere, at any time on any device and data-driven insights into critical IT assets, which
helps customers mitigate risk and reduce downtime. In the case of IP House, it will reassure its customers they are being provided with a secure, competitive and resilient colocation service, that safeguards them against downtime. The second announcement was an interesting collaboration between Schneider and StorMagic, which has now launched a ‘Branch in a Box’ integrated micro data centre solution, designed specifically for hyperconverged and edge computing environments. StorMagic’s ready-to-deploy bundle includes hardware from APC by Schneider and Dell/ EMC, with the virtualisation software needed to power, process and store mission-critical information. ‘Branch in a Box’ enables organisations to adopt a simple and highly energy efficient IT solution, which is optimised and ready for rapid deployment on-site. Its ‘plug and play’, pre-integrated architecture creates a reliable and robust environment to leverage the best of on-premises and multi-cloud infrastructures. And that isn’t all. It’s been all systems go over at Schneider HQ as it is now expanding its use of Li-Ion technology for its single-phase Smart-UPS portfolio with APC’s new line of Li-Ion Battery UPS. With Li-Ion embedded technology, Smart-UPS On-Line offers several maintenance and cost saving benefits, including a longer lifespan, lower weight and size, a smaller footprint, lower maintenance and TCO, as well as improved safety, better performance and remote management capabilities. The solution can also connect to Schneider’s aforementioned EcoStruxure IT platform, allowing customers to leverage data driven insights about the health and status of their UPS devices to simplify maintenance and optimise performance.
Industry innovations Engineering services-led principal contractor JCA was making some noise at DCW, and rightly so, as it has just delivered one of the UK’s most advanced data centres for its client, Kao Data. Offering flexible configuration and environmental performance credentials, JCA’s design and construction of Kao Data London One followed the innovative principles of the Open Compute Project, making it one of the first carrier-neutral wholesale data centres to do so. Officially opened last month, Kao Data London One is the first of four planned data centres that will comprise the Kao Data Centre Campus on the 36-acre Kao Park development, located in Harlow on the London-Stansted-Cambridge technology corridor. Each building in the £200 million campus will be split into four halls, totalling around 150,000 sq ft of net technical space. In addition, each technology suite will be capable of supporting a 2,200kW IT load, representing a total technical load of 8.8MW per data centre. Rack densities up to 20kW and beyond can be accommodated within the data hall, which is designed around the principle of hot aisle/rack exhaust air stream segregation with flooded style supply air distribution. Kao Data London One offers the latest incarnation of indirect evaporative cooling systems. These are so efficient that there is no requirement for any form of mechanical cooling, which assists towards a total facility PUE of 1.20 even at part load. JCA designed Kao Data London One to deliver market leading efficiency from the outset, with the innovative use of technical infrastructure and building engineering expertise to provide Kao Data with a campus that provides the highest
standard of data resilience, operational sustainability and connectivity for national and international customers, as well as environmental performance. As a result, Kao Data London One achieved BREEAM Excellent Design Certification, which we think is more than well deserved.
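The headline numbers above hang together arithmetically; as a quick sketch (using only the figures quoted in the article):

```python
# Using only the figures quoted above: four suites at 2,200kW each give the
# stated 8.8MW of IT load per building, and a PUE of 1.20 means total
# facility power is 1.20x the IT load.
suites = 4
kw_per_suite = 2200
pue = 1.20

it_load_kw = suites * kw_per_suite        # 8,800kW = 8.8MW
facility_kw = it_load_kw * pue            # total draw at full IT load
overhead_kw = facility_kw - it_load_kw    # cooling, distribution losses, etc.
print(it_load_kw, round(facility_kw), round(overhead_kw))  # 8800 10560 1760
```

In other words, a PUE of 1.20 at full load would leave roughly 1.76MW per building for everything that is not IT equipment.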
Better together As with most things, two heads are better than one and that was certainly the case for the latest venture between Cloudian and Digital Alpha, which has seen Cloudian secure a funding commitment of $125 million to power enterprise object storage growth. The joint venture includes a utility financing facility of up to $100 million from Digital Alpha to enable flexible procurement options to accommodate customers’ rapidly growing storage environments. It also includes a $25 million equity commitment to support expansion of Cloudian’s sales, marketing, engineering, and customer support organisations.
The $100 million financing facility will enable consumption-model procurement of Cloudian products. Today, IT groups are under pressure to deliver services on demand, free of up-front capital outlays. While the public cloud is right for some of these workloads, others are best kept on premises. Through this partnership, customers can take advantage of an on-premises solution that provides the data sovereignty, performance and control of enterprise storage together with the economics of a pay-per-use model. Digital Alpha will also support the development of a partnership between Cloudian and Cisco Systems, covering relevant data centre architectures. Cloudian’s data management features, integrated cloud-storage management and broad interoperability make it a uniquely differentiated solution. In 2017, the company doubled its installed base to more than 200 customers. This joint venture will capitalise on the dramatic growth of object storage in 2018, which is driven by
the convergence of massive data growth, the emergence of the Amazon S3 API as a de facto industry standard and accelerating artificial intelligence/machine learning and IoT use cases. In addition to this announcement, DCN had a very interesting chat with Cloudian’s vice president, Neil Stobart, regarding diversity within the data centre and technology industry. In particular, why there is such a drastic gender split among these disciplines and why it’s important that we proactively encourage more women to enter into the field. Watch this space for an in-depth article on the topic.
Kao Data London One
March 2018 | 23
software & applications
Be ‘Appy Dirk Marichal, vice president EMEA & India at Avi Networks, explains why the modern digital business needs to become more application-centric in its approach to IT.
Predicting the future is an inexact science, but when it comes to enterprise IT, the last few years have seen a clear trend away from obsessing over infrastructure, to focusing on the needs of applications. This is hardly surprising, given that applications are now widely recognised as the lifeblood of the modern digital business, making it a trend that’s bound to continue. There is, however, still much that needs to be done by both the IT industry and its customers if the enterprise is to become truly application-centric.
Getting there slowly On the positive side, a lot of the groundwork has already been done to free enterprise IT from its infrastructure shackles. The majority of companies, for example, will have long since switched from physical to virtual platforms, with many now also embracing private and public cloud solutions where DevOps teams may know little, or nothing at all, about the supporting infrastructure they’re tasked to use. As a result, hybrid infrastructures, spanning on-premise and private/
public cloud platforms, have become commonplace, but these can be something of a double-edged sword. On one side, enterprises gain the ability to choose the infrastructure needed to power their applications based on unique services, price, location, compliance, security, speed and so on, empowering them to make decisions based on the demands of the application, instead of being limited by the offering of a single infrastructure environment. On the other side, hybrid environments introduce additional complexity, as different infrastructures (e.g. data centre and public cloud) rarely fit together seamlessly. As such, IT teams can still be bogged down by the need to consider the detailed requirements of their infrastructure as part of application provisioning, scaling and management processes. More than that, individual differences still dictate what can be done. This leads to multiple implementations of the same application, with associated functionality, compatibility and management issues and little opportunity when it comes to cross-platform scaling and portability.
Add legacy applications and platforms to the mix, and all too often the end result is a hodgepodge of application silos, each with its own specialist team still very much focused on infrastructure first and applications second – the exact opposite of application-centricity.
Abstraction and application-centricity Key to becoming application-centric is abstraction – which is effectively managing out the complexity of a technology to make it simple for the user, as with electricity, for example. End users don’t have to know or understand how electricity is generated, stored or delivered, just that if they flip a switch, the lights turn on and power becomes available at the outlets. True application-centricity requires the same kind of abstraction, to enable DevOps teams to treat infrastructure as a commodity, just like electricity. This will empower them to develop, deploy and manage applications, without having to even consider the availability or capabilities of the supporting platforms, services and networks. They should be
“Applications are now widely recognised as the lifeblood of the modern digital business.”
able to treat these all as a single commodity and assume that the infrastructure their applications require will be available whenever and wherever it’s needed. More specifically, in an application-centric world, IT teams should be able to take an intent- or outcome-based approach to application development, deployment and management. If an application needs to meet a certain level of security, for example, they should be able to assume that the necessary services, like a Web Application Firewall (WAF) or performance management tools, will be automatically delivered to achieve this outcome. In an application-centric world, DevOps teams should be able to code, deploy and manage applications based on outcome. Moreover, they should be able to do so in a consistent manner, rather than have to deal with application snowflakes, each needing different infrastructure-specific processes to deliver necessary services like load balancing, WAF and performance monitoring.
Delivering application-centric IT
Of course, meeting this ideal is fine in theory; putting it into practice is harder. Automation from application to infrastructure is most readily available from a single provider, like Amazon’s AWS, leaving enterprises committed to hybrid and multi-cloud initiatives to seek out platforms and middleware that enable similar levels of automation and infrastructure abstraction. It’s not easy, but here are some simple characteristics to help identify application-centric services able to abstract away infrastructure complexity:
• Multi-cloud, software-defined services
Hardware is, by default, infrastructure-centric and can’t come to the cloud. Look for software solutions agnostic to data centres and clouds, as the same solution delivered across all your environments ensures consistency, simplicity and automation for your applications.
• Centralised controller
Enterprises need a single pane of glass to provide services to any application across all environments. Segmenting by infrastructure silo is a clear sign that infrastructure, not applications, is driving the business.
• APIs and integrated automation
Application services need to communicate easily with applications and infrastructure. Look for solutions that have robust API technologies and that integrate with your tech stack, so they can command the automation of the underlying infrastructure and other services.
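The outcome-based approach described above can be sketched in a few lines. Everything here is a hypothetical illustration, not any vendor's API: an application declares the outcomes (intents) it needs, and a resolver maps them to concrete services independently of the underlying infrastructure.

```python
# Minimal sketch of intent-based service resolution (hypothetical names,
# not any specific vendor's API). The application declares outcomes; a
# resolver decides which concrete services to attach, independent of the
# underlying cloud or data centre.

INTENT_TO_SERVICES = {
    "secure": ["web_application_firewall", "tls_termination"],
    "observable": ["performance_monitoring", "request_tracing"],
    "resilient": ["load_balancing", "health_checks"],
}

def resolve_services(app_spec):
    """Return the concrete services implied by an app's declared intents."""
    services = []
    for intent in app_spec.get("intents", []):
        services.extend(INTENT_TO_SERVICES.get(intent, []))
    return sorted(set(services))

app = {"name": "checkout", "intents": ["secure", "resilient"]}
print(resolve_services(app))
```

The point of the sketch is the separation of concerns: the application never names a load balancer or a WAF product, so the same spec can be deployed unchanged across data centre and cloud environments.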
Conclusion
Tapping into the power of multiple clouds and data centres can lead to amazing efficiencies once the complexity of infrastructure is abstracted away. However, this goal can only be realised by platforms and middleware able to intuitively bridge the gap between applications and the underlying infrastructure. The modern digital business can only move at the speed of its applications, and will only be able to accelerate further when it can treat infrastructure as a commodity, putting applications firmly in the digital driving seat.
Avi Networks, avinetworks.com
March 2018 | 25
Building Better
Martin James, regional vice president at data management company DataStax, discusses what we should consider when building the next generation of applications, in order to get ahead in what he calls a ‘right-now’ economy.
Building the next generation of applications
Companies of all sizes are looking at how they can build better relationships with their customers, and most businesses are looking at digital to achieve this. Based on the examples of new companies like Airbnb and Uber, which have changed their respective markets, new technology can play a critical role in achieving success in an economy where every customer wants things ‘right here and right now.’ These companies have approached markets from a new viewpoint that sees business issues
as software problems that can be either automated or re-engineered based on data. However, only 28% of executives surveyed by CCA Global about customer experience reported that they were well equipped to meet future digital capabilities and expectations. The skills to manage data, to understand complex customer requirements and to know how best to use software are difficult to acquire, and are heavily in demand. So, how can companies survive and thrive in this ‘right-now’ economy? What does it mean to
use technology to help deliver great customer experiences? And how can you think about the future of software in your organisation?
What does the future hold for application design?
For many CIOs and IT professionals, looking at how the likes of Uber or Spotify do business can seem like a completely alien approach. After all, most companies will have existing IT systems that produce large amounts of data, but nothing at the scale that modern internet-based businesses have. For traditional
business managers and executives, the world of code can be difficult. Yet these companies all started with a clean sheet of paper or a whiteboard, and a goal based on disrupting an established market. However successful their approaches have been so far, they have used new technology and customer data to design better services than were previously available on the market. They were willing to iterate, build faster feedback systems and rework how they served customers, so that they could see what worked and what did not. Over time, they have come closer to providing customers with what they want as fast as possible, serving them in the moment.
For traditional businesses, this approach to data is something they can adopt as well. By building on existing data, you can start to get closer to your customers and serve them faster. By understanding software engineering and customer requirements together, you can also become more agile and responsive to customers.
The biggest challenge here is scale. Taking data created by services can provide insight into what your customers are doing, how they prefer to be served, and what actions they may take next. However, retaining all this data and making it usable – especially in real time – means looking at how it gets handled and analysed too. If you choose to store data on all the actions that users take over time, you will end up with huge amounts of information very quickly. Keeping up with this growth in data is one issue, while being able to deploy analytics around this data is another. Not only will you have to be able to analyse that data and get something useful from it, but you should also do this while a customer is carrying out an action. Handling these issues will
be necessary in order to respond in the moment – anything slower or after the fact runs the risk of either missing out on an opportunity, or aggravating the customer by not being smart enough.
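Keeping up with that growth usually means a data layer that scales horizontally. One widely used mechanism behind predictable "just add nodes" scaling is consistent hashing, sketched below. This is an illustration of the general technique, not any specific product's implementation.

```python
# Minimal consistent-hashing sketch: each node owns a segment of a hash
# ring, and adding a node moves only the keys in the new node's segment.
# That locality is what makes "scale by adding nodes" predictable.
import hashlib
from bisect import bisect

def _h(value):
    # Stable hash, independent of Python's per-process hash randomisation.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self.ring = sorted((_h(n), n) for n in nodes)

    def node_for(self, key):
        points = [p for p, _ in self.ring]
        idx = bisect(points, _h(key)) % len(self.ring)
        return self.ring[idx][1]

keys = ["user:%d" % i for i in range(100)]
small = HashRing(["node-a", "node-b", "node-c"])
grown = HashRing(["node-a", "node-b", "node-c", "node-d"])  # scale out
moved = [k for k in keys if small.node_for(k) != grown.node_for(k)]
print(f"{len(moved)} of {len(keys)} keys moved to the new node")
```

In a production data layer each node would typically hold many virtual points on the ring to even out the distribution, but the principle – adding a node reassigns only a fraction of the keys, with no wholesale migration – is the same.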
How can you respond to new ways of working at scale?
Building these new services will often depend on use of cloud – whether this is based on public cloud services like Azure or AWS, run in private data centres, or in a hybrid fashion that combines the two. Whichever approach you take, the application itself will have to run consistently using cloud as a key part of its infrastructure.
That is not to say that you should simply make use of the tools that AWS and the like provide. In fact, taking your own approach can be a faster route to creating competitive advantage over time. Rather than relying on specific cloud functionality – which can change or develop based on what the public cloud provider wants to offer, rather than your specific needs – it can help to retain some data autonomy and control.
Similarly, it is important to look at how you can scale out your new applications over time. Scaling up should not involve complex migrations or expensive hardware – instead, scaling should be predictable. For example, rather than having to implement new services or shard databases to cope with growth, data management should scale based on simply adding more nodes.
For companies with large traditional IT implementations, this can be more difficult, particularly when implementing new web-based services that customers can access at any time. However, creating a data layer between the existing IT and the service that the customer sees can help here. The data layer can handle the vast majority of requests for service – particularly those that involve simply reading existing data, rather than writing new data.
The new rules for new applications
In order to build new applications that can keep up with consumer demand, there are five key pillars to consider:
“The skills to manage data, to understand complex customer requirements and to know how best to use software are difficult to acquire.”
• Is the data in my application relevant? Can the app use this data in context, so that the customer gets what they need?
• Is the app available whatever happens? Can it survive an issue in my data centre or in the cloud provider without any impact on the customer?
• Is the app responsive? Does it provide the right service in the moment when a customer is making a decision, rather than after the fact?
• Is the app accessible? Can it be used by someone over any channel they want to use?
• Is the service engaging? Does it use data to provide a better experience and interaction than your competitors?
These five considerations all rely on having the right data behind the service, available and useful for when a customer wants to interact. By building on these elements and understanding how software assets and data can be used together, you can help the business engineer itself to meet the needs of customers in this ‘right-now economy.’ Ultimately, this understanding of the future for software should provide a better experience for your customers.
DataStax, datastax.com
It’s Complicated
Tracking software contracts – it’s a complex thing. Vincent Smyth, senior vice president EMEA at software management specialist Flexera, looks at how visibility into licensing can bring you more bang for your buck.
Complicated. It’s the best word to describe today’s world of buying software and cloud services. Cloud-service use continues to expand (five times faster than the rest of IT). What’s on virtual machines (VMs), which can shift quickly, requires constant license tracking. Then there are all the variables – multiple publishers, complex terms and conditions, tracking monthly subscriptions and more. And finally, the practice of ‘shadow IT,’ where today’s easy-to-buy software comes into the company without IT involvement.
Balancing act for IT procurement teams
IT purchasing teams face a big challenge – how to manage a moving target where users, licenses, technology and business needs constantly change. With frequent shifts, it’s tough keeping costs in check and meeting the current and future infrastructure requirements of the enterprise. IT teams must navigate complexity, including hundreds of negotiations for individual software contracts across multiple software vendors. Virtualisation and cloud, perhaps surprisingly, complicate matters because VMs can move from one physical host to another, and the software running on each VM must be adequately licensed. Licenses also have to be monitored more often as they move from perpetual to subscription licensing. It all adds up to needing visibility into tons of information.
Visibility – Getting more for your money
To make the most of IT spend, IT procurement teams require a deep look at what the organisation is using and the intimate details of what’s involved in that use. To deal effectively with new license agreements, enterprise agreement true-ups and contract renewals, IT needs clear visibility into the current position of the enterprise with each of its key software vendors. IT needs deep, thorough licensing knowledge.
A deep dive – What to know and hidden opportunities
The days of tracking software agreements using spreadsheets are gone. IT teams need to know all the details of software license agreements, including what’s in effect, what the purchase includes, terms and conditions, and the total cost of each agreement over a given period of time. Hidden opportunity? By having access to purchase order (PO) data and reading the stock keeping unit
(SKU) of each software product line item in each PO, Software License Optimisation tools can determine exactly what software products have been purchased and under what type of purchase agreement.
License details
To make the most of software spend, IT purchasers must know the number of licenses purchased and more. Important details include the types of license held now, what’s needed in the future, the exact entitlements, usage levels, which users and devices named licenses are assigned to, and business metrics for software that charges based on revenue or employees. Hidden opportunity? Information can reveal users who have more than one license allocated for the same product. It can also uncover who has installations of the same software on multiple machines, such as desktops and laptops, that might be covered under a single license.
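The duplicate-allocation check described above is simple to sketch. The record layout and field names here are illustrative assumptions, not Flexera's schema:

```python
# Sketch: spotting users holding more than one license allocation for the
# same product (record layout is illustrative, not any vendor's schema).
from collections import Counter

allocations = [
    {"user": "asmith", "product": "OfficeSuite", "device": "desktop-042"},
    {"user": "asmith", "product": "OfficeSuite", "device": "laptop-117"},
    {"user": "bjones", "product": "OfficeSuite", "device": "desktop-051"},
]

def duplicate_allocations(records):
    """Return (user, product) pairs that hold more than one allocation."""
    counts = Counter((r["user"], r["product"]) for r in records)
    return [pair for pair, n in counts.items() if n > 1]

print(duplicate_allocations(allocations))  # [('asmith', 'OfficeSuite')]
```

Each pair returned is a candidate for consolidation under a single license covering both devices.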
“As you explore automation, look for a solution that offers an enterprise-wide view of all software.”
What do you have?
It’s a foundational piece of information that plays a key role in managing your spend. Discovery should include what products are installed, but also what software packages and software bundles are out there, where costs and usage may be calculated differently. Any third-party systems that indirectly access data from software products such as SAP or Oracle may also require licenses. Hidden opportunity? Knowing the difference between licenses for products and software bundles may save you money. An individual license may be in place for a user already covered by a bundled license.
Physical and virtual infrastructure exploration
Especially for data centre and cloud environments, it’s important to know the details of the licenses for physical and virtual servers. IT teams can plan better knowing where software is running, what physical processing resources a server consumes, and which VMs are running on which physical server hosts. Hidden opportunity? Since many publishers are moving to capacity-based models that calculate license fees based on physical host processor capacity, understanding your capacity needs can help you manage costs.
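As a toy illustration of why capacity-based models make host capacity, rather than VM count, the cost driver (the per-core rate and host details are invented for the example):

```python
# Illustrative calculation: capacity-based licensing charges per physical
# host core, regardless of how many VMs run on the host. Rates and host
# details are invented for the example.
hosts = [
    {"name": "host-1", "cores": 32, "vms": 10},
    {"name": "host-2", "cores": 16, "vms": 3},
]
PRICE_PER_CORE = 120  # hypothetical annual fee per licensed core

def annual_capacity_fee(hosts):
    """Fee is driven by host processor capacity, not VM count."""
    return sum(h["cores"] for h in hosts) * PRICE_PER_CORE

print(annual_capacity_fee(hosts))  # 48 cores x 120 = 5760
```

Note that consolidating the 13 VMs onto fewer or smaller hosts would cut the fee, while adding VMs to the existing hosts would not raise it – which is exactly why understanding capacity, not just installs, matters.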
Explore software usage
The heart of license optimisation is who is using what. Your control over spend increases when you know the number of licenses being used, whether you are paying for maintenance on a product no longer in use, what product features
are actually used and whether users directly or indirectly access underlying data. Usage trends for software and cloud services also play an important role in future planning. Hidden opportunity? Indirect usage data helps an enterprise identify how users access products and data to uncover if they have multiple licenses when they only need one.
The good news? Automation simplifies the deep dive
Because of today’s complex and constantly changing software environment, it’s not practical to keep up with hundreds of licensing details through manual tracking. The good news is that Software License Optimisation solutions have kept pace and can provide the answers the IT team needs. As you explore automation, look for a solution that offers an enterprise-wide view of all software, both on-premises and cloud, and complete visibility into spend, including cloud service costs. Reporting features should offer the ability to slice and dice information in various ways – including actionable reports, dashboards and tracking key indicators for trend analysis. Other important functionality includes reconciliation of installed software with purchase orders and license entitlements, notifications about renewals or expirations, detection and notification of publisher-issued changes to license agreements, and an audit trail to maintain compliance. With visibility into deep information and important metrics about software licenses, IT teams will find the silver bullet to optimise software spend. And get more bang for your buck.
Flexera, +44 370 871 1111, flexera.com
Accurate Analytics
Zahl Limbuwala, CEO at Romonet, provider of predictive analytics software and services, discusses why data centres need smart analytics and accurate data to increase energy efficiency.
With corporate data surging – research estimates it’s doubling every 14 months and will reach 10.5ZB by 2020 – consumers are constantly connected to their smart devices, and businesses have become increasingly reliant on their data centres to deliver high quality services and continuously improve operational performance. As a result, data centres now have a significant impact on business strategies and goals, and have become a high priority for the C-suite.
It’s no longer sufficient or acceptable to provide estimated financial justification, or to be unable to prove the payback and ROI of data centre investments and improvement programmes. Evidence in the form of hard, high quality data is expected by senior management to unlock investment budgets. Consequently, data centre managers need to be as confident in their data as possible to ensure they can meet business expectations. Irrespective of whether the facility is owner-operated, cloud or colocation, everyone is required to prove its financial predictability, operational stability and performance.
Besides the financial implications, data centres also have a massive impact on our environment. This invites close scrutiny from various government and regulatory bodies, who want to make sure facilities are environmentally sustainable and don’t undermine their efforts to lessen any negative impact on the environment. The data suggests more work is needed in the industry. Data centres consume massive amounts of energy – in January 2016, analysts estimated that data centres were using about 416.2 terawatt hours per year, with this amount expected to triple in the next decade. To put things into perspective, the entire UK, with a population of over 65m people and over 5.7m businesses, consumes about 300 terawatt hours in an entire year.
Predictive analytics for increased performance
And yet, many data centre facilities are still operationally and financially managed using poor quality tools and poor quality data, producing low quality results that lead to poorly informed decisions. However, as the IoT evolves and the number of sensors and data collection points skyrockets, an increasing number of organisations are adopting smart analytics solutions to improve the forecasting, construction, management and analysis of these facilities. Predictive analytics, real-time reporting and machine learning capabilities have proven their value and become crucial tools in the decision-making process for many organisations in recent years.
But as analytics solutions for data centres become increasingly popular in this space and support more companies in achieving data-led strategic decisions, facilities managers are now confronting a new challenge – the quality and accuracy of the data they collect for analysis. Many data centre facilities using DCIM tools rely on a misconception that because a sensor or meter is highly accurate, the data streamed and collected from it can simply be used in its raw state. In reality, data from sensors and collection networks gathering equipment and environmental performance information should never be used in raw format. In fact, in our broad experience, on average only 60-65% of raw data is even suitable for cleaning, validation, normalisation and labelling prior to being used for any sort of analysis. And this situation is set to become even more complex as data centres add more and more sensors and metering points, and the data centres themselves become much more dynamic. It’s already impossible for humans alone to
manually verify and clean the vast amount of data constantly streamed from these sensors. Software and high bandwidth data processing pipelines with data cleaning tools and techniques are needed.
Romonet’s cloud-based platform presents reports and dashboards to every role in the company that needs insight into this critical asset.
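The cleaning and validation step described above can be illustrated with a minimal sketch. The thresholds and field names here are invented for the example; real pipelines apply far richer range, drift and cross-sensor checks.

```python
# Sketch of an automated cleaning/validation step for raw sensor data:
# unreadable or implausible readings are dropped, and the rest are
# normalised before analysis. Thresholds and field names are invented.

def clean_readings(readings, lo=0.0, hi=60.0):
    """Keep only plausible temperature readings, normalised to one decimal."""
    valid = []
    for r in readings:
        try:
            value = float(r["celsius"])
        except (KeyError, TypeError, ValueError):
            continue  # unreadable record
        if lo <= value <= hi:
            valid.append({"sensor": r.get("sensor"), "celsius": round(value, 1)})
    return valid

raw = [
    {"sensor": "rack-01", "celsius": "21.37"},
    {"sensor": "rack-02", "celsius": "-999"},   # sensor fault value
    {"sensor": "rack-03"},                      # missing field
    {"sensor": "rack-04", "celsius": "22.90"},
]
cleaned = clean_readings(raw)
print(f"{len(cleaned)} of {len(raw)} readings usable")  # 2 of 4
```

Even this toy example shows how quickly raw feeds shed records once basic plausibility checks are applied, which is why validation has to run automatically and continuously rather than by hand.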
“Predictive analytics, real-time reporting and machine learning capabilities have become crucial tools in the decision-making process for many organisations.”
Validated data for smart decisions
If the input data is not accurate, then the analytics solutions, no matter how smart and sophisticated they might be, can’t deliver authentic, reliable results that truly reflect the facilities’ capabilities, the correct areas that need improving and the best solutions for increasing performance. Neither can they accurately predict what will happen in the near future in order to reduce operational risk.
As a result, data centre managers have to look for analytics solutions that have the added capability for cleaning, validating, normalising and labelling the raw data before processing and analysing it. By continually analysing metered data against predictive models and validating performance against calibrated performance baselines, data centre managers can be automatically pointed to issues that cause energy inefficiencies, increase operating expenses and risk. Also, they can reach vital CSR targets that are
becoming increasingly important for the industry, more easily. Accurate, validated data is vital not only for data centre managers, who must accurately predict performance under fluctuating workloads and climate conditions, but also for business leaders charged with crucial corporate and financial decision-making responsibilities. Predictive analytics can also provide crucial insights into the potential performance of a site, and whether a specific build will meet its business objectives, even before a data centre is built or upgraded. In this way senior executives can make better informed decisions, justify and validate investments, and track ROI. The bottom line here is that business leaders need to analyse ROI and the performance of investments required by these facilities, and data centre managers must stay on top of availability, capacity planning, energy usage and operational costs. But neither group can achieve their targets unless they adopt smart analytics solutions that not only automatically collate and track data, but also ensure that the input metrics are accurate and reliable for both short term decisions and long term strategies.
Romonet, +44 20 3906 7457, romonet.com
Digital Dependence
Software underpins our lives more than we know, every day. We rely on it to power our phones, manage our schedules, track our fitness, and even – in some cases – keep our hearts beating. Vishal Rai, CEO and co-founder of software support company Acellere, discusses why perfecting software development has such an impact.
Our reliance on technology is only growing – surely we can’t be far from integrating it into our bodies. Given this trajectory, it’s important that things work properly and efficiently. Functionality, at the end of the day, comes down mainly to software – the programming of whatever application you’re using. But software isn’t infallible, and when things go wrong, the effects can be catastrophic. If so much as one line of code is incorrect, the entire system can be thrown into chaos, leaving developers scrambling to diagnose the problem, fix it, and make sure it doesn’t happen again. This is easier said than done when there are millions of lines of code within a programme or application.
Crisis time
Recently, the drama of Spectre/Meltdown dominated headlines, exposing security flaws in every chip manufactured within the last 20 years. Just this month, we’ve also learned of what’s been dubbed ‘SpectrePrime’ and ‘MeltdownPrime’ – a more complex exploitation of these flaws. Because the issue is at chip level, there’s no real short-term solution, though companies like Microsoft and Intel are trying to patch their way out. The only real solution is a total remanufacturing of future CPUs. However, in the meantime, there are solutions to be had in making sure software is strong and functioning as it should. British Airways knows the pain that can be caused when systems aren’t running as they should. The
airline’s system crashed five times last year alone – the most serious crash of which was blamed on a power surge. Whether or not power supply was to blame, ultimately what drew out BA’s struggle was its inability to get its systems up and running again. By not having the right processes in place for developers and IT pros to investigate the issue within its software, the saga was dragged out and ended up costing £100 million in compensation packages.
Checking your work
Cases like these serve to illustrate the importance of having proper checks in place when developing software. In a time where so much depends on programming, it’s
no longer enough to write good code – you also have to maintain it in order to protect against hackers, fraud and other glitches that might occur. Any weak spots within a programme’s code should be seen as liabilities – something that can and will be exploited if not fixed. So the question then becomes: How do we fix this? How can we make it easier for developers to spot weak points within software quickly, so they can make the necessary changes in a timely manner?
Automating your intel
One way to streamline this process is to implement artificial intelligence into your systems. AI technology, like that used in our Gamma platform, is able to regularly test systems and identify problems, so that developers can more easily assess and fix bugs as they occur.
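Tools of this kind typically build on automated code analysis. As a generic, much-simplified illustration (not how Gamma itself works), Python's standard `ast` module can flag functions whose branching is heavy enough to deserve a reviewer's attention:

```python
# Generic illustration of automated code checking (not the Gamma platform):
# parse source code and flag functions with heavy branching, a rough proxy
# for complexity and potential weak spots.
import ast

SOURCE = '''
def simple(x):
    return x + 1

def tangled(x):
    if x > 0:
        if x > 10:
            return "big"
        return "small"
    for i in range(x):
        if i % 2:
            x += i
    return x
'''

def flag_complex_functions(source, max_branches=3):
    """Count branch nodes per function; flag those over the threshold."""
    tree = ast.parse(source)
    flagged = []
    for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
        branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
                       for n in ast.walk(fn))
        if branches > max_branches:
            flagged.append(fn.name)
    return flagged

print(flag_complex_functions(SOURCE))  # ['tangled']
```

Real analysis platforms go far beyond branch counting – data-flow analysis, learned defect patterns and so on – but the principle of surfacing weak points automatically, so developers can act quickly, is the same.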
While AI is making software development and management easier, it’s worth noting that, even as it advances, there are still limits to its use. The complexity involved in building software is such that, if you were to ask Alexa to help you build an application to manage your investments and budget, automated processes would work off a set of pre-programmed or learned assumptions. Needless to say, this would be unlikely to fall perfectly in line with what you were looking for. AI definitely has a place in software development to help streamline businesses, applications and, ultimately, lives. But its role is much more nuanced than simply creating – it is best put to work as a safety measure, to check and correct what is broken.
“If so much as one line of code is incorrect, the entire system can be thrown into chaos.”
Using technology like AI will make software developers better at their jobs, and enable them to build better and more advanced code, software, and ultimately programmes and devices.
Thinking of the future
So what can we expect in years to come? We’re already on a path to omnipresent tech – that is to say, technology that manages almost all facets of our lives. I don’t think it’s unreasonable to think that within a few years’ time we’ll also have tech implanted within our bodies. To some extent this is already happening with pacemakers and fitness trackers. As this tech advances, it becomes even more crucial that software runs smoothly and developers are able to make fixes quickly and easily. Agility and responsiveness will be the name of the game – our lives depend on it. After all, we would not want the software in our body to crash as often as our smartphones crash today.
Acellere, +49 69 272 430 80, Acellere.com
data centre location
Best of British
Will Heigham, head of office agency at Bidwells, explains why choosing the right place for your data centre should remain a priority and reveals the top UK data centre hotspots.
There are several factors to consider before setting up a data centre, with location and proximity being two of the most important. Businesses will need easy access to their servers for maintenance or upgrades, while proximity to staff and clients is also crucial. IT staff may need to visit to replace equipment, make adjustments, or expand operations. With colocation services, businesses retain ownership of all their hardware and software, so it is important to have good transport links to and from the data centre.
What makes a good data centre location?
In addition to a convenient location, data centres require plenty of space in which to operate, especially if clients decide to expand their business. Servers take up space, and if your data centre does not have sufficient space, clients may move their business elsewhere. Another consideration for data centre providers to take into account is energy consumption. According to NRDC’s research, data centre energy consumption is expected to reach 140 billion kWh a year by 2020 – the equivalent of 50 power plants.
That said, many data centres are looking into green energy and renewable resources like solar, wind and tidal power as alternatives to sustain operations. In the long run, alternative energy will not only help the environment but also be more cost effective. More and more data centres are therefore looking at locations where they can make the best use of these alternative energy sources.
Data centre UK hotspots The UK is a thriving area for data centres thanks, in part, to the fact that it is a major digital and
London leads the way as the most popular data centre location in the UK
technology hub. The demand for data centres in the UK is high, with some of the most popular areas including:
• London – The capital, which forms part of the Golden Triangle, leads the pack as the most popular data centre location in the UK. There are 71 data centres in the city – the highest number in the UK. There are many reasons for providers setting up data centres in London, with its proximity to digital businesses and excellent transport links being two of the main drivers of demand. What’s more, some of the best universities and colleges are situated in London, giving data centres easy access to highly skilled graduates to ensure the highest level of performance and innovation.
• Manchester – This city has become something of a technology hub and data centre providers have definitely noted its potential. Manchester is becoming a viable option for data centre operators to set up their businesses, boosted by the government’s Tech North start-up
initiative in the city. Again, this location also means access to individuals from some of the best universities and IT colleges, while the city’s internet speeds are comparable to the best in the country, which is essential for providing a strong hosting service.
• Berkshire – This county benefits from a thriving technology community as well as its proximity to London and business parks like Thames Valley Park and Arlington Business Park. Major tech companies have set up in the Slough and Reading areas, providing easy access to potential clients. What’s more, Berkshire is more affordable than central London locations.
Other hotspots worth mentioning include Birmingham, Newcastle and North Wales, although data centres can be found all over the country. If you’re looking globally, Cloudscene, a directory of data centres around the world, has released the top colocation ecosystems, with London taking the top spot and European cities Amsterdam and Frankfurt taking second and third place respectively. The top ten are:
• London
• Amsterdam
• Frankfurt
• Washington DC
• Paris
• San Francisco Bay Area
• Los Angeles
• Sydney
• Dallas
• Chicago
While the United States is by far the largest data centre market, the fact that London leads the way and that Europe is well represented is encouraging, paving the way for locations like Manchester and Berkshire to follow in its footsteps.
data centre location
As a UK technology hub, Manchester is becoming a viable option for DC operators
City or rural – which is best?
The debate about whether the city or country is better for data centre location continues, with the strongest argument for rural areas being cost. That said, there are other factors that should also be considered that rural areas may not be able to provide. These include fast and efficient internet connectivity, accessibility, convenience, and security. What these rural locations do benefit from is space, which is
Berkshire benefits from a thriving technology community as well as its proximity to London
often cheaper and more available in the countryside. Most data exchanges are between applications and not between users, which requires powerful connectivity and energy sources that out-of-town centres may not be able to provide. And with many companies moving towards the use of technological advancements like the Internet of Things and artificial intelligence, there is even more of a chance for the use of applications to grow. While predicting the future of technology is no easy feat, it is safe to say that the use of applications will only expand; therefore, putting your data centre closer to areas of advancement makes more sense. And since the future of data is cloud-based, city locations will give you faster access to the cloud. While staff access can be permitted remotely, the real concern is an emergency that requires IT staff to be on-site within minutes. If your data centre is in a remote location, gaining fast access in times of need could be a struggle. If you need to order hardware or
equipment, deliveries may also take longer to reach rural locations than city-based counterparts. Above all, the location of your data centre depends most on the applications you need to run. Fast connectivity and response times mean a city location will be ideal.
What about Brexit?
The UK remains one of the largest data centre markets, with the 2017 Colocation Report stating that the country is becoming the go-to location for data centres in Europe. Investment interest also continues to grow. That said, Brexit could strongly influence the decisions of investors to look outside of the UK towards places like Dublin, Amsterdam and Frankfurt. Until then, the UK continues to be a highly valued location for data centres. With the Finance and Investment Forum monitoring new opportunities for emerging data centres and ecosystems in the country, it looks set to thrive. Bidwells, bidwells.co.uk
The Smart Building & Home Automation Trade Show
The EI Live! show is the only dedicated custom install and home automation exhibition in the UK, catering for every smart building professional’s requirements and purposely designed to give you a truly rewarding and informative show.
9th–10th May 2018
Sandown Park, Surrey
Learn more at www.essentialinstalllive.com
modular data centres
Modular Match Gordon Hutchison, vice president of international operations at Geist, explains why modular design and prefabrication within the data centre don’t necessarily have to be separate entities.
Modularity began attracting attention in data centre circles in the mid-2000s, and for good reason. From entire containerised data centres to individual power and cooling components, modularity gave designers new building blocks – ones they could use to achieve efficiencies and economies over legacy designs. Interest in prefabricated equipment began around the same time and for the same reasons. Prefabrication enables a faster yet still standard scaling path by essentially swapping or ‘bolting on’ equipment as needed to
support build-outs. More recently, prefabrication has gained traction for edge and power applications as cost and speed to deployment pressure increases. While modular design and prefabricated equipment both have particular use cases, they are not mutually exclusive. In fact, each reinforces the benefits of the other. For example, companies have become willing to trade specific, custom configurations for the speed and predictability associated with prefabrication. Using modular design for components within prefabricated containers, or equipment, adds flexibility to otherwise preset capability.
Containers and equipment can still be prefabricated in quantity using specifically designed components. However, when those components become modular, post-deployment upgrades can be accomplished with a ‘hot upgrade’, or a simple pull and swap of small, accessible components, versus the costly replacement of an entire device, which may require interruption of service.
Modularity in power infrastructure
As modular design has evolved, ever smaller components have entered the mix, enabling easier upgrades,
expansions, and redundancy planning at lower levels of data centre infrastructure – while keeping costs lower in initial build-outs. Modular uninterruptible power supplies (UPS) initially targeted small-to-medium applications topping out around 100kVA. These UPS products provided a fixed amount of capacity to start (e.g. 20kVA) but included built-in headroom to upgrade to 100kVA as the data centre or prefabricated container required more capacity. Another example of power modularity within lower-level infrastructure is overhead busways with tap-off boxes. These products bring a track lighting-type simplicity to rack power feeds. As long as the busway is appropriately sized, users can support a bevy of power configurations in a given row of equipment cabinets; enabling, say, a single-phase 20-amp feed to be deployed next to a three-phase feed. This greatly reduces time and labour in environments facing high server churn. With an overhead busway design, the old and new tap-off boxes can be placed by data centre personnel in minutes, versus the old labour-intensive method of adding or changing circuit breakers in a panelboard.
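The scaling path described above, a fixed frame populated incrementally with plug-in power modules, can be sketched in a few lines. The 20kVA module and 100kVA frame ratings below are the example figures from the text; the function itself is purely illustrative, not any vendor's sizing tool.

```python
import math

# Illustrative sketch of modular UPS sizing: a frame with fixed headroom
# (the article's example: 100kVA) populated with plug-in power modules
# (example: 20kVA each). Purely hypothetical, not a vendor sizing tool.
def modules_required(load_kva: float, module_kva: float = 20.0,
                     frame_kva: float = 100.0) -> int:
    """Smallest number of modules that covers the load within frame headroom."""
    n = math.ceil(load_kva / module_kva)
    if n * module_kva > frame_kva:
        raise ValueError("load exceeds frame headroom; a second frame is needed")
    return n

# Day one: a 20kVA load needs one module. Growth to 90kVA needs five,
# added without replacing the frame.
print(modules_required(20))   # 1
print(modules_required(90))   # 5
```

The point of the modular approach is visible in the second call: capacity grows by slotting in modules, with the rip-and-replace step deferred until the frame's headroom is exhausted.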
An example of a hot-swappable upgradable modular unit.
“While modular design and prefabricated equipment both have particular use cases, they are not mutually exclusive.”
Overhead busways with tap-off boxes are an example of power modularity within lower-level infrastructure.
At its heart, modularity is about component parts – understanding what has normally been hard-coded – then engineering those pieces to be flexible and interchangeable. This philosophy is a tenet of the Open Compute Project (OCP) – an industry consortium that advocates innovative, open-source designs for data centre hardware and complete facilities. In their Open Rack Charter, the group wrote: “The intent is to enable what we term component disaggregation [where] compute components can be swapped and expire according to their own life cycles without replacing the entire server for an upgrade or repair.”
Modular rack PDUs embody open compute ideals
A rack PDU, embodying this disaggregation, is the next logical focus for modular design. Rack PDUs have historically been segmented by technology into the lexicon of basic, locally metered, input metered, outlet switched, outlet monitored or outlet switched and monitored. As users progress from one type of PDU to the next, functionality increases, and so does cost. But, to date, the upgrade path has primarily been to rip and replace — which is the hardware model the OCP is trying to counter. An alternative to this course is a PDU with hot-swappable intelligence – modules that enable feature upgrades while retaining the same PDU chassis. This path recognises that basic PDU users today may have a different outlook in two years;
the business need may be more complex and IT may require more visibility. Through hot-swappable intelligence, users achieve a degree of future-proofing; the PDU becomes an investment that can be repurposed as power management priorities change, even within specifically designed prefabricated containers. OCP’s ideal of ‘disaggregation’ is achievable through a new PDU model — one that includes both a ‘facility PDU’ and an ‘IT PDU.’ The facility PDU would include circuit breakers and intelligence; the IT PDU would include only outlets. As IT hardware is refreshed, the simple IT PDU can be exchanged and reconnected to the original facility PDU. Economically, the facility PDU, containing the most expensive component parts, would be purchased only once. And the IT PDU, freed of breakers and intelligence components, could be built with an optimised form factor, outlet-dense with a low profile. Prefabricated containers, offering instantly deployable all-in-one capability, will continue to gain market traction, particularly in proliferating edge deployments. Large, influential data centre operators will continue to break down the facility and its components into bare metal, open source components, valuing utility and flexibility over other considerations. The data centre’s modular evolution supports both trends, building in flexibility and efficiency, while driving down overall costs. Geist, geistglobal.com
data centre metrics
Made to Measure David Trossell, CEO and CTO of Bridgeworks, examines the various metrics that should be considered when measuring data centre performance and efficiency.
Traditional data centre metrics include the power usage effectiveness (PUE), uptime, downtime, network performance to and from the data centre, heat gain/loss, OPEX, solutions provided, back-up and restore times, SLAs, etc. Each metric measures a different aspect of data centre performance. Industry group The Green Grid developed Power Usage Effectiveness (PUE) as a metric in 2007, and now it has created another one called Performance Indicator (PI). The difference between PUE, which is mainly concerned with power usage, and PI is that the latter is more concerned with data centre cooling. Both metrics are designed to measure efficiency, performance, and resilience, however. So, if you raise the temperature on the data centre floor, you can gain energy efficiency, but if the temperature
is too high or too low, the result could be calamitous – leading to hardware failure. Redundant servers, data and network latency can also impact efficiency in other ways. Nonetheless, each piece of equipment within a data centre can have a measurable impact on whether a data centre is efficient, with good performance and resilience.
Metric abundancy
Other data centre metrics that need to be considered when measuring performance and efficiency include the following:
• Compute Units Per Second (CUPS)
• Water Usage Effectiveness (WUE)
• Data Centre Infrastructure Efficiency (DCiE)
• Carbon Usage Effectiveness (CUE)
• CPU utilisation
• I/O utilisation
• Server utilisation – which traditionally tends to be very low
• Total Cost of Ownership (TCO)
• Capital Expenditure (CAPEX)
• Operational Expenditure (OPEX)
• Transaction throughput and response times
• Return on Investment (ROI)
• Recovery Time Objectives (RTO)
• Recovery Point Objectives (RPO)
It’s worth noting at this juncture that a data centre can still be inefficient even if it has a great PUE and WUE rating. This emphasises the need to look holistically at how data centre efficiency and performance are measured. Measuring performance is no easy feat. For example, servers that are fully loaded for the sake of increasing utilisation, for no financially justifiable reason, would increase operating costs. So, this would arguably make the data centre inefficient if performance is defined as an equation of how much work is completed against all the resources used to complete the task(s).
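As a minimal illustration of how two of these metrics relate: PUE is total facility power divided by IT equipment power, and DCiE is simply its reciprocal. The kW readings below are made-up example values, not measurements from any real facility.

```python
# Illustrative calculation of PUE and DCiE (DCiE = 1/PUE).
# The kW figures are hypothetical example readings, not real measurements.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Data Centre Infrastructure Efficiency: IT power as a share of total (1/PUE)."""
    return it_equipment_kw / total_facility_kw

total_kw, it_kw = 1500.0, 1000.0
print(f"PUE  = {pue(total_kw, it_kw):.2f}")    # 1.50
print(f"DCiE = {dcie(total_kw, it_kw):.0%}")   # 67%
```

A PUE of 1.5 means that for every watt delivered to IT equipment, another half watt goes on cooling, power distribution and other overheads; as the article notes, a good PUE alone does not make a data centre efficient.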
Case Study: CVS Healthcare
By accelerating data, with the help of machine learning, it becomes possible to increase the efficiency and performance of a data centre. CVS Healthcare is but one organisation that has seen the benefits of taking such an innovative approach. The company’s issues were as follows:
Holy Grail metrics
Eric Smalley, writing for Wired magazine about ‘IT Minds Quest for Holy Grail of Data Centre Metrics’, explains the challenge: “Settling on a metric is only a step toward solving the even bigger challenge of calculating a data centre’s value to the business it serves. In the internet age, as more and more of the world’s data moves online, the importance of this task will only grow. But some question whether developing a data centre business-value metric is even possible. As you move up the chain from power systems to servers, to applications to balance sheets, the variables increase and the values are harder to define.” The Holy Grail is, therefore, to find a standard way of measuring useful IT work per unit of energy consumed. Yet the overall performance of a data centre requires much detailed analysis to gain a true picture of whether a data centre is highly efficient and performant. Nothing can be taken for granted and analysed in complete isolation. For example, data centres are dead without a network infrastructure and connectivity. So, there needs to be a view of how well they mitigate data and network latency. To achieve increased speed, WAN data acceleration solutions such as PORTrockIT are needed.
“Nothing can be taken for granted and analysed in complete isolation.”
• Back-up RPO and RTO
• 86ms latency over the network (>2,000 miles)
• 1% packet loss
• 430GB daily backup never completed across the WAN
• 50GB incremental taking 12 hours to complete
• Outside RTO SLA – an unacceptable commercial risk
• OC12 pipe (600Mb per second)
To address these challenges, CVS turned to a data acceleration solution, the installation of which took only 15 minutes. As a result, the original 50GB back-up time fell from 12 hours to 45 minutes – a 94% reduction in back-up time. This enabled the organisation to complete daily back-ups of its data, equating to 430GB, in less than four hours per day. So, in the face of a calamity, it could perform disaster recovery in less than five hours to recover everything completely. Any reduction in network and data latency can lead to improved customer experiences. However, with the possibility of large congested data transfers to and from the cloud, latency and packet loss can have a considerable negative effect on data throughput. Without machine intelligence solutions, the effects of latency and packet loss can inhibit data and back-up performance.
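The headline figures in the case study can be sanity-checked with simple arithmetic; the following is just a back-of-envelope reconstruction of the numbers quoted above, not vendor benchmark code.

```python
# Back-of-envelope reconstruction of the case-study figures quoted above.
def reduction(before_minutes: float, after_minutes: float) -> float:
    """Fractional reduction in back-up time."""
    return 1 - after_minutes / before_minutes

# 50GB incremental: 12 hours before acceleration, 45 minutes after.
r = reduction(before_minutes=12 * 60, after_minutes=45)
print(f"Back-up time reduction: {r:.0%}")   # 94%

# Implied effective throughput after acceleration (50GB in 45 minutes),
# in megabits per second; it sits well within the OC12 pipe (600Mb/s).
mbps = 50 * 8 * 1000 / (45 * 60)
print(f"Effective throughput: {mbps:.0f}Mb/s")   # 148Mb/s
```

The implied post-acceleration throughput of roughly 148Mb/s shows the gain came from mitigating latency and packet loss, not from exceeding the physical capacity of the OC12 link.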
WAN data acceleration
With WAN data acceleration, it becomes possible to locate data centres outside of their own circles of disruption without increasing data and network latency – and much further away than is traditionally achieved with WAN optimisation. Packet loss can also be reduced, and so it’s now possible to have highly efficient and performant data
centres, as well as disaster recovery sites, situated in different countries right across the globe without impinging on their performance. So, arguably, the proximity of data centres also needs to be considered whenever data centre efficiency and performance is calculated. Part of the analysis may also require data centres, and their clients, to calculate the risks associated with their location versus the cost savings, customer satisfaction and profitability gains that can come with business and service continuity. It’s therefore important to use a wide variety of metrics to define how well a data centre performs, including green credentials. Bridgeworks, 4bridgeworks.com
No Interruptions
According to Richard Clifford, head of innovation at data centre solutions provider Keysource, operators could be using their power infrastructure to generate revenue — without sacrificing disaster recovery processes.
The UK energy market has seen significant price increases over recent years – as much as 62.6% between 2006 and 2016, according to price comparison provider Selectra. The market’s volatility was highlighted in December 2017 when wholesale gas prices hit their highest levels in six years, due to supply disruption in Europe. As such, it’s becoming increasingly important for volume energy users to consider innovative ways to reduce costs and ensure they remain competitive.
Battery storage has been billed as the missing piece of the puzzle in addressing the world’s energy challenges. Companies like Tesla are investing in research to create cutting-edge batteries for homes and businesses that store energy from renewables. But energy storage is nothing new. Similar technology lies within Uninterruptible Power Supply (UPS) systems, which are already used by the vast majority of data centre owners. Using these systems in new ways could allow the sector to guard against the rapid price changes in the energy market.
UPS systems have been used in data centres for decades, but it’s only recently that operators have started to consider this infrastructure as a potential source of revenue generation, by taking advantage of National Grid’s Firm Frequency Response (FFR) incentive. On a basic level, UPS draws in energy while infrastructure is running and automatically switches to power the data centre – keeping systems live in the event of a failure. The stored energy is effectively a reserve that rarely gets used. However, some operators are now using this reserve to power their IT systems when energy prices are at their highest, switching back to mains supply when prices are lower. In doing so they’re able to take advantage of the best tariffs. The obvious question is whether doing this will affect disaster recovery. Using UPS in this way means more frequent recharging and should a power failure happen at this time, there could be a gap in uptime. For some data centre operators, this risk has been enough to completely remove any interest in using UPS for battery storage. There’s an easy fix, however. By using two systems – one for disaster recovery and a second to reduce costs – operators can avoid risk and assure customers and
stakeholders that there will always be back-up power available. The application of this is relatively straightforward. One UPS system sits below the infrastructure and draws in energy which can be routed back to power IT in the event of a failure – purely used for disaster recovery. Meanwhile, a second is connected further upstream, at the transformer. This second UPS simply stores energy directly from the grid as the IT infrastructure is powered from the mains. It can then act as the battery for use when tariffs are high, helping generate savings without the risk. Operators that have made the change to using UPS in this way have achieved savings in the region of 5-10% on their energy bills. And the model doesn’t just allow for savings. Data centre operators can generate revenue by selling stored energy back to the grid too thanks to incentives like the FFR – a framework that allows third parties to feed back to the grid. Ultimately the viability of this strategy depends on the data centre and the operator. Among the things to consider is an increase to operational cost. If UPS systems are being used more frequently it means that maintenance of these systems may have to ramp up. And there is obviously the capital
expenditure that comes with investing in a second UPS system and connecting this to the existing data centre infrastructure. Yet the case for doing so is compelling. Margins in the sector are tightening – some are nine times smaller than they were a decade ago. Meanwhile, energy is still among the largest overheads businesses face – anywhere from 25-60% of running costs, according to trade association Intellect. The good news is that data centre demand has never been higher, due to the increasingly business-critical nature of IT systems and growing demands for the rapid accessibility of data. But this means that energy costs will likely continue to be a pressure point which restricts some operators’ ability to grow. Using UPS to generate savings is not a panacea for the sector’s challenges, but it is an example of the sort of small changes data centre operators can make to ease some of the pressure. Keysource, +44 (0) 345 204 3333 keysource.co.uk
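As a rough sketch of the tariff-arbitrage model Clifford describes (charge the second UPS off-peak, discharge it at peak), the following toy calculation shows how a saving arises. All prices, volumes and the efficiency figure are invented assumptions for illustration, not Keysource data.

```python
# Hypothetical sketch of the tariff-arbitrage idea described above: charge
# the second UPS off-peak, discharge it at peak. Every figure here (prices,
# volumes, round-trip efficiency) is an invented assumption for illustration.
def daily_saving(peak_kwh: float, peak_price: float, offpeak_price: float,
                 shifted_fraction: float, roundtrip_efficiency: float = 0.9) -> float:
    """Energy cost saved per day by shifting some peak consumption off-peak."""
    shifted = peak_kwh * shifted_fraction
    cost_at_peak = shifted * peak_price
    # Charging losses: you buy slightly more off-peak energy than you discharge.
    cost_off_peak = (shifted / roundtrip_efficiency) * offpeak_price
    return cost_at_peak - cost_off_peak

saving = daily_saving(peak_kwh=2000, peak_price=0.18, offpeak_price=0.10,
                      shifted_fraction=0.25)
print(f"Estimated saving: £{saving:.2f}/day")   # £34.44/day
```

Note the role of round-trip efficiency: if charging losses are high enough, or the peak/off-peak spread narrow enough, the saving disappears, which is one reason the article stresses that viability depends on the individual data centre and operator.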
“Energy is still among the largest overheads businesses face – anywhere from 25-60% of running costs.”
Projects & Agreements
WinMagic and LogicDS team up to simplify and secure operating system lifecycle management
WinMagic and LogicDS are partnering to jointly offer WinMagic’s SecureDoc encryption and key management with the SWIMAGE OS migration and imaging solution. This jointly-deployed solution will significantly reduce the headaches and security concerns organisations have when deploying new devices into their network, performing OS migration, and handling maintenance issues. SWIMAGE enables OS migration while encrypted with SecureDoc, increasing security posture and providing a truly unique solution that no one else can offer. Managing the lifecycle of operating systems is a constant nuisance in large organisations. Existing processes are often very manual, lack coordination, and are heavily reliant on end-users. These challenges result in some of the biggest and most expensive pain points a company goes through, and can lead to significant data security issues, including loss of data, breaches, and even compliance failure. By teaming up, WinMagic and LogicDS will significantly reduce these concerns and complexities, allowing organisations to focus on what’s most important – growing their business. WinMagic, www.winmagic.com
CNet working in Global Education partnership with BroadGroup
CNet is now the official Global Education Partner of media technology company BroadGroup and its global Datacloud events series. Together, CNet and BroadGroup will work to unite industry leaders and influencers to raise awareness of the importance of competence and confidence within the digital infrastructure industry, and how it can play a key role in making a difference within the data centre sector. CNet delivers data centre and network infrastructure education across the globe. Its wide range of education programmes forms the Global Digital Infrastructure Education Framework, spanning from Level 3 up to the world’s only Master’s Degree in Data Centre Leadership and Management. BroadGroup has grown to be one of the most recognisable names throughout the industry, successfully building and hosting a number of events aimed at professionals within the data centre, IT infrastructure and cloud industries. As a result, it has built a reputation respected by sponsors, attendees and speakers. Datacloud events bring together leading professionals from across the industry, allowing for networking, learning, insightful presentations and participation in panel sessions. CNet will be present at all of the Datacloud events, including the popular Datacloud Europe and Datacloud Awards held in Monaco every June, where CNet will be speaking on day one (June 13). Other Datacloud events this year will be held throughout Europe, Asia and Africa. BroadGroup, www.broad-group.com
Quby relies on Databricks for collaborative approach to analysis of Internet of Things data
Databricks has announced that Quby, the Dutch company behind the smart energy product Toon, has deployed Databricks’ Unified Analytics Platform to support its analytics and collaboration strategies as it grows the number of energy providers that it works with across Europe. Quby’s Toon is a smart thermostat and energy monitoring device that can provide insight into how much energy is being consumed within each customer’s location. With multiple utility customers across Europe, Quby had rapidly growing volumes of data that had to be managed in separate instances. At the same time, Quby’s internal data engineering team wanted to reduce the amount of work required to prepare data for analysis, while the data science team wanted to conduct deeper analysis too. Quby selected the Unified Analytics Platform to make data preparation easier while maintaining security and management requirements. “We knew we wanted to move into the cloud rather than managing our internal instances. Rather than taking care of the infrastructure, we could concentrate on how to make that data available to the business,” explains Telmo Oliveira, systems engineer at Quby. Databricks’ Unified Analytics Platform helps data engineering, data science and business teams collaborate around big data. Using Apache Spark alongside collaboration workflows and management tools, the platform supports companies deploying new and innovative services. Databricks, www.databricks.com
RSPB spreads its wings thanks to Pure Technology Group
Pure Technology Group (PTG) has announced that it has been working with the RSPB over the last year to develop its plan to create a more flexible and mobile workforce. The result is a contract for endpoints (laptops and two-in-ones), peripherals, support and deployment over the next three years. The RSPB is focused on technical progression, with true value foremost. RSPB staff and volunteers can share their workspace via the new devices across the UK, providing the flexibility to transform from a traditional workspace and introduce a more collaborative and productive IT environment. Fundamentally, this will help its mission to save nature and maximise its budget. During the planning process, all requirements were carefully analysed for environmental impact, which was hugely important for the RSPB. All logistics, packaging, training and disposals are planned with the lowest carbon footprint possible. The RSPB is a progressive, forward-thinking organisation and it is an honour for PTG to work with the charity on such a ground-breaking project. Following an exacting tender process, including device evaluations, vendor trials, environmental impact assessments and a commercial bid, the RSPB chose PTG as the right partner to work with on the project.
Trilogy Technologies partners with ScienceLogic to accelerate opportunities in AI and IoT
Trilogy Technologies and ScienceLogic have announced a partnership to offer customers smart monitoring services that will futureproof them for advancing technologies. When it comes to emerging technologies like artificial intelligence (AI) and the Internet of Things (IoT), the number one question is: how can it be monitored? ScienceLogic’s single platform for predictive and proactive analytics, which monitors a diverse range of technologies, will give Trilogy’s customers the opportunity to tap into these emerging technologies to expand their businesses and ultimately increase revenue. Through real-time monitoring, Trilogy will use predictive analysis to ensure services are highly available and minimise the impact on customer experience. Similarly, if something has gone wrong, it will use proactive analysis to quickly identify the root cause, prevent repeat occurrences and minimise potential revenue loss for that company. This approach will ultimately improve productivity and increase savings for all concerned. Dave Link, CEO at ScienceLogic says, “It is great to be working with such an innovative and forward-thinking company like Trilogy. The possibilities for opportunities are endless. Uninterrupted service availability and assurance are important to customers when choosing a technology partner to manage their hybrid IT systems.” Trilogy Technologies, www.trilogytechnologies.com ScienceLogic, www.sciencelogic.com
L-R: Richard Chart, co-founder, ScienceLogic; Edel Creely, group MD, Trilogy Technologies; and Dave Link, CEO, ScienceLogic
Equus partners with HGST for high density all-flash storage
Equus Compute Solutions has partnered with HGST to offer its line of JBOF (Just a Bunch of Flash) all-flash storage systems. Gartner estimates that by 2020, 50% of data centres will use only solid state arrays for primary data. The HGST 2U24 Flash Storage Platforms are designed for Software-Defined Storage (SDS) infrastructures. These innovative JBOF systems balance performance with capacity, delivering high IOPS and low latency at up to 184TB capacity. The HGST JBOF systems are a cost-effective way to jumpstart the migration to an all-flash data centre. There are two 2U24 chassis configurations: the SATA-based chassis supports up to 24 CloudSpeed Gen 2 2.5in SSD modules for up to 46TB capacity, and the SAS3 chassis supports 24 Ultrastar 2.5in SSDs for up to 184TB. Both systems can start with as few as 12 SSDs and upgrade one additional SSD module at a time. Key features include up to six 12Gb/s SAS3 connections to the host system, supporting up to 4.7M IOPS with <1ms latency. “We are excited to offer these HGST JBOF all-flash systems that deliver maximum performance for our customers’ specific workloads,” says Steve Grady, Equus VP of customer solutions. “The 2U24 Flash storage platforms address the demanding storage needs of large enterprises and cloud service providers who require dense, shared flash storage.” Equus, www.equuscs.com
du and Epsilon launch seamless connectivity from the UAE to data centres across the globe
UAE telco ‘du’, from Emirates Integrated Telecommunications Company (EITC), has announced that it has extended its existing partnership with Epsilon, a privately owned global communications service provider, to offer end-to-end connectivity between the UAE and major data centres in the US, Europe and Asia. Combining the strengths of du and Epsilon will create a simple, effective and seamless ‘one stop shop’ for capacity and backhaul services, allowing cost-effective access to datamena – an EITC entity – and the UAE Internet Exchange (UAE-IX) from global data centres. Carriers, enterprises, content and cloud providers will be able to take advantage of the Infiny by Epsilon on-demand connectivity platform, which allows click-to-connect provisioning of Ethernet speeds from 100M up to 5G. “This commercial partnership seamlessly connects datamena in the UAE to all major data centres in the USA, Europe and Asia. Epsilon has over 500 carriers connected to its network who can now manage their connectivity requirements to datamena via Epsilon’s online provisioning portal. This will significantly reduce the current lead time to provision capacities and further enhance the customer experience in datamena,” says Ananda Bose, chief wholesale and corporate affairs officer, Emirates Integrated Telecommunications Company. Epsilon, www.epsilontel.com
Telehouse America and Sabey Data Centers announce strategic alliance
Telehouse America has announced its partner alliance with Sabey Data Centers, a Seattle-based company and one of the largest privately-owned multi-tenant data centre developers and operators in the world. The strategic alliance allows both companies to extend their data centre footprints and fortify their respective connectivity ecosystems. “Telehouse America and Sabey’s complementary offerings and business models provide a comprehensive data centre solution to varying customer requirements,” says Sandra de Novellis, head of partner alliances at Telehouse America. “Telehouse America offers primarily retail colocation space, serving customers in the New York and Los Angeles markets; globally, we also operate 47 data centres throughout EMEA and APAC. Sabey’s wholesale offerings extend our solutions portfolio, accommodating growing demand for multi-megawatt data centre space in markets such as Ashburn, Seattle and New York.” Telehouse America has invested over $10 million in capital expenditure and improvements at its New York Teleport data centre in the last 24 months, including enhancing and expanding power and cooling systems as well as upgrading security and disaster recovery capabilities, including access to Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform. “We are excited to partner with Telehouse America, extending to our customers access to a global footprint in key hubs such as Frankfurt, London and Hong Kong,” says Daniel Meltzer, vice president of sales and leasing at Sabey Data Centers. Telehouse America, www.telehouse.com Sabey Data Centers, www.sabey.com
Riverbed partners with Microsoft and Arrow to optimise Azure connectivity Arrow Electronics has announced an enterprise cloud networking and visibility solution bundle based on technologies from Microsoft and Riverbed Technology. The new solution is focused on simply and securely connecting a company’s infrastructure and branches to Microsoft Azure. Initially, the solution will be available in France, Germany, the Netherlands, Scandinavia and the UK. Arrow’s latest cloud innovation enables value-added resellers and cloud solution providers to offer their end customers a globally orchestrated and optimised connection to Microsoft Azure by leveraging Riverbed SteelConnect, a software-defined wide area networking (SD-WAN) solution; Riverbed SteelHead for hybrid WAN optimisation; and Riverbed SteelCentral Aternity for end-user experience monitoring. In light of the growing usage of cloud services for the development and deployment of new applications and the hosting of migrated applications, Arrow provides the channel community with this hybrid cloud architecture for modular, distributed applications from a single source. Microsoft Azure is one of the most widely used global cloud computing platforms for application hosting. Riverbed’s cloud networking solutions optimise the performance of applications across the wide area network and in the cloud, and provide the agility needed in today’s modern enterprise. Arrow, www.arrowecs.co.uk
Projects & Agreements
Volta Data Centre announces new partnership with IX Reach Volta Data Centre, a central London carrier-neutral data centre, has announced a new partnership with global network solutions provider IX Reach. As part of the deal, Volta will provide space within its data centre, enabling IX Reach to offer its full range of solutions to Volta’s enterprise and carrier tenants, including its direct Cloud Connect service to Amazon AWS, Google Cloud Platform and Microsoft Azure, all from Volta’s scalable facility. IX Reach partnered with Volta after being impressed by its highly secure facility, its ability to house scalable solutions for global businesses, its resilient power and UPS systems, and its central location within the capital. Stephen Wilcox, president of IX Reach says, “Volta’s data centre is in a great location for our central London customers who can’t afford for their network to suffer latency issues. We had started to notice a high number of enquiries coming to us from prospects asking if we offered services within Volta, so we knew this was a PoP we needed to look into as the demand was clearly there and we didn’t want to miss out. We are thrilled to have added this partnership to our global network of over 170 data centres.” Volta Data Centre, www.voltadatacentres.com
Packet expands presence to Interxion’s Marseille campus Interxion has announced that Packet Host has expanded its presence to Interxion’s Marseille data centre campus. Packet is a leading bare-metal cloud for developers. Its proprietary technology automates physical servers and networks without the use of virtualisation or multi-tenancy – powering over 60k deployments each month from 16 global locations. By expanding into Interxion’s Marseille facility, Packet will enable its customers to run compute- and data-intensive workloads closer to their end users. “Packet expanding its presence to Interxion Marseille is further evidence of the location becoming a hub for cloud and content distribution platforms,” says Vincent in’t Veld, vice president, platforms at Interxion. “With over 130 carriers, four internet exchanges and 13 subsea cables, Interxion’s Marseille campus will support the rising demand for Packet’s services around the world.” Packet has seen a rapid increase in demand for its services from industries including online gaming, video streaming, IoT and telecoms. As the Mediterranean hub for content distribution and Europe’s gateway to the Middle East, Africa and Asia, Interxion’s Marseille campus will provide the connectivity Packet needs to reach its customers in these regions. Interxion, www.interxion.com
GTT to acquire Interoute GTT Communications has announced a definitive purchase agreement to acquire Interoute, operator of one of Europe’s largest independent fibre networks and cloud networking platforms, for approximately €1.9 billion ($2.3 billion) in cash. “The acquisition of Interoute represents a major milestone in delivering on our purpose of connecting people, across organisations and around the world,” says Rick Calder, GTT president and CEO. “This combination creates a disruptive market leader with substantial scale, unique network assets and award-winning product capabilities to fulfill our clients’ growing demand for distributed cloud networking in Europe, the US and across the globe.” “This is an exciting next chapter for Interoute, GTT, our customers and our team,” says Gareth Williams, Interoute CEO. “The combined assets and strengths of our two companies create a powerful portfolio of high-capacity, low-latency connectivity, and innovative cloud and edge infrastructure services to support our customers in the global digital economy.” Interoute has received the strong support of its shareholders — the Sandoz Family Foundation, Aleph Capital and Crestview Partners — in its strategy of building and consolidating the European fibre, cloud and connectivity markets to create a player with significant scale and international presence. GTT, www.gtt.net
TIM and Cisco join forces to increase the IT security of Italian companies As of March, TIM, a provider of mobile and fixed telecommunication services in Italy, in partnership with Cisco will offer ‘TIM Safe Web’, a highly secure platform service, integrated in the TIM network, able to safeguard small business users from malware, such as ransomware, phishing and other malicious cybersecurity threats. TIM Safe Web combines TIM’s unique technologies and information with the capabilities of the Cisco Umbrella cloud-based security platform, which leverages a constantly updated global database of threat intelligence. This ‘over-the-network’ service enhancement will be provided to about 600,000 TIM business customers in order to improve both security and quality of service. The protection, including anti-phishing and malware containment, will be active on every system connected to the customers’ LAN, with security policies applied at the DNS (Domain Name System) level, thus blocking requests to dangerous IP addresses before a connection is even established. Cisco Umbrella analyses over 125 billion DNS requests per day in 160 countries worldwide and proactively blocks almost every request to malicious destinations – offering a ‘clean pipe’ for end users. The initiative represents the first concrete step towards the implementation of the Memorandum of Understanding signed recently in Barcelona, which also provides for the sharing of skills and solutions to support the digital transformation of companies, public administration and public services. Cisco, www.cisco.com
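The principle of DNS-level blocking can be shown in miniature. The sketch below is purely illustrative – the domains, addresses and blocklist entries are hypothetical, and this is not Cisco Umbrella’s implementation – but it captures the idea: a lookup is checked against a threat blocklist before any address is returned, so a connection to a malicious host fails before it ever starts.

```python
# Minimal sketch of DNS-level filtering: lookups are checked against a
# blocklist of known-bad domains before any address is returned.
BLOCKLIST = {"malware.example", "phish.example"}  # hypothetical entries

ZONE = {  # hypothetical resolver records
    "safe.example": "93.184.216.34",
    "malware.example": "198.51.100.7",
}

def resolve(domain):
    """Return an IP for a clean domain, or None if it is blocklisted."""
    if domain in BLOCKLIST:
        return None  # blocked at DNS level; no connection is ever made
    return ZONE.get(domain)
```

Here `resolve("safe.example")` returns an address as normal, while `resolve("malware.example")` returns nothing – the client never learns where the dangerous host lives.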
Eneco chooses Greenbyte for data management Dutch utility Eneco has chosen Greenbyte’s modern software system, Breeze, for wind farm monitoring, analysis and reporting. Eneco, headquartered in Rotterdam, the Netherlands, operates in the field of sustainable energy. Eneco takes on a leading role in the energy transition by connecting partners and customers so that they can work together on smart energy solutions and innovative products and services. After intensive market research and consultation, Eneco decided to implement Breeze across its wind farms. The Greenbyte services will be used by Eneco to monitor and optimise about 1GW of wind energy. “We found Breeze to be the most suitable solution on the market and the one that best meets our current needs. The Greenbyte team is easy to work with and the system is continuously developed at a rapid rate,” says Mathieu Meijer, head of asset management and operations onshore wind at Eneco. “We are very happy to have met Eneco’s high requirements and we are very excited to start working with them. They are a very modern and innovative company that really wants to make a difference and constantly seeks new ways to increase production and asset performance,” adds Jonas Corné, CEO at Greenbyte. Eneco, www.eneco.com
Cloudian secures funding commitment of $125 million in joint venture with Digital Alpha Cloudian has announced a $125 million joint venture with Digital Alpha to accelerate adoption of Cloudian’s enterprise object storage systems. The joint venture includes a utility financing facility of up to $100 million from Digital Alpha and its Limited Partners to enable flexible procurement options to accommodate customers’ rapidly growing storage environments. It also includes a $25 million equity commitment to support expansion of Cloudian’s sales, marketing, engineering, and customer support organisations. The $100 million financing facility will enable consumption-model procurement of Cloudian products. Today, IT groups are under pressure to deliver services on demand, free of up-front capital outlays. While the public cloud is right for some of these workloads, others are best kept on premises. Through this partnership, customers can take advantage of an on-premises solution that provides the data sovereignty, performance and control of enterprise storage together with the economics of a pay-per-use model. This joint venture will capitalise on the dramatic growth of object storage in 2018, which is driven by the convergence of massive data growth, the emergence of the Amazon S3 API as a de facto industry standard and accelerating artificial intelligence/machine learning and Internet of Things (IoT) use cases. “For our customers, this venture will enable flexible procurement models and new hardware options to accelerate their transition to next-generation object storage solutions,” says Cloudian CEO Michael Tso. Cloudian, www.cloudian.com
Next Time… As well as its regular range of features and news items, the April issue of Data Centre News will contain major features on cabling and enclosures, cabinets and racks. To make sure you don’t miss the opportunity to advertise your products to this exclusive readership, call Ian on 01634 673163 or email Ian@allthingsmedialtd.com.
company showcase SPONSORED STORIES FROM THE INDUSTRY
Rockley Photonics demonstrates data centre routing ASIC with integrated optics Rockley Photonics is demonstrating its in-package-optics platform to select customers and development partners. Rockley has developed the world’s first single-chip L3 routing switch to directly integrate 100G network ports using single-mode fibre. Leveraging Rockley OpticsDirect technology, this platform is among the first of its kind to integrate high-speed optical interconnect with high-scale CMOS digital and mixed-signal circuits. The switching ASIC employs unique low-power custom circuitry to interface to Rockley-developed photonic chips. Advanced technology for fibre-optic assembly delivers significantly lower power than conventional ASICs that use electrical IO and external optical transceiver modules. By driving optical fibre directly from the package, the product can directly connect to devices placed 500m away, offering a 100x improvement over today’s ASICs using all-electrical IO. The Rockley Photonics platform employs innovative technologies while leveraging established assembly processes to deliver a novel solution that is both highly advanced and suitable for high-scale manufacturing. The product employs an L3 routing ASIC designed by Rockley Photonics and silicon photonics chips manufactured on a Rockley proprietary process.
The switch ASIC and silicon photonics ICs are placed into a customised package enabling fibre-optic links to connect directly into the assembly. Innovative mixed-signal circuit designs enable the photonics ICs to be driven directly by the ASIC without additional chips. “The Rockley Photonics OpticsDirect System represents a culmination of multiple years of development effort by the entire Rockley team as well as our ecosystem partners,” said Dr. Andrew Rickman, founder and CEO of Rockley Photonics. “We are pleased and excited to deliver this important new platform which will usher in the modern era of integrated optics.” Rockley Photonics, www.rockleyphotonics.com
Edgecore Networks introduces the AS7816-64X to reduce operating costs of DC applications Edgecore Networks has announced the AS7816-64X, an energy-efficient 100GbE high-performance 2U rack-mountable Top-of-Rack (ToR) or spine data centre switch. This new ToR switch is designed for hyper-scale cloud environments and high-frequency trading applications, as well as cloud service and telecom service providers. The Edgecore AS7816-64X is a non-blocking network switching device with a greener and more powerful design. It provides line-rate L2 and L3 switching across its 64 QSFP28 ports. Each of the 64 QSFP28 ports can support 1 x 100GbE, 1 x 40GbE, or, via breakout cables, 4 x 25GbE or 4 x 10GbE. Moreover, the high port density provides the maximal performance needed to meet the requirements of high-bandwidth applications. The switch’s formidable capability ensures seamless network transmissions to fit the challenging demands of today’s data centre workloads. For hyper-converged data centre interconnections, the Edgecore AS7816-64X supports flexible and scalable configurations. And to guarantee performance stability under all levels of system loading, the switch is equipped with a high-performance embedded CPU, 16GB memory and a reserve mSATA connector. The AS7816-64X utilises sophisticated packet buffer management that can reject arriving packets and pre-empt packets that were previously inserted into the buffer. When deployed in a CLOS topology with redundancy, equal path, and load balancing technology, the AS7816-64X ensures faster recovery from failed links and enhances overall network stability and reliability. The AS7816-64X switch is designed to be a ToR workhorse that can meet and exceed the demanding burden of data centre operations. Its innovative space-saving 2U rack design can reduce both operating costs and capital expenditures, delivering a cost-effective solution for exchange deployments.
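The breakout options quoted above determine the switch’s aggregate line-rate capacity, which can be sanity-checked with simple arithmetic. The sketch below is derived only from the port counts given above:

```python
# Aggregate line-rate capacity of a 64-port QSFP28 switch under each
# breakout mode described above, assuming all 64 ports use the same mode.
PORTS = 64
modes_gbps = {
    "1x100GbE": 1 * 100,
    "1x40GbE": 1 * 40,
    "4x25GbE": 4 * 25,   # breakout cable: four 25G lanes per port
    "4x10GbE": 4 * 10,
}

aggregate = {mode: PORTS * per_port for mode, per_port in modes_gbps.items()}
# 1x100GbE and 4x25GbE both reach 6,400 Gbps (6.4 Tbps) of capacity;
# 1x40GbE and 4x10GbE both reach 2,560 Gbps.
```

The symmetry is no accident: a QSFP28 port carries four 25G electrical lanes, so splitting one 100G port into 4 x 25GbE preserves the total bandwidth.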
Geist announces new version of DCIM software Geist, a division of Vertiv and provider of intelligent power and management solutions for data centres, has announced version 4.8 of Environet, its popular data centre infrastructure management (DCIM) software. Environet is best known for exceptional real-time monitoring of data centre equipment in facilities ranging from hyperscale to edge environments. Environet 4.8 offers faster and more intuitive access to comprehensive data centre metrics through a streamlined user interface featuring improved navigation and visualisations. The new version also adds HTML compatibility, expanding its range of supported browser platforms. “We continually tune our products to reduce complexity,” said Matt Lane, Geist vice president of customer experience. “Quick access to timely information is vital for data centre personnel. We have re-engineered the Environet interface to provide the fastest possible access to critical data sets and added features designed to automate common but time-intensive tasks.” These new features include more granular device control for maintenance procedures and enhanced tenant power management. The improved maintenance control reduces the number of nuisance alarms by inhibiting the alerting function for a device with a known maintenance event. In addition, the maintenance manager tracks previous activities, keeping a historical record of the work completed on each piece of equipment. Expanded tenant management capabilities tie power use to individual tenants on a day-to-day or month-to-month basis, making point-in-time and historical power use data more transparent and trackable. This data can be exported to a billing system, simplifying billing and SLA compliance for both tenants and service providers. Environet 4.8 is available immediately and is a free upgrade for customers with a current support contract. Geist, www.geistglobal.com
Aermec’s modular chiller offers a new way to cool Aermec UK has raised the bar in chiller design with the launch of a stackable water-cooled modular chiller ideal for data centre applications. The WWM chiller has been engineered to provide optimal performance levels whilst delivering energy efficiencies and maximum flexibility through its modular design. It also offers a small footprint and enables cooling capacities to be increased over time to suit changing requirements. The WWM features dual-circuit units, a reversible water side with hermetic scroll compressors and a source-side plate exchanger. Ease of maintenance has not been overlooked: the WWM provides total accessibility, with the refrigerant components located in a drawer that slides out from the front for easy service and maintenance. Flexibility underpins the entire design. The WWM provides a small footprint of 1,321mm (H) x 1,331mm (L) x 1,151mm (W) and a weight of 676kg. The WWM’s size not only frees up valuable floorspace but enables it to fit through standard-sized doors and into lifts, simplifying access and installation. A maximum of 32 chillers can be linked together hydraulically, either side-by-side or back-to-back, and they can also be stacked on two levels (up to 16 on each level), offering flexible layout options and keeping the overall unit dimensions to a minimum. An optional power bar facilitates single or multiple point electrical connections, and each module has its own electrical panel and control logic, so it can be operated and controlled as an individual chiller. Aermec, www.aermec.co.uk
Migration Myths Justin Day, transformation director at 6point6, discusses the top five misconceptions when it comes to cloud migration.
For many companies, migration to cloud services is already well underway, and, for those that have not yet begun their cloud journey, the beginning is near. It is an exciting time for many businesses, which can now capitalise on all of the possibilities that cloud computing has to offer. Like any major change, though, there are lessons to be learned from those that have ‘seen the movie’ before. While there are a number of areas that require careful assessment, we have listed the top five incorrect assumptions that can lead to problems when migrating to the cloud.
1. Everything can move to the cloud
Many businesses are being driven to adopt cloud services without a proper understanding of what they are really attempting to accomplish. Without a thorough cloud readiness assessment this can seriously delay their progress and, in some cases, may even require the migration to be aborted or reversed. Each system, application, service and/or product needs to be considered not only on its individual requirements, but also on the interdependencies it shares with other components of the existing infrastructure.
2. The Internet supports all my needs
Cloud services owe much of their capability to the internet. This typically allows easier access to services and a more flexible platform with which to manage them; however, attempting to transfer a service to be consumed in this way is not always the correct thing to do. Many existing applications are better served by older protocols, a lot of which do not have suitable encryption or security considerations, and some applications do not perform across NAT boundaries. The point to remember is that a move to the cloud can often be accomplished even without internet connectivity.
3. Networking is dead
Cloud service providers have made a concerted effort to ensure that connectivity is simple. Anybody with a reasonable understanding of infrastructure, code, automation and scripting can get things working with relative ease. That being said, networking isn’t dead, and a lack of skills in understanding the best architecture from a networking perspective often leads to environments where ‘things just work’. The basic networking components (e.g. network containers, routers, gateways, load balancers and security
groups) offered by the major cloud service providers are functional but too simplistic. A strong network architecture is perhaps more important in cloud computing than it has been in more traditional environments, especially as there will likely be a ‘hybrid architecture’ in most organisations for quite a long stretch of time.
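One concrete piece of the network architecture work argued for here: in a hybrid set-up, a cloud address range that overlaps an on-premises range will break routing between the two. The sketch below is illustrative (the address ranges are hypothetical) and uses Python’s standard ipaddress module to catch such a clash at planning time, before anything is provisioned.

```python
import ipaddress

# Hypothetical on-premises address ranges already in use.
ON_PREM = [ipaddress.ip_network("10.0.0.0/16"),
           ipaddress.ip_network("192.168.0.0/24")]

def overlaps_on_prem(cloud_cidr):
    """Return True if a proposed cloud range collides with on-prem ranges."""
    cloud = ipaddress.ip_network(cloud_cidr)
    return any(cloud.overlaps(net) for net in ON_PREM)
```

A proposed cloud range of 10.0.4.0/22 sits inside 10.0.0.0/16 and would clash, whereas 10.1.0.0/16 is safe – exactly the kind of check a network architect makes routinely and a ‘things just work’ deployment skips.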
4. Security is just NSGs Network security groups can be considered the most basic of Layer 4 firewalls. They serve mainly to ‘shape’ the permissions for who, or what, can consume a service or application within a certain part of the cloud environment. They are, however, nowhere near adequate as a standalone security control, yet they are often implemented as the only means of securing the infrastructure. With a number of services being exposed to the internet, and with cyber attacks on the increase, security in cloud is vital and NSGs do not cut it. They are too often used as an excuse to bypass an organisation’s security posture by being falsely represented as having dealt with security concerns. There is a wealth of security options within cloud, and these need to be considered in line with the network architecture so as to provide the most appropriate protective measures for the business.
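To see why an NSG is only a basic Layer 4 control, consider the sketch below (the rule values are hypothetical, and this is a simplification of how any real cloud provider evaluates rules). It matches traffic purely on protocol, port and source range; nothing in it can inspect payloads, detect malware or authenticate users, which is exactly the gap described above.

```python
import ipaddress

# An NSG-style rule sees only Layer 3/4 attributes: protocol, port and
# source range. It cannot look inside the traffic it admits.
RULES = [  # hypothetical allow rules
    {"proto": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
    {"proto": "tcp", "port": 22, "source": "10.0.0.0/16"},  # SSH from internal only
]

def allowed(proto, port, src_ip):
    """Return True if any rule admits this (protocol, port, source) tuple."""
    src = ipaddress.ip_address(src_ip)
    return any(r["proto"] == proto and r["port"] == port
               and src in ipaddress.ip_network(r["source"])
               for r in RULES)
```

SSH from an internal address is admitted and SSH from the internet is refused, but any HTTPS payload from anywhere, malicious or not, sails through on port 443 – filtering it requires the deeper security tooling the article calls for.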
5. Everything is Cheaper Cloud has provided businesses with a wealth of commodity offerings. For example, businesses can now scale up their infrastructure on demand, paying for only what they use and only when they use it. There are no hosting costs to consider, and long-term contracts are limited or non-existent, meaning no lengthy tie-ins.
“With a number of services being exposed to the internet, and with cyber attacks on the increase, security in cloud is vital and NSGs do not cut it.”
The governance and control over spending is often overlooked or lost altogether. Whereas a physical piece of infrastructure used to require spend control before any purchase order was raised, a purchase can now be made with the click of a button. Rarely do the financial controllers have a good idea of what is being purchased and why. Negligence can creep into an organisation if resources are left ‘with the lights on’ when, in fact, they are not being used or simply not needed. Cloud service providers do have tools that can alert administrators to this; but as the amount of resources grows, it can become
increasingly difficult to understand the landscape. The most overlooked of all is egress data costs. For every GB of data transferred out of a cloud service environment (sometimes even within the environment) there is a minor fee. These charges rapidly add up and are regularly overlooked, sometimes presumed to be a ‘necessary evil’ of using cloud services. Depending on the business requirements for an application or service, migration to the cloud may not be the right thing to do.
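Egress charges are easy to underestimate precisely because the per-GB fee looks small. A back-of-the-envelope sketch makes the point (the rate below is illustrative only, not any provider’s published price):

```python
# Back-of-the-envelope egress cost: a small per-GB fee compounds quickly
# at data-centre scale. The rate is a hypothetical placeholder.
EGRESS_RATE_PER_GB = 0.09  # illustrative $/GB

def monthly_egress_cost(gb_per_day, days=30):
    """Estimated monthly bill for a steady daily egress volume."""
    return gb_per_day * days * EGRESS_RATE_PER_GB
```

At that illustrative rate, a steady 500GB a day of ‘minor fees’ becomes roughly $1,350 a month, or over $16,000 a year – the kind of figure that deserves a place in the business case rather than being written off as a necessary evil.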
Conclusion The cloud journey is an important part of the IT strategy for any business. Everything the cloud promises should be treated as the ultimate goal. Enthusiasm should, however, be curbed to ensure that the right decisions are made at the right time and for reasons that benefit the business, so that the maximum return can be attained for the effort. 6point6, 6point6.co.uk
Data Centre News is a new digital news title for data centre managers and IT professionals. In this rapidly evolving sector it’s vital that data centre professionals keep on top of the latest news, trends and solutions – from cooling to cloud computing, security to storage, DCN covers every aspect of the modern data centre. The next issue will include special features examining cabling and enclosures, cabinets and racks in the data centre environment. The issue will also feature the latest news stories from around the world plus high-profile case studies and comment from industry experts. REGISTER NOW to have your free edition delivered straight to your inbox each month or read the latest edition online now at…