
Volume IV, Issue 5 2014


Networks: Taking a holistic view

FEATURES Load testing in modern data centres FEATURES Benefits of a converged infrastructure FEATURES Software-defined trends

NETCOMMS europe Volume IV Issue 5 2014


Flexi-Box compact pod box The XSpace UK Flexi-Box is a compact outlet box often utilised in high density work area environments that require rapid deployment of cabling systems.

Supports Cat5e, Cat6 and Cat6a installations • 1 to 6 port knockouts accept industry standard LJ6C modules • Angled port presentation Part No. 791806 FROM

£6.59 In Stock for Next Day Delivery

Mounting Options Desk, cable tray, wall and underfloor

• Hinged lid with single quick-turn fastener • Easy to access pre and post deployment • Accepts 2x20mm or 2x32mm conduit, from either end • Offers rapid deployment of cabling systems • Ideal for high density work area environments • Typically deployed in buildings with a raised floor • Can be desk, cable tray, wall or underfloor mounted.

01480 415000


COMMENT/OPINION 10 An Installer Scheme That Works All accredited IT programs are not created equal 40 Network Labelling Is Now A Must-Do Why labelling cables and other kit is so essential

E Space Business Centre, 181 Wisbech Road, Littleport, Cambridge, CB6 1RA

CONVERGED INFRASTRUCTURE 12 Understanding The Internet Of Everything Why identity matters in the Internet of tomorrow 16 Could New Innovations Stunt Fibre’s Growth? How the networking future is not all about microwave and fibre

Tel 01353 865403

36 Taking A Holistic View On Networks Converged networks need a holistic approach to meet their potential

Published under licence by: LGN Media, a subsidiary of The Lead Generation Network Ltd

SECURITY 20 Securing 3rd Party Access

Publisher & Managing Director: Ian Titchener Editor: Steve Gold Production Manager: Rachel Titchener Advertising Sales: Bob Handley Reprographics by Bold Creative

The challenge of security and easy access

DATA CENTRES 18 Data Centre Information Protection Why data is king in the modern data centre 22 Why Aisle Containment Must Be Fire Aware Understanding data centre fire suppression technology 26 The Perils Of Ignoring Load Testing Why data centre load testing is now a ‘must have’ option 28 The Need For Power Flexibility It’s all in the power consumption figures... 32 The Cat 8 Cabling Revolution Will Cat 8 change the face of data centres?

CASE STUDY 14 The power of leading edge CCTV 24 Cutting The Costs Of WAN traffic

How a decentralised approach to CCTV can reap dividends


ISSN 2045-0583 This publication is protected by copyright © 2013 and accordingly must not be reproduced in any medium. All rights reserved.

Solving remote issues with technology

FIBRE 34 The Power Of Optical Field Testing Understanding the demand-driven networking world 38 Encircled flux and optical loss analysis Understanding the need for enhanced optical networking testing

REGULARS 5 Foreword 6 Industry News

The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The author and publisher, and its officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brandnames are respected within our publication. However, the publishers accept no responsibility for any inadvertent misuse that may occur.

IP costs and bandwidth needs slashed at Produce World

30 The Next Generation Branch Office

Price: €50 | £35 Subscription rate: €200 | £140

Printed by MCR Print, 11 English Business Park, English Close, Hove, East Sussex  BN3 7ET Netcomms stories, news, know-how? Please submit to including high resolution (300dpi+ CMYK) images.




When wireless meets wireline… Like many professionals, I rely on a VPN on my laptop to keep me in contact with my server back in the office. After spending some time in Germany during August - and refusing the hotel’s kind offer of WiFi at 19 euros a day - I took my shiny new 4G MiFi unit with me, loaded with a local SIM card that cost two euros a day for half a Gigabyte of data.

As well as giving me more money to spend on local ales, I was impressed to receive a solid 4G cellular signal in the hotel, with a packet latency of - wait for it - under 20 milliseconds. That’s fast – far faster than a WiFi signal in most public access WiFi stations in the UK, let alone Germany - and meant that my VPN back to the office server was operating at pretty well the same speeds as from my desktop.

So what’s going on? Chatting with staff in a Hamburg O2 store, it seems that the German networks are throwing big money at making their 4G offerings as attractive as possible to consumer and business users. The reason for this is that 4G is up to six times more efficient in terms of cellular resources than 3G services, meaning that cellcos can save money on several fronts, not least in terms of squeezing much more traffic into a cellular base station’s bandwidth.

And, whilst the myriad cellular base stations that one sees out of the car window when travelling along the motorway use microwave on a local hop basis to feed into the nearest network concentrator, those concentrators use fixed line resources to pump their data into the cellco’s networks and, of course, the high-speed Internet.

This is actually a win-win-win situation for the cellcos, their users and the vendors that supply their wireless and wireline systems, as the take-up of 4G services in laptops, iPads and other mobile bandwidth-heavy devices is really starting to take off.

May all of your network problems be little ones. Steve Gold Editor - Netcomms Europe






HellermannTyton has launched LightGuide 2, a new fibre ducting product range. The ducting was developed to offer a complete ducting system with a wide range of modular components. Basic components include horizontal and vertical ducts, elbows, drops and a range of junctions. The fibre optic ducting system is designed to protect and route fibre optic cable in the data centre, central office, head end and PoP. Developed to ensure the total protection of fibre cable combined with ease of use, LightGuide 2 maintains a minimum 50mm bend radius throughout the system, protecting against signal loss or damage to cables due to excessive cable bends.

Belden has added pre-terminated fibre connectivity products - FiberExpress - to its data centre solutions range. The new products include low-smoke, no-halogen (LSNH) pre-terminated fibre assemblies and an ultra high-density rack-mount fibre connectivity system. The company says that the FiberExpress ultra high density (UHD) rack-mount fibre connectivity system includes a wide range of pre-terminated cassettes and field-terminated frames to support virtually any high-density connectivity need - as well as offering cost-effective migration from current 10GbE to 40 and 100GbE applications. The system also makes it possible to combine fibre, copper and multimedia connectivity in mixed media environments – all in the same rack unit of space.

Mirantis claims to have quietly become one of the largest OpenStack providers for companies in the $1.6 trillion telecoms industry. Clients now include Ericsson, Orange, Huawei, AT&T and Tata Communications, all of whom are using the Mirantis OpenStack technology.


Mirantis says it has been supporting customers in the telecommunications space since the inception of OpenStack more than four years ago, enabling these companies to attain the production qualities that people come to OpenStack cloud for today.

Company CEO Adrian Ionel said the industry majors are turning to Mirantis OpenStack because they know that the clouds it enables are robust enough to handle their largest workloads in production with zero downtime. “The fact that we focus exclusively on OpenStack products and services means that we have developed an unmatched level of expertise across a broad range of standards and use cases, across the ecosystem,” he explained.

Nimans has unveiled a wireless video intercom system from Panasonic. The distributor says that the new systems will allow its resellers and partners to capture a bigger share of the home and small office surveillance market.

The expandable, easy-to-use system supports up to six wireless monitor stations and optional heat-seeking security cameras. A wide-angle main door station camera is combined with SD card recording functionality, zoom capabilities on the master five-inch touch screen monitor and electronic gate opening support.

Paul Burn, head of category sales at Nimans, said the flexible system provides around-the-home or small office support for convenient peace-of-mind security. “This is a growing area of the market that allows resellers new ways to capture additional revenue opportunities,” he explained. “The home and office automation sector continues to accelerate and this product fits perfectly in a dynamic and developing area of the comms arena.”

ISG says its research suggests that outsourcing activity in Europe, the Middle East & Africa (EMEA) reached a record high in the first half of the year, based on the volume of contract awards.

The Q2 2014 EMEA ISG Outsourcing Index, which measures commercial outsourcing contracts with an annual contract value (ACV) of €4 million or more, found that first half ACV across the EMEA region totalled €5 billion, an increase of 32 per cent year-on-year. The number of contracts signed was up 25 per cent for the same period.

David Howie, a partner with ISG, said that EMEA continues to maintain its leading position in the global outsourcing market. “The region’s increased contract volume and value in the first half was driven by a rise in demand from continental Europe, most notably France and Germany. Looking ahead, we’re seeing a great deal of transaction activity in the market that should come to fruition in the second half of 2014. Taking the year as a whole we would expect ACV in the region to comfortably exceed 2013 levels,” he explained.

The UK market maintained its steady performance, with ACV of €1.4 billion - an increase of six per cent compared with the first half of last year. This occurred despite a slight drop in contract counts to 83, from 92 the previous year, says ISG.

For the quarter, EMEA ACV remained flat compared with the strong first quarter of 2014 but finished 50 per cent ahead year-on-year. The region benefited from a steady IT outsourcing performance. Though modestly down sequentially, this was the strongest second quarter ever for ITO award value (€2 billion) and counts (111) in EMEA.

Network Rack

Reliable, safe, efficient, dependable, sustainable


UK engineered – UK designed – UK manufactured – UK next day delivery* *Order before 5pm for free next day delivery of stocked racks (UK Mainland only)

Value without compromise The class leading 800kg static load Standard and Active Network Racks from B-Line offer real value, without compromising on product quality. Designed, engineered and manufactured in the UK, they offer a practical yet highly versatile solution for housing common infrastructure.

MASTER DISTRIBUTOR or call 01480 415000



How the software-defined trend is changing our industry

Riding The SDx Wave By Dave Le Clair, Senior Director of Strategy, Stratus Technologies


Dave Le Clair discusses the trend of software-defined architecture

The term ‘software-defined’ is becoming a familiar one. Everywhere we look in IT today there is a discussion about how software-defined architectures are redefining the world around us. From software-defined data centres to software-defined networks and software-defined storage, ‘SD anything’ - otherwise known as ‘SDx’ - is a hot topic.

The concept behind SDx is simple. By separating the underlying hardware from the software that manages the infrastructure using an abstraction layer, SDx allows for fast provisioning, configuration and teardown of new, agile infrastructures. As a result, IT departments will be freed from many of the operational inefficiencies and manual tasks associated with traditional hardware-based approaches.

Some CIOs question the true value of implementing SDx infrastructures. However, those who are making the move are seeing the benefits – they are becoming more agile, completing projects more quickly and innovating faster – all without the need to increase head-count or costs. At Stratus we see the move towards SDx as a positive trend, freeing resources up to focus on tasks that deliver real business value, rather than just managing existing applications and doing basic IT tasks.
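The reconciliation idea behind SDx — declare the desired infrastructure, and let a software control layer converge the real environment towards it — can be sketched minimally as follows. The resource names and the naive placement logic here are purely illustrative, not any particular SDx product:

```python
# Minimal illustration of the SDx idea: infrastructure is described
# declaratively, and a software control layer works out the provisioning
# and teardown actions needed to match it. Names are invented.

desired_state = {
    "web-tier": {"instances": 3, "cpus": 2, "network": "frontend"},
    "db-tier":  {"instances": 2, "cpus": 8, "network": "backend"},
}

def reconcile(desired, actual):
    """Return the actions that bring 'actual' in line with 'desired'."""
    actions = []
    for name, spec in desired.items():
        have = actual.get(name, {}).get("instances", 0)
        want = spec["instances"]
        if want > have:
            actions.append(("provision", name, want - have))
        elif want < have:
            actions.append(("teardown", name, have - want))
    return actions

# A real control loop would run this continuously; here we call it once.
actual_state = {"web-tier": {"instances": 1}}
print(reconcile(actual=actual_state, desired=desired_state))
```

The point of the sketch is that no step names a specific server: the abstraction layer decides what to provision or tear down, which is what frees IT departments from the manual, hardware-bound tasks described above.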


Stratus everRun: making light work of SDx...

Fast-moving software-based solutions might help businesses capitalise on opportunity, but without reliability it may become a futile exercise. While concepts for software-defined storage and networks continue to be developed, we are seeing exciting new solutions in the availability space start to emerge.

Hardware-based, fault-tolerant servers and software clustering tools have historically been the de facto standard used to protect business-critical workloads from downtime. As software-based availability solutions have matured to deliver mainframe-like availability levels that can compete with hardware-based fault-tolerant servers (running on commodity servers), this has given way to a new breed of availability solutions. Software-defined availability moves downtime prevention and recovery decisions outside the application layer to the underlying software layer. And unlike traditional hardware-based availability solutions, uptime is not dependent on a specific set of hardened servers – it is in effect abstracted from the application and from the hardware.
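The shift described above — failure detection and restart decisions made by a software layer abstracted from any particular server — can be sketched with a toy supervisor. The hosts, workloads and heartbeat policy are invented for illustration; this is not Stratus everRun's actual mechanism:

```python
import time

# Sketch of software-defined availability: a supervisor process, not the
# hardware, decides when a workload has failed and where it runs next.
# Host and workload names are illustrative only.

class Supervisor:
    def __init__(self, hosts, heartbeat_timeout=5.0):
        self.hosts = hosts                  # pool of commodity hosts
        self.timeout = heartbeat_timeout
        self.placement = {}                 # workload -> current host
        self.last_beat = {}                 # workload -> last heartbeat time

    def place(self, workload):
        host = self.hosts[0]                # naive placement policy
        self.placement[workload] = host
        self.last_beat[workload] = time.monotonic()
        return host

    def heartbeat(self, workload):
        self.last_beat[workload] = time.monotonic()

    def check(self, now=None):
        """Restart any workload whose heartbeat has gone stale,
        moving it to another host in the pool if one is available."""
        now = time.monotonic() if now is None else now
        restarted = []
        for wl, beat in self.last_beat.items():
            if now - beat > self.timeout:
                failed = self.placement[wl]
                spares = [h for h in self.hosts if h != failed]
                self.placement[wl] = spares[0] if spares else failed
                self.last_beat[wl] = now
                restarted.append(wl)
        return restarted
```

Because the placement table is ordinary data, uptime is a property of the supervisor's policy rather than of any hardened server — which is the abstraction the article describes.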

In the cloud

Cloud infrastructures are typically designed for scale and built with low-cost commodity components, which means failure is an ever-present reality. To keep applications up and running in the cloud, they need to be designed to work around potential failures. For new applications that are designed specifically for the cloud, this can be addressed by writing availability into the application itself. But not all IT organisations are equipped to take on this highly skilled exercise – it can be time consuming, expensive and still not result in a solution that avoids downtime at all costs.

And what does this mean for business critical legacy applications that require fault-tolerant levels of availability? These applications have traditionally been built under the assumption that the infrastructure is reliable, with availability built in at the hardware layer. To achieve the same stringent levels of availability in a cloud environment today would mean rewriting applications that have been tried and tested over years.

Software-defined availability, when applied to cloud environments, leverages the inherent flexibility of the cloud to deliver the right levels of availability at the right time, on a per-workload basis. For both native cloud and legacy applications, the abstraction of availability from the hardware or application layer will enable IT and line of business executives to change the level of availability based on their current application requirements. This is extremely useful for applications that are considered to be business critical for some of the time, but not all of the time.
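Per-workload, time-varying availability of the kind described above might be expressed as a simple policy table. A minimal sketch, assuming hypothetical workload names, tier labels and business hours:

```python
from datetime import time as tod

# Sketch: the required availability level is a property of the workload
# and the clock, not of a fixed set of hardened servers. All workload
# names, tiers and hours below are invented for illustration.

POLICIES = {
    # workload: (tier during business hours, tier otherwise)
    "order-entry": ("fault-tolerant", "high-availability"),
    "reporting":   ("high-availability", "best-effort"),
}

BUSINESS_HOURS = (tod(8, 0), tod(18, 0))

def required_tier(workload, now):
    """Pick the availability tier a workload needs at a given time of day."""
    peak, offpeak = POLICIES[workload]
    start, end = BUSINESS_HOURS
    return peak if start <= now <= end else offpeak

print(required_tier("order-entry", tod(10, 30)))  # fault-tolerant
print(required_tier("reporting", tod(23, 0)))     # best-effort
```

An availability layer consulting such a table could protect the order-entry system with fault tolerance during trading hours and relax it overnight — the "business critical some of the time" case the article highlights.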


It is clear that the same hardware and software architectures and infrastructures that have been used for years can no longer meet the demands of the modern enterprise data centre. To survive in this new world, CIOs must rethink how to deal with this shift in infrastructure requirements.

Not all of the SDx technologies or the business processes in the enterprises deploying them are mature or even clearly defined. However, the trend towards widespread adoption of SDx is very real and there is a clear vision of where we need to get to in this journey. Ultimately, we expect to see most new IT projects deployed into large data centres using SDx solutions for efficiency and agility. There is still, however, work to do on several fronts to standardise the solutions that will win in this market and to create a secure and reliable environment where complex business critical applications can reside.


All accredited IT programs are not created equal

An Installer Scheme That Works By Mike Gilmore, DIRECTOR standards@fia


Mike Gilmore sings the praises of a new installer scheme...

The IT world has repeatedly tried to develop an effective approved installer scheme. And it would appear that the Fibreoptic Industry Association has finally managed to create a scheme which has attracted a substantial percentage of its members to register as accredited installers, has obtained supply chain endorsement and is now being referenced in contract documentation. This article summarises the requirements of the scheme and the opportunities offered by it - following the first anniversary of its operation in June 2014.

For as long as I can remember, trade associations have initiated, advertised and funded schemes that have “withered on the vine.” The Fibreoptic Industry Association had produced at least two versions of its approved installer scheme, which failed to attain critical mass and were therefore of no value to clients, with the result that most approved installers failed to renew their approval in subsequent years.

By comparison, the FIA accredited installer scheme - initiated in early 2013 and fully operational by June of that year - has been incredibly successful. Almost 25 per cent of the 103 FIA members who quote ‘Installation’ as their primary activity are now registered under the scheme. Equally importantly, all those registered for the first year have retained their accreditation for the second year and the number of accredited installers continues to grow on a monthly basis.

So what has introduced this different level of impetus to this scheme where so many others have failed? It is true that supply chain endorsement has helped. Alan Bullen, the responsible FIA director, has worked hard to obtain that endorsement, and the recognition of the scheme in clients’ contract documentation is also very much a reflection of his hard work. That being said, there is a huge amount of chicken-and-egg involved, i.e. installers are not going to join a scheme that clients don’t recognise and clients aren’t going to recognise a scheme with only a small number of accredited companies.


Something else

So there has to be something else that encouraged the initial uptake of the scheme… and that has been the demand for low-cost “continuing professional development”, met through the obligation of accredited installers to commit to regular attendance at quarterly seminars provided at low cost, essentially free of charge, by the FIA.

Each quarter, the FIA provides two seminars addressing different aspects of installation quality assurance and technical expertise. By attending these seminars, accredited installers obtain a drip-feed of good practice, they learn to navigate around the FIA website to obtain the correct technical documentation related to specific aspects of installation and, most importantly, they have direct interaction with FIA directors in installation, technical and standards development roles. That interaction empowers the installers to ask questions and highlight areas that may require further explanation in the form of FIA white papers and similar documents. Accredited installers also have their own private area of the FIA website where any documents created in response to their needs are stored.

In the coming year the FIA Installation Directorate is developing its external installations content, including its all-important inter-relationship with the civilian world. This will extend the type of information we can provide at the AIS seminars.

The AIS is part of the FIA Risk Reduction Programme umbrella, under which the FIA believes that reduction of risk is the prime quality assurance driver for cabling installations - minimising the probability of complaints and litigation - because if installers minimise the levels of risk to themselves then they also minimise the risk to their customers, and vice versa. The obligated commitment of the accredited installer to increase their skill-sets and quality assurance controls should be reflected in a reduction in risk to their clients.
In response, the FIA Arbitration Scheme is offered to accredited installers and their clients free of charge and without prejudice in situations where one or the other is unhappy with an installation outcome. This offers both parties a cost-effective way of obtaining a resolution without the need to ‘get legal’, with all the costs that such a process involves.

The FIA AIS provides a good model for ‘non-fibre’ installations but unfortunately there is no effective industry association addressing that part of the market.


As a result, and in recognition of the fact that most installers undertake IT installations of both copper and optical media, the FIA attempts to provide information that is general in scope, particularly in relation to installation quality assurance. So being an accredited installer under the FIA scheme is also a good predictor of a company’s commitment to installation skills and procedures in other technology areas. If you want to know more, learn about the scheme both from the installer’s viewpoint and also that of the client. A current list of accredited installers is available at eais01-05.htm.


Switch to the future

FTTO Active & Passive Solutions Nexans is pleased to announce LANactive, an alternative approach to structured cabling. Using fibre-to-the-office (FTTO) topology together with access switches installed near to the workplace, it provides Ethernet services via standard copper-based RJ45 technology to the device.

The approach offers significant cost savings and other benefits in specific circumstances:

• Long distance transmission • Elimination of costly floor distribution • Reduced cable containment • Refurbishment with minimum disruption • Redundancy at user level

Global expert in cables and cabling systems


Why identity matters in the Internet of tomorrow

Understanding The Internet Of Everything By Geoff Webb, Senior Director - Solution Strategy, NetIQ


Geoff Webb discusses the growing link between identity and The Internet of Things...

The increasing complexity of technology has brought about a variety of issues for security organisations. As a reaction to trends such as BYOD and cloud, companies are renewing their focus on keeping data safe. In order to protect data in this new landscape, IT departments are now focusing on two key questions: Are you who you say you are? And are you doing what you’re supposed to? Security professionals are spending an increasing amount of time considering how to define a user. It’s vital that an organisation understands exactly who a user is and what they do, as without the knowledge of what constitutes normal behaviour data is at serious risk from external parties as well as insiders. Understanding where identity fits into IT security is becoming even more problematic as the ‘Internet of Things’ (IoT) continues to evolve. A recent report by HP found that devices designed for the Internet of Things are full of inconsistencies when it comes to security. While organisations still have time to adapt before the IoT age is truly upon us, they need to start conducting reviews and implementing standards before the explosion of connected devices makes this an impossible task.

Connectivity growth

Identity is inevitably going to be a challenge as the IoT adds to our complex IT landscape, and as HP’s report demonstrates, it is about to get much, much harder. The number of connected devices is estimated to reach one trillion by 2020 and the extent of the impact this will have on our daily lives is almost unimaginable. This new world of the Internet of Things will change everything from the way we work to the way we play. IT departments need to consider how best to manage and secure these devices to make the most of this trend, balancing the needs of productivity, innovation, and security. Arguably, the first step in finding a solution lies in how we define the very idea of identity, because identity lies at the heart of unlocking the real potential of the IoT. This enables us to engage with one another, personally and professionally, in completely new and personalised ways while ensuring our data is protected. In the past, the term ‘identity’ has been used to uniquely define either a thing or a person. Nowadays, identity can be better explained by looking at multiple elements, like contextual clues, including our previous behaviour or interaction with others, as well as our interactions with third parties. Broadening the definition of identity to encompass these ideas will play a key part in managing the variety of IoT devices and is absolutely critical to keeping us and our devices secure.

Everyday Internet

Earlier this year, it was reported that numerous connected domestic devices – including fridges and light bulbs – had been hacked. Following these reports, the IoT started to gain widespread traction in the media for the first time and is now perceived by the majority as an important part of our future. In fact, the IoT is already a very real thing in many places. Some devices are able to monitor themselves in such a way that if something were to break, or if the device knew it was time for a regular service, it could automatically schedule maintenance. Another area where the IoT continues to develop is healthcare, with medical devices in constant communication with each other to monitor patients and alert doctors should something serious occur. The example of self-monitoring components is interesting when considered as small parts of a whole – of a large overhead crane, for example. Each component can be given its own identity and individually tracked, right through from the manufacturing phase to the point where it needs to be replaced. Its lifespan can be improved and downtime reduced, especially significant for an industry such as manufacturing where efficiency is so critical. In order to make this happen, each device requires its own individual identity – an identity that must be assigned and managed. And therein lies the challenge at the centre of the Internet of Things. As the IoT becomes a part of our daily lives, it’s important that we look to develop an ‘Identity of Everything’ alongside the Internet of Things to cope with the management of these multiple identities.
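The broader notion of identity the article describes — a unique ID plus contextual attributes and a baseline of normal behaviour — can be sketched as a small model. The device names, peers and thresholds below are hypothetical, chosen to echo the fridge example:

```python
# Sketch: a device's identity is more than its serial number. It also
# carries who it normally talks to and how much, so that out-of-baseline
# interactions can be flagged. All names and values are illustrative.

class DeviceIdentity:
    def __init__(self, device_id, device_type, normal_peers, normal_rate):
        self.device_id = device_id
        self.device_type = device_type
        self.normal_peers = set(normal_peers)   # peers it usually talks to
        self.normal_rate = normal_rate          # baseline messages/minute

    def is_normal(self, peer, rate):
        """An interaction is 'normal' if both the peer and the traffic
        volume fit this device's established baseline."""
        return peer in self.normal_peers and rate <= self.normal_rate * 2

fridge = DeviceIdentity("fridge-01", "appliance",
                        normal_peers={"home-hub", "grocer-api"},
                        normal_rate=1.0)

assert fridge.is_normal("home-hub", 0.5)          # routine check-in
assert not fridge.is_normal("unknown-host", 0.5)  # unfamiliar peer
assert not fridge.is_normal("home-hub", 500.0)    # abnormal traffic spike
```

However crude, the sketch shows why identity management for the IoT must store behavioural context alongside the identifier: the hacked-fridge scenario is only detectable against a recorded baseline.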

IoE: Connecting multiple devices will be the norm...

A new norm for business

As the connections between devices and people continue to grow, understanding each unique element will be critical in ensuring that the IoT is a safe, secure environment – both for machines and for users. For organisations looking to make the most of these new technologies, there will be an excess of new commercial opportunities. Buying behaviour, product preferences, even entire markets, can be understood more clearly, creating new business models and ways of engaging with customers, but this is only possible if all pieces of the identity puzzle are put into context. Fundamentally, companies will be able to relate to their customers on a far deeper level. As an example, your car will know when you’re ten minutes from home and let your smart thermostat know to turn the heating on in time for your arrival. Your fridge will know when food is running low and order your favourite items to be delivered on your return so you’ll never run out of milk or eggs again.

Endless possibilities

Alongside these limitless possibilities, there are also inescapable complexities that need to be addressed such as finding the balance between security and access. As the volume of connected devices continues to increase, their interactions could become overwhelming. We need to understand what interactions are ‘normal’ to identify what’s abnormal – and potentially malicious – behaviour. As more devices start to communicate with each other, understanding their unique identities and how they interact with one another will be of the utmost importance to ensure our information stays safe. Not only that, it is knowing what constitutes ‘normal’ behaviour for these devices that will be most integral to ensuring the Internet of Things remains a secure environment, full of opportunity.

Get a better view of your fiber

Fiber Visualizer - get a graphical summary of all your fiber faults The NEW Fiber Visualizer simplifies the entire fiber testing process. Automatically setting the correct test parameters for your fiber, it quickly and easily displays a self explanatory graphical summary of the fiber under test, instantly highlighting any problems with their location and severity. A pdf report can then be generated to complete the test process. Available now on the uOTDR & MT9083 Access Master series.

• Test up to four wavelengths with a single unit • 7 inch widescreen TFT-LCD, ready to test in 15 seconds • Test ultra-long fibers >200 km and rapid PON testing up to 128 splits • NOW with larger screen, longer battery life and only 2.6 kg • One button pdf report generation

Scan the code to find out more and get your FREE Guide to Understanding OTDRs

www.anritsu.com

Europe +44 (0) 1582-433433 ©2014 Anritsu Company



How a decentralised approach to CCTV can reap dividends

The power of leading edge CCTV

Introduction

How a growing manufacturing business benefited from state of the art CCTV security at its new site...

Disposables UK Group is a family-owned business, operating for 26 years, based in Meltham, near Huddersfield in the UK. The company manufactures and distributes paper disposables and cleaning and hygiene products for the ‘away from home’ market. The company was the first UK paper converter in the away from home market to gain EU Ecolabel certification on its flagship Bay West range of washroom dispensers and paper products. The Bay West products were also chosen for supply in certain areas of the 2012 Olympic and Paralympic Games.

Five year plan

In July of 2013, the Yorkshire-based Group took a step nearer to achieving its ambitious five year growth plan with a move to a larger site in Meltham, near Huddersfield. The business, which is one of the largest employers in Meltham, has invested £1.7m in new machinery as part of a strategy to double its current £16m turnover to £30m by 2018. Prior to the move, the company spread its manufacturing, distribution and administration across four separate sites. As part of this consolidation process, Rob Saunders, engineering and QHSE manager for Disposables UK Group, began evaluating the options for a new CCTV system to protect the enlarged combined site.

As Saunders explains: “With the sizeable investment we had made in the new facility, it was essential that our security systems should be comprehensive and highly integrated across CCTV and access control. As we were putting in a fresh system, it allowed us to build a security platform that was both fit for purpose on day one while allowing us to grow as and when needed.” The site includes a 7,500 sq ft modern office facility including a state of the art reception, offices, training room, and product display areas.

CCTV: Monitoring the warehouse 24/7...

24 hours a day

As a site requiring access up to 24 hours a day, Saunders needed a CCTV system that would remain constantly vigilant without constantly flooding the network and storage servers with mostly static video content. To provide expert support for the project, Saunders contacted AML CCTV for design, implementation and maintenance services around the access control, monitoring and security systems needed to secure the new site. Based on the requirements for efficient network usage, high-resolution video and reliable operations, AML recommended a MOBOTIX-based solution. “The demonstrations we had on the MOBOTIX CCTV and the impressive reference sites ticked all the boxes for us,” explains Saunders. Luke Holland and Chris Philpot, two senior engineers from AML, worked closely with the Disposables UK technical team throughout an intensive installation. “AML had the technical knowledge, flexibility and level of professionalism that we needed to get the job done in what was a challenging four-month project,” explains Saunders. “It is fair to say that without their expertise, we simply would not have hit our deadline.” Monitoring the entire facility, including full external coverage, required just 28 MOBOTIX cameras running off a dedicated fibre backbone network. The relatively low number of cameras

required to monitor the site was due to the high-resolution imagery offered by MOBOTIX and hemispheric technology, which allowed a wider area to be monitored with a single camera.

Complete contrast

This is in complete contrast to most other CCTV systems, where the camera typically has no real intelligence and relies on decision-making and image processing taking place at ‘the core’ of the network via centralised software or a DVR. Because the camera can store video within the device and only needs to send video to a central repository at the discretion of the operator, Disposables UK did not need complex and expensive control room servers and network video recorders. In addition, AML has integrated a Paxton door entry system and electronically controlled access gates, all fully covered by MOBOTIX cameras using MxControlCentre monitoring and incident indexing software. “One of the key advantages of MOBOTIX for us is that we can use mobile devices such as our iOS tablets to view images from any camera, whether on or off site,” explains Saunders. “Not only do we use the system for 24-hour-a-day security monitoring, it also allows us to monitor our manufacturing and logistics processes in real time.” “The MxControlCentre software is incredibly easy to use and, because the system only records on movement, we need only a minimal amount of disk storage. The additional built-in camera storage means that the system will keep recording even in the event of a network problem,” says Saunders. Since its implementation, the system has performed flawlessly, and Saunders commends the team from AML for its professionalism.


How the networking future is not all about microwave and fibre

Could New Innovations Stunt Fibre’s Growth? By Frank Kaufhold, Managing Director, UTEL


Frank Kaufhold explains why there is life left in copper technology...

With the recent growth in smartphone and tablet users, alongside the development of hundreds of thousands of applications, consumers around the globe are using and expecting availability and access to more and more mobile data. Indeed, this market is growing at such a rapid rate that, according to a 2013 Cisco report, by the end of 2014 the number of mobile connected devices will exceed the number of people on earth. That same report predicted that by 2018 there will be nearly 1.4 mobile devices per capita. Meanwhile, global mobile data volumes have nearly doubled every year. For mobile and fixed operators alike, the data leads to just one conclusion – now is the time for a 4G infrastructure to be put in place. The introduction of 4G presents both opportunities and challenges in equal measure; with one of the main obstacles for operators being the problem of providing backhaul to individual cells. This is nothing new. Backhaul is already a challenging issue in the 3G world and 4G deployments only serve to amplify this.

After only just overcoming the demand for 3G, a world of 4G subscriptions is already in motion, with the US, Japan and the majority of Europe already a part of the revolution. Add to the mix the era of the smart home which is also on the horizon and data volumes can only continue to increase, with customers expecting - and demanding - an even more reliable network. The more those customers rely on these smart technologies, the more they will notice when the service is disrupted, making it imperative that the challenge backhaul brings to the table is overcome.

Finding a solution

The question, then, is how to do this? The telecoms industry seems to be primarily focused on fibre as the answer and, on face value at least, it is certainly an attractive option. Fibre can provide backhaul for all mobile data networks and boasts the ability to provide almost unlimited data throughput. More generally, fibre is also immune to many environmental factors that affect copper cable, most notably water ingress that

has been a big factor recently. The core is made of glass, an insulator, so no electric current can flow through it, eliminating electromagnetic and radio-frequency interference (EMI/RFI), crosstalk, impedance problems and more. Fibre cable can also be run next to industrial equipment without worry, is less susceptible to temperature fluctuations and can be submerged in water. There are, however, also downsides to fibre. The first, and probably most important, drawback is the huge expense associated with deploying it. Beyond the initial cost, while more durable than copper, fibre is vulnerable to faults, and these tend to be caused by human error, for example poor practice, lack of training or carelessness. There is also a risk that other utilities will accidentally come across cables laid in the street or overhead and unwittingly damage them. Fixing these faults presents a further challenge for fibre, in that so much has already been achieved in fault-fixing on copper lines, with centralised testing systems allowing operators to reduce

UTEL’s testing rig - checking the innovations...



staff and costs by around 50 per cent. The trick is to replicate for fibre the success achieved with copper. Our GPON (Gigabit-capable Passive Optical Networks) fibre management system has managed to do that, but it is not yet widely deployed enough to realise its potential to eliminate the conventional manual fault-finding processes, along with the skilled technicians, the truck rolls and the costly handheld OTDRs (Optical Time-Domain Reflectometers) previously required to fault-find PONs. The lack of deployment of a centralised testing system could stem from the fact that fibre still has fairly low penetration, suggesting that while this technology has huge potential for the future, there is still work to be done for that potential to be realised.

Microwave options

Moving away from fibre, another solution currently being explored by the telecoms industry to meet growing demand is high-bandwidth microwave. This technology offers irrefutable speed advantages, mainly because it travels point-to-point, so the data covers the shortest distance possible. Like fibre, however, microwave solutions are expensive to deploy, and the point-to-point approach means they are susceptible to signal interference from obstructions that interrupt the line the data is taking. This makes them unsuitable for many urban and suburban areas, as well as hilly or mountainous regions. In addition to the potential for signal disruption, there is a finite number of microwave towers across Europe and a finite number of dishes that can be fitted to those towers. Inevitably this will lead to a race among providers to gain access to them, meaning that microwave access is likely to become restricted to the early adopters. As a result of the drawbacks of fibre and microwave solutions, innovative technology vendors are looking more and more at tried, tested and reliable copper solutions in a bid to provide an interim answer to growing demand while fibre

infrastructure roll out is completed. While the reasons operators had begun to move away from copper – mainly the inability to handle the huge amounts of data required for Internet and television – still exist, so too do the advantages, including the fact they use PoE (Power over Ethernet), protecting them from power cuts; have less expensive electronics and are more flexible. Furthermore, copper lines still make up the majority of home connections so utilising them would provide a cost-effective and economically justified outcome.

The old and the new

Cost-effective and economically justified is, of course, good news for operators, but what about the operator’s customers? All that matters to them is being able to get services and a high-speed connection from their home devices at the touch of a button, with no complications and at a low cost. Whether the connection and services are delivered over fibre, microwave or copper is not a factor in their choice of network provider. So, how can operators overcome the disadvantages of copper and eliminate the cons that stop it delivering the sort of service customers want? Success has already been seen with widely available solutions that boost the speed of copper lines and reduce the cost for end users, but, thanks to some innovative technology vendors and solution providers, enhancement of existing copper lines has been taken a step further. One of the main limitations of a copper infrastructure is the requirement for interconnect technology to be manually managed, resulting in higher costs and an increased risk of human error. For operators choosing to adopt new high-speed copper technology for backhaul infrastructure this is usually a huge deterrent, but a new solution could prove to be copper’s secret weapon. The new solution automates the interconnection, thereby removing the drawbacks of high costs

and risk of human error and allowing operators to take back control of network management. A fully managed system like this will enable operators to easily provide new and different services to customers on an almost on-demand basis – the first step in achieving one of the operator’s main goals of attracting and retaining happy and loyal customers and something which is crucial in this fast-paced digital age.

Final chapter?

Far from being a pipe dream, work is already underway to enable the vision of this fully managed system at an affordable cost for operators. So, does this signal the beginning of the end for fibre? While fibre certainly heralds many advantages over copper and, in years to come, is likely to be the main source of connectivity, for now the huge expense associated with deployment does present a problem. An automated interconnection system, therefore, could see operators decide to delay their investment in fibre for at least a decade, as long as the copper pipes are serviceable and functioning without performance compromise. In rural areas, where fibre is less likely to be installed due to the low return on investment, copper becomes an even more attractive option. On the other hand, more and more is being invested in fibre, particularly in urban areas where there is more demand for higher speeds and the promise of a higher return on investment. In areas such as these, fibre will most likely always be the technology of choice due to its unrivalled capacity to handle the huge amounts of data it is required to cope with. When predicting the technology of the future, then, fibre is undoubtedly a solution that will feature high on the list. The question is whether a fully managed copper system will give copper one last hurrah and stunt the progress in delivering a 21st century Europe-wide fibre network.



Why data is king in the modern data centre

Data Centre Information Protection By Sheldon D’Paiva, Product and Solutions Marketing, Nimble Storage

Storage silos

Sheldon D’Paiva explains how the data centre ecosphere is evolving...

Storage silos have long ruled the traditional data centre. Businesses have been purchasing separate primary and secondary storage systems – and for good reason. The end users of business-critical applications such as email and databases demand the performance that primary storage systems deliver. When backing up those applications, however, the backup data does not need to be accessed in real time, and the decision point often shifts to cost, where secondary storage systems excel. But there is an unrelenting tide eroding those dividing walls between silos. Nimble Storage conducted a survey on data protection with 1,600 participants, which found that the majority of enterprises believe they cannot afford to lose more than six hours’ worth of data (RPO), and that they must be able to recover protected data in less than six hours (RTO). Using separate primary and secondary storage systems mandates that data be read from the primary system, moved across the network, and then written to the secondary system – making six-hour RPOs and RTOs all but impossible. Meeting strict data protection requirements at scale requires a different approach – one that doesn’t rely on the traditional read-move-write methodology that impacts production infrastructure.

High availability

To deliver high data availability, a storage system must be built on a fault tolerant architecture. A fault tolerant architecture means that the system should be designed to tolerate failures at multiple levels, and is essential in order to deliver “five nines” availability – or system availability of 99.999 per cent. Failures at the component level must be detected and corrected. For example, RAID technologies use redundancy to recover from drive failures at a component level. A system should also be designed to be fault tolerant at a sub-system level to eliminate a single point of failure. Enterprise-class storage systems are typically designed with redundant controllers that can take over in the event of a failure. When it comes to data protection, storage snapshots provide recovery point objectives (RPOs) and recovery time objectives (RTOs) unmatched by traditional methods that have a read-move-write impact on production infrastructure. Storage snapshots are essentially a point-in-time version of the data on disk. For effective use in aggressive data protection scenarios, they need to be very efficient – with a well thought out implementation and metadata (pointers to the data) layer, so the data itself is not moved or copied every time a snapshot is taken, and only the changed blocks are stored. For very aggressive data protection needs, the storage system must be able to take snapshots every 15 minutes (15 min RPOs) and also be able to retain those snapshots cost-effectively.
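By way of illustration, the downtime budget implied by an availability figure falls straight out of the percentage. The sketch below is a back-of-envelope calculation, not drawn from the article:

```python
# Annual downtime allowed at a given availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9),
                   ("four nines", 99.99),
                   ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {allowed_downtime_minutes(pct):.1f} min/year")
```

At "five nines" the budget works out to roughly five and a quarter minutes of downtime a year, which is why component-level and sub-system-level fault tolerance both have to hold.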

Leveraging SSDs

Tomorrow’s data centres: where’s the data?...


Leveraging SSDs for snapshot metadata and high-density disk for snapshot data enables a storage system to take frequent snapshots without impacting the performance of critical workloads, and to store the snapshot data cost-effectively to address the majority of data recovery cases. Finally, storage snapshots can also be used to recover data even in the event of a complete array failure or site outage, by replicating the snapshots to

another storage system – often at a remote site for disaster recovery purposes. A fault tolerant architecture is the baseline for high data availability, and an efficient storage snapshot implementation can deliver on the service level agreements required for aggressive data protection, but data analytics can contribute to proactively ensuring the overall wellness of the storage environment. In fact, data analytics can improve data availability beyond the “five nines” level. To do this, though, the analytics need to be well integrated with the storage system. The storage environment must also be monitored for metric values that number in the millions for a single system, from physical metrics such as fan speeds and power supplies to data services metrics such as volume sizes and performance levels. Although the storage system needs to be able to collect and track a vast number of metric values, processing them all on the storage system itself would put an additional load on the system that could impact performance. A cloud-deployed service can collect the analytics telemetry data from the storage system and then process it in the cloud to offload the heavy lifting. This also removes the need to deploy additional infrastructure onsite, while enabling multiple systems to be monitored from anywhere to ensure data availability and data protection compliance. A fault tolerant architecture, modern storage snapshots and powerful data analytics form the basis of a holistic approach to backup and data availability, resulting in storage systems that operate in peak condition at all times. These three key underpinnings can provide both high data availability and an effective data protection platform for aggressive requirements.

Three Phase Power
Designed to bring maximum power to your servers, the G4 three phase range is built to exacting standards to ensure maximum safety for your facility.

Available with: • C13 C19 Locking outlets • C13 C19 Fused outlets • BS1363 UK outlets • Continental outlets • Individual circuit protection per outlet • Overall metering of V, A, kWh, Harmonics, PF.

G4 MPS Limited Unit 15 & 16 Orchard Farm Business Park, Barcham Road, Soham, Cambs. CB7 5TU T. +44 (0)1353 723248 F. +44 (0)1353 723941 E.

Vertical Rack Mount

Maximise your rack space: specify mixed-connector PDUs built to your exact requirements to give you just the solution you are looking for.

Horizontal Rack Mount

Thermal overload protection or fused outlets mean that you only lose a single socket in the event of a fault, not the whole PDU, thereby removing the risk of a total rack failure.


The challenge of security and easy access

Securing 3rd Party Access By Stuart Facey, VP International, Bomgar


Stuart Facey discusses the challenge of third-party access security issues...

Donald Rumsfeld provided a famous quote regarding problems of which we are aware: “There are known unknowns. That is to say there are things that we now know we don’t know. But there are also unknown unknowns. There are things we do not know and cannot know. So … we do the best we can and we pull all this information together.” In IT security, there are constant challenges, such as hackers, insider threats and malware. However, these fall into the bucket of ‘known unknowns,’ in that you should be aware of the general risks and have taken steps to prevent them. Another ‘known unknown’ that often doesn’t get the attention that it should is third party access. For many reasons, companies rely on external suppliers to provide them with IT services and support, which requires these vendors to access a company’s systems, typically from a remote location. According to research by analyst firm Ovum, in Western Europe 88 per cent of companies allowed remote access to their networks by outside suppliers. While the majority of those surveyed had only a handful of companies with access, companies with tens or hundreds of outsourcing partners were also found.


These third parties require access to your IT network in order to provide that

support. However, many IT teams have no insight into what their third-party suppliers are doing once they are inside the network. So while they may know which third-party organisations have access, exactly who is accessing their systems, when, and what they are doing is commonly unknown. Being able to manage, track and audit access by third parties should be viewed as critical to maintaining security and compliance. As many have said before, you can’t ‘outsource’ security, so you can’t rely on your vendors to keep your network secure for you. The biggest example of this so far has been US retailer Target – the company had more than 40 million credit card records stolen, and the initial attack came through a third-party supplier. With this in mind, there are some simple steps that companies can take in order to improve third-party access security:

1. Carry out an audit
The key question to ask is how many third-party relationships you have, and whether any of these companies require access to your IT systems remotely. Following the principle of least privilege, give these companies access to only the systems on which they need to work, and consider limiting the days and times they have access, further deterring unsanctioned behaviour.

2. Consider your remote access tools
Once you have a list of vendors with valid remote access requirements, it’s important to look at how those vendors currently interact with your network. Are they using their own remote access tool of choice, or one that you own? How does it gain access to your network, and where are the audit logs stored? Where possible, require all third parties to use a consolidated remote access tool that you own or manage, so you are in complete control and have a comprehensive record of who accesses your systems and when. Many remote access tools rely on open ports on the firewall: this is the approach free remote access tools typically use to connect across the Internet. While they might be useful for small business or personal use, open firewall ports such as RDP port 3389 are favourites for hackers and can be dangerous to leave exposed. If you do decide to use tools like this, be aware of the risks and how to mitigate them.

3. Review the audit trail
Regularly review what your vendors are doing while they are in your systems, and set up alerts for any unexpected activity, such as a vendor logging in outside normal working hours.
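On the subject of open firewall ports, a minimal reachability check along the following lines can flag systems still exposing RDP’s port 3389. The host addresses are placeholders, and this is a sketch rather than a substitute for a proper audit tool:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, unreachable, or timed out
        return False

# Example: flag any host still exposing RDP (port 3389).
# The addresses below are documentation placeholders, not real systems.
for host in ["198.51.100.10", "198.51.100.11"]:
    if port_open(host, 3389):
        print(f"WARNING: {host} has RDP port 3389 open")
```

A real audit would run such a check from outside the perimeter and feed the results into the vendor review described above.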


Overall, third-party access can be a big challenge, especially when it comes to the role-based access and compliance requirements contained in regulations such as the PCI card payments framework for data security. As more outsourcing takes place, companies will have to consider their approach to managing security and remote access. To get the most from these relationships, security and vendor access have to be considered from the start.

Donald Rumsfeld: admitting the unknown...


We take care of your data centre so you can take care of your business.

Increase uptime and lower total cost of ownership.

Exceed performance goals with standardised processes.

Operation Services simplify data centre maintenance, helping you reduce OpEx, optimise energy use and minimise downtime. Who runs your data centre? In an environment where human error can be a significant cause of downtime, effective operations and maintenance need to be implemented by well-trained specialists who have superior technical expertise. That’s what we offer with Schneider Electric™ Operation Services. Whether you require Vendor Management, Managed Maintenance, or complete Facility Operations, our services ensure your data centre gets the customised care that allows you to focus on your core business.

Optimise operations and save money. Our approach uses standardised best practices and automation tools that have been developed over 15 years of managing mission-critical facilities worldwide. This proven methodology keeps your data centre consistently operating at an optimum level. With streamlined maintenance activities and emergency support, you can significantly increase performance, reliability, efficiency and safety throughout your data centre’s life cycle. Operation Services make business sense. You save money by reducing operating expenses, avoiding costly downtime and minimising unplanned costs related to service interruptions and equipment repairs. Take the complexity out of day-to-day operations and contact Schneider Electric today.

Discover our full range of Data Centre Life Cycle Services: > Assess

Understand your performance and address technical and business challenges.

> Plan

Determine optimal performance, timing, regulatory compliance or sustainability.

> Design

Access an extensive design library to meet performance and safety needs.

> Build

Finish on time and on budget with commissioning, start-up and on-site project management.

> Operate

Simplify operations and minimise downtime with basic to advanced service offerings.

Business-wise, Future-driven™

Learn the top 10 mistakes to avoid in data centre operations! Download our FREE white paper and enter to WIN a Samsung Galaxy Note® III Visit: Keycode: 50050P ©2014 Schneider Electric. All Rights Reserved. Schneider Electric and Business-wise, Future-driven are trademarks owned by Schneider Electric Industries SAS or its affiliated companies. All other trademarks are the property of their respective owners. iPad is a registered trademark of Apple Inc. • 998-1229694-GB


Understanding data centre fire suppression technology

Why Aisle Containment Must Be Fire Aware By Jeremy Hartley, Managing Director, Databarracks

The changing data centre

Jeremy Hartley explains how to make data centres fire adaptable...

Over the last decade, data centres have undergone a revolution driven by an ever-increasing demand for more compute power and storage. To meet this demand, IT has brought in highly dense computing such as blade servers and increased the use of virtualisation. Increased utilisation of hardware generates greater power loads, which in turn create heat and cooling issues. For many centres, this change in power, cooling and utilisation has come at a significant cost. Open plan halls full of racks have made it difficult to manage the hot spots created by blade servers, dense switch environments and large data storage systems. To create environments where power and cooling can be more effective has meant either extensive refurbishment to create smaller halls or the introduction of hot and cold aisle containment systems.


The introduction of aisle containment has given data centres - new and old

- the ability to prevent hot and cold air mixing inside the aisle. Preventing mixed airflow means that cooling can be more effective and delivered at a lower cost. Aisle containment can be either designed from scratch or retrofitted to existing environments, and there are three types: hot, cold or complete. Hot aisle containment encloses the hot aisle, drawing exhaust air away from equipment and sending it directly to the cooling system via a ceiling plenum. While this increases the temperature of the hot aisle, it prevents other equipment ingesting already-warm air. Cold aisle containment enables higher utilisation of space, as air can be directed to the equipment that needs it most. By preventing hot air being pulled into the aisle, it ensures that input temperatures can be held constant. Service Level Agreements (SLAs) over input air temperature are becoming more common, and cold aisle containment is a key technology for meeting that demand. When cold aisle containment and in-row cooling are combined, rack densities can be


substantially increased, providing there is enough power to support the additional equipment. When the cooling efficiency of cold and hot aisle containment is compared to rooms with no containment, contained environments use between 40 and 50 per cent less power to dissipate heat. For data centre owners, this delivers both a substantial reduction in the cost of running IT equipment and the ability to increase density without increasing cooling capacity.


The recent storms in the UK and US have shown that high winds causing physical damage, and the increased risk of flooding, are now a yearly risk rather than a once-in-a-lifetime one. This means that data centre facility teams need to overhaul the entire risk profile for the facility. Part of that risk assessment process must include the fire suppression systems that are already reviewed annually. For those with large halls and no




containment, this is perfectly fine. However, the rise of containment systems has generated a lot of press, often misguided, about the dangers that increased aisle containment can bring to detecting and extinguishing fires. One of the big charges made against aisle containment systems is that they inhibit the proper detection of fire. With badly installed systems this could be the case, but the issue here is not the aisle containment itself but the implementation of the fire detection and suppression systems.

Placement of fire suppression systems

While sensors are relatively easy to move, fire suppression systems are not. Many facilities have perfectly adequate systems in the roof of each hall but worry about how to deal with fire suppression in a contained aisle. One solution that is often talked about is extending the suppression system into the aisle. It might sound easy but in reality it is not. Without knowing

Open sesame: panels open to allow water mist/gases in...

exactly what is in each rack, the amount of suppression being delivered through the system could be insufficient for the fire. At the same time, the dispersal pattern of the suppressant may well be impaired by the equipment in the aisle and where it is located. Having decided to leave the suppression systems where they are, the next concern is how to deal with enclosed aisles. One solution is a mechanism, triggered by the fire alarm or by thermal links, whereby the panels drop away. However, should anyone be in an aisle when this happens, there is a risk of injury.


Fire is something that no data centre owner wants to consider but must plan for. While aisle containment systems make it possible to lower power costs, especially on cooling, there is concern that they may inhibit fire detection and suppression systems. Provided the installation team understands the need to place sensors correctly and an active roof design is chosen, aisle containment creates no more risk than any other form of data centre enhancement.



IP costs and bandwidth needs slashed at Produce World

Cutting The Costs Of WAN Traffic

How Produce World cut its bandwidth needs in half...

Off-site data

One of the largest expert growers and suppliers of high-quality fresh vegetables in Europe, Produce World has found the key to improving the performance of off-site data replication. Since deploying Silver Peak’s wide area network (WAN) optimisation software, Produce World has increased data throughput, reduced traffic by 80 per cent and cut its bandwidth allowance by 50 per cent. As a growing business with six locations across the UK, Produce World found that data mobility between its primary data centre in Peterborough and its disaster recovery (DR) site 90 miles away was taking three times longer than expected because of its 40 Mbps network bandwidth allowance. This was making it increasingly difficult to meet its recovery point objectives (RPOs) and placed a huge strain on the IT department. As such, the company needed to consider optimising its existing IT infrastructure, with the ultimate aim of reducing its RPO from 24 hours to 15 minutes for Microsoft Dynamics NAV, and to two hours for other critical systems.
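The arithmetic behind such replication windows is simple: the change set accumulated between replication points has to fit through the link comfortably inside the window. The sketch below is a back-of-envelope illustration, and the data volumes in it are assumptions rather than Produce World’s actual figures:

```python
def transfer_time_seconds(data_gb, link_mbps):
    """Seconds to move data_gb (decimal gigabytes) over a link of link_mbps."""
    bits = data_gb * 8 * 1000**3        # gigabytes to bits
    return bits / (link_mbps * 1000**2)  # megabits/s to bits/s

# e.g. an illustrative 5 GB replication delta over a 40 Mbps link:
raw = transfer_time_seconds(5, 40)
# with an 80 per cent traffic reduction, only 1 GB crosses the wire:
optimised = transfer_time_seconds(5 * 0.2, 40)
print(f"raw: {raw / 60:.1f} min, optimised: {optimised / 60:.1f} min")
```

The same 80 per cent reduction is what lets a halved link (40 Mbps down to 20 Mbps) still carry the full replication load with headroom to spare.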

Data demands

In order to meet the increased demands of replicating data, Produce World first considered adding more bandwidth before finally turning to leading IT services provider, Kelway, to provide a solution that would make better use of its existing IT infrastructure. For three months, Produce World carried out a proof of concept evaluating WAN optimisation providers, including Riverbed, and selected Silver Peak as the clear winner for its high performance, simple deployment and lower cost. In addition, Silver Peak stood out for its ability to easily integrate with the company's existing NetApp SnapMirror replication environment. The company selected the Silver Peak VX software, which is purpose-built for storage replication and scales to over 1 Gbps connections. "As our business expanded, the need to get more data between sites

was becoming increasingly difficult," said Richard Billington, infrastructure manager at Produce World. "It was taking a whole weekend for the replication to catch up which, as you can imagine, was causing a real headache for the IT department. We tested other WAN solutions, but Silver Peak performed far beyond our expectations and requirements, and is enabling us to easily meet our RPO." Following the Silver Peak deployment, Produce World was forced to move its DR servers to one of its own sites due to its then co-location provider closing down its data centre. Going from a 40 Mbps bandwidth link to a 6 Mbps connection could have resulted in disaster; however, with Silver Peak software already in place, the company was able to continue replicating data on the temporary site efficiently for a period of around six months. Since moving to its new DR site, the Silver Peak implementation has meant the company only requires 20 Mbps of network bandwidth allowance, cutting

its previous bandwidth requirement in half and significantly reducing costs. "Without Silver Peak, it would have been increasingly difficult to replicate all of our data between the data centre and DR site over a 6 Mbps WAN link," continued Billington. "However, we managed to keep the data centre up and running for a period of six months without any performance issues. Today, our 20 Mbps WAN link looks like it's doing nothing at all – it's just one of those things we don't have to think about anymore." http://marketplace.silver-peak.com

Produce World – a short history In 1898 Harry & Percy Burgess began growing vegetables in the Peterborough Fens and running a fruit & vegetable shop in Richmond. Ever since, the business has expanded using strong ethical values, and today Produce World remains a privately owned business with active involvement from the fourth generation of the Burgess family. In 2011, Produce World brought all existing companies into the 'Produce World Group', creating one company, one team and one brand. The company says it works hard to develop a robust and structured approach to Corporate Social Responsibility (CSR). Building on a strong foundation of policy, it aims to drive targeted improvements in its social and environmental performance through its four CSR areas and to report the results. Produce World adds that it has stated its key commitments and now drives performance through a series of non-financial targets, ensuring continuous improvement through regular monitoring, measurement and reporting. Produce World has also established targets at both a group and site level covering issues that include: carbon emissions; energy usage; waste and recycling; water usage; health and safety; and workplace performance.

The new field-assembly, fully shielded and multiport-capable RJ45 connector in 180° or 360° version (variable cable entry) 

C6A RJ45 field plug pro

• fully shielded and multiport-capable
• transmission properties Cat 6A to ISO/IEC 11801 Ed.2.2:2011-06
• suitable for 10GBit to IEEE 802.3an and for Power over Ethernet (PoE, PoE Plus and UPoE)
• type of protection IP20
• suitable for cable sheath diameters of 5.5 to 10.5 mm
• zinc die-cast housing for industrial use, 2-part for 180° version or 4-part for 360° version
• strain relief snapped on directly to stuffer cap and protected catch
• reconnectable when using the same or a larger cross-section
• multiple cable outlet 4 x 8 positions possible (only with C6A RJ45 field plug pro 360°)

Cabcon, an Acal Group Company

3 The Business Centre, Molly Millars Lane, Wokingham, Berkshire RG41 2EY

Unit 14G Maynooth Business Campus Maynooth Co. Kildare Ireland

Tel: +44 (0)1189 122 980 Fax: +44 (0) 1189 776 095

Tel +353 1629 2640 Fax +353 1629 2637




Why data centre load testing is now a 'must have' option

The Perils Of Ignoring Load Testing By Dave Wolfenden, Director, Mafi Mushkila


Dave Wolfenden explains the benefits of load testing in modern data centres...

Testing a data centre prior to formal handover to the client is a normal part of data centre construction, but few IT professionals look closely at how that testing is carried out, even though it can affect the centre for years to come. The usual reason for this is that, until a full set of servers and allied IT systems is in place, it is perceived as impossible to test a data centre under full loading conditions. This is actually an incorrect assumption, and one that seems to be perpetuated by a number of misunderstandings regarding the complex technologies involved in a typical data centre. In many cases airflow within the data centre is modelled using Computational Fluid Dynamics (CFD) software during the design phase. In addition to the testing set out by the commissioning team, the CFD model should be proven before the IT infrastructure is installed.

The reality is that the testing of a good data centre needs to be carefully planned and executed to ensure continuous operation for the design life of the facility, which should be tested at a variety of load levels, working up to 100 per cent load. The majority of the energy consumed by IT infrastructure is rejected as heat, meaning that the simplest way to replicate the IT infrastructure is to use fan heaters. In the past these have varied from 2 or 3 kW domestic fan heaters to large floor-standing space heaters, and in most cases the safety thermal cut-out had to be removed to cope with the elevated temperatures within modern data centres. The heaters are often connected to temporary power supplies. These types of load do not reflect the airflow and temperature range of real IT infrastructure, and do not test the power supply end-to-end.
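Because the heat load simply tracks the electrical load, sizing temporary heaters is straightforward arithmetic. The short Python sketch below illustrates the idea; the 10 kW rack figure is a hypothetical example, not a value from the article.

```python
import math

# Sketch: how many fan heaters are needed to emulate a rack's design
# load during commissioning. The 3 kW heater matches the domestic
# units mentioned in the text; rack figures are illustrative.

def heaters_per_rack(design_kw_per_rack: float, heater_kw: float = 3.0) -> int:
    """Whole number of fan heaters needed to emulate a rack's design load."""
    return math.ceil(design_kw_per_rack / heater_kw)

# A hypothetical 10 kW rack emulated with 3 kW heaters:
print(heaters_per_rack(10.0))   # needs 4 heaters
```

In practice, as the article notes, purpose-built server emulators are preferred because they also reproduce the airflow pattern of real IT kit, which plain heaters do not.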

100 per cent

The CFD model at 100 per cent is likely to assume that the data centre is fully occupied with floor standing and

rack mounted IT infrastructure. The reality is that during testing only some of the racks may be installed. To ensure the testing process is valid, temporary measures need to be in place to ensure that the layout and load distribution reflect the CFD model. These measures could include the installation of temporary IT racks, blanking, the construction of temporary walls/aisle containment, and the implementation of heaters and server emulators that reflect the load distribution across the data centre. If the customer's IT racks have been installed, the heat load should be connected using the power strips within the racks. This may be the only time that the power strips are fully loaded (and therefore completely tested). Whilst the latter two issues can be met using sensible planning, effective heat control is something of a science in its own right, as dissipating heat - from whatever source - within the data centre is a critical process. If carried out poorly, or using unreliable technology, a runaway heat problem can quickly turn into an IT disaster, shortening both system and server lifespan at best - and causing equipment failures at worst. Given companies' increasing reliance on data centres to service the IT needs of their business, an equipment failure can cause problems ranging from a temporary outage of telephony and computer services for staff and allied personnel, all the way to a failure of an organisation's e-commerce web site - causing customer confusion, loss of brand loyalty and an ongoing loss of revenue.

ROI/cost issues

In an ideal world, a business could throw enough money at a data centre project to ensure 100 per cent uptime and happy customers, as well as staff. In the real world however - even in a mission-critical application - there are clear ROI (Return on Investment) issues that must be addressed when planning, testing and maintaining an effective facility. For most of our clients, this translates

to the effective testing of a data centre at all possible stages in its planning and development, from the computer modelling aspect of the installation right through to test heat and power loading prior to the installation of the relevant IT systems and servers. So why do we need server emulators to complete the heat load testing process? The reason is that a new IT equipment room, data centre - or modular data centre - is designed and expected to run continuously for the duration of its design lifetime, which can amount to many years, even in today's rapidly evolving IT arena. To achieve this level of reliability it is necessary to thoroughly test the infrastructure before it goes into operation, both physically - using test equipment - and using appropriate CFD software to model the airflow within a facility and provide a graphic analysis of how the hot and cool air flows. Using actual servers to complete the tests is not possible for a variety of reasons, including the cost of filling the data centre with servers, the potential for damage to IT equipment and the time it would take to reset servers after each test. To meet the need for fixed, predictable loading during testing, a server emulator provides a variable electrical load and produces a matching heat load. These loads allow the testing of the electrical and cooling systems in a controlled environment. On the electrical test front, the use of heat load banks and allied systems can make life simpler for data centre developers and facilities managers, as well as on the power governance front, as they help prove the efficacy of static transfer switches under partial and full load conditions. As part of this element of the testing process, good testing equipment allows the thermal inspection of all joints and connections under full load before the building becomes operational, so reducing the fire risk.
One useful side effect of this process is that the electrical assessment provides confirmation that power monitoring and billing equipment is operating correctly, as well as minimising risks and issues that may not otherwise be found for several years. Allied to the electrical check process is the testing of ancillary systems such as electro-mechanical and mechanical units, pumps, cooling and chiller systems, as well as Room Air Conditioning Units (RACU) where appropriate.

These test processes are also useful for load testing of intermediate heat exchangers - which are usually installed to reduce water leakage loss in the suite - with capacities ranging from 100,000 litres all the way down to 250 litres. Other processes can also include the proving of fail-safe systems on high-density racks, such as confirming doors will open in the event of an in-rack cooling component or system failure. On the water chilling side, the testing process normally requires load testing to prove that the chilled water ring has a sufficient volume of cold water to allow the chillers to restart when a generator kicks in, so negating the requirement to UPS-equip the chillers for resilience.

Commercial Risk

All of these methods are, we believe, a fundamental aspect of data centre testing, as the comprehensive checking of electrical and chilling/cooling systems is infinitely preferable - on several fronts - to destroying a bank of servers. As an example, a rack of heaters can cost just a few thousand pounds, against a rack of servers that can run into six figures. By including effective testing as an integral part of the commercial risk evaluation and mitigation process, our observations suggest that this supports a timely sign-off for data centre and allied buildings, and their acceptance into service. Arguably more importantly, by documenting a safe and reliable testing phase of a data centre deployment, this can act as proof to insurers that the systems are fit for purpose under full load, as well as providing high levels of assurance that the components and systems are set up and configured correctly.

Nice racks - but what about the loads?...


It’s all in the power consumption figures...

The Need For Power Flexibility By Andrew Roughan, Commercial Director, Infinity

Cutting costs

Andrew Roughan discusses how to save power in the modern data centre

Businesses are constantly looking at ways to cut costs, increase profit margins and gain competitive advantage, with many finding that IT has a major role to play in achieving these goals. Furthermore, as IT hardware costs have fallen, power is now an important variable in the cost stack. When looking specifically at managing power costs, flexibility is the key for businesses. Most organisations are familiar with pay as you go public cloud services, but where is the equivalent to this approach for retained IT assets? Fortunately help is at hand with a new way to consume wholesale and retail colocation with the advent of a ‘burst’ power option. This enables organisations to commit to a power level that varies over time to reflect what has actually been used. This type of service provides a competitive price point for power usage over the committed consumption, enabling companies to ‘burst’ in order to cater for peaks in demand without being charged the full unit charge until this becomes a profile of their normal operation. 

Faceplate values

IT equipment manufacturers power rate their equipment to a figure that is based upon the maximum consumption possible, factoring in the highest current draw potential of every component. This is referred to as the ‘faceplate’ power of the equipment.

This leads to many data centres being built with an over provision of power, as in practice the IT equipment used consumes almost half the faceplate value, even at full utilisation. This issue is compounded by the fact that most colocation providers charge a fixed infrastructure charge based upon the power capacity available to the rack. Therefore, even though actual power draw is accurately measured and priced as a utility charge, the colocation customer is being charged an infrastructure fee based upon power provision that is rarely, if ever, used.

Needing to burst

Businesses benefit from a burst provision when they have peaks in demand, such as seasonal campaigns, where additional needs increase the amount of power required from a data centre. Businesses are constantly under boardroom scrutiny with regard to how money is spent, and this applies to the data centre budget. Whereas colocation brings large capex savings and economies of scale in shared infrastructure, any organisation that experiences spikes in demand is presented with a challenge when it comes to procuring colocation services to house its IT equipment. Such an organisation may require a low level of power for the majority of the year, yet need to increase server utilisation dramatically at peak times. Currently there is little option but to procure data centre services to cater for the maximum demand when using retained IT assets. This leads many companies to procure data centres with an over provision of power. The ability to commit to a lower level of power provision would reduce costs dramatically, as long as there is certainty that extra power is available as and when required.
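A simple model makes the commercial case clear. The Python sketch below compares a fixed, peak-provisioned infrastructure fee with a hypothetical burst tariff; all tariff rates and the load profile are invented for illustration and are not figures from the article.

```python
# Sketch of the fixed-provision vs 'burst' charging models described
# in the article. Tariff figures and the load profile are invented.

def fixed_model_cost(provisioned_kw, rate_per_kw):
    """Monthly infrastructure fee based on maximum provisioned capacity."""
    return provisioned_kw * rate_per_kw

def burst_model_cost(monthly_kw_profile, committed_kw, rate_per_kw, burst_rate_per_kw):
    """Pay for the committed level each month, plus a burst rate on any excess."""
    cost = 0.0
    for used in monthly_kw_profile:
        cost += committed_kw * rate_per_kw
        if used > committed_kw:
            cost += (used - committed_kw) * burst_rate_per_kw
    return cost

# Illustrative year: 20 kW most months, a 40 kW seasonal peak in two.
profile = [20] * 10 + [40] * 2
fixed = fixed_model_cost(40, 100) * 12           # provision for the peak all year
burst = burst_model_cost(profile, 20, 100, 120)  # commit to 20 kW, burst twice
print(fixed, burst)
```

Even with a higher unit rate on burst power, paying for peak capacity only in the months it is actually drawn works out substantially cheaper on this assumed profile, which is the essence of the argument above.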

Maximising power savings

The data centre: all lit up, but at what cost?...


The virtualisation of IT provides another way to make big power savings. Utilisation rates of servers can be dramatically improved via the workload management of virtualisation software, which aggregates computing capacity across a collection of servers and intelligently allocates the available resources among the virtual machines. Virtualisation keeps servers busy, but at times when there is spare capacity, servers that are idle can still be drawing between 20 and 60 per cent of their maximum power draw whilst contributing nothing to the IT estate. VMware Distributed Power Management (DPM) solves this issue by providing intelligent power management of the collection of servers in a cluster, bringing the idle power of servers down to a theoretical zero draw. Therefore, remaining cost competitive when dealing with peak demands is not only possible, but should be at the forefront of the IT strategy. There is no need for over provision in order to meet peak demands - just pay for the power used, when it's used.

Demanding peaks

Paying for the power used when it's needed seems a logical and viable way for businesses to keep their standard data centre costs down and just pay for the additional power required during peak times. A service that allows for an almost instant, dramatic increase of power enables organisations to have a minimum committed power consumption that varies over time to reflect actual utilisation. It seems a simple concept, but traditionally the data centre provider needed to allocate the maximum power as part of the contract to ensure that the correct consumption level was available. With technology innovation and the change in the way businesses can manage their IT infrastructure, this concept is now a reality for those needing to burst through their normal power usage.


...AND SAVE MONEY WITH OUR WISENETIII IP SOLUTION At Samsung we understand that the decision for when and how you migrate to an IP security solution is a complex one, influenced by many factors. Our new range of WiseNetIII network cameras have both an analogue and IP output, as well as onboard SD card recording. This gives you complete control and flexibility to make the right decision to suit your business. Integrate WiseNetIII onto an existing analogue system, whilst recording Full HD onto the SD card, or take advantage of the dual output and record locally to your analogue recorder whilst simultaneously viewing remotely utilising the IP output. You don’t have to throw away the investment you made in your existing equipment – helping to improve Total Cost of Ownership!

Contact us for further information




Solving remote issues with technology

The Next Generation Branch Office By Dave Greenfield, Product Marketing Manager, Silver Peak


Dave Greenfield explains the benefits of next-gen branch office architecture...

Branch offices have long posed a challenge for distributed organisations, since it is not efficient or cost effective to maintain IT staff and resources at each location. Yet, despite this, many offices require local compute, storage and networking resources to meet all organisational application requirements. To deliver the ‘lean’ branch office, organisations have often either resorted to public cloud services that might fail to meet IT requirements for control and security, or attempted to integrate branch functions into expensive, proprietary shared infrastructure platforms that package the physical infrastructure – server and storage – with the requisite IT software. These shared infrastructure platforms may appear to provide IT with competent solutions, but in reality offer limited feature innovation, restrict software choice and lock the buyer into a future of costly hardware upgrades.

Branch issues

Although remote offices differ, certain challenges are common to most environments:

• Increased resource costs for branch compute, storage and networking infrastructure
• Protracted provisioning time, as IT must deliver, configure and install equipment before the branch becomes operational
• Branch survivability, as wide area network (WAN) infrastructure failure can cause branch office failure
• High management and maintenance costs, given the lack of IT expertise on-site
• Data protection and security are often limited, as effective safeguards may be lacking
• Data restoration may take far too long, particularly for tape-based backup, limiting performance in recovering data for a given user
• Delivering physical appliances may be expensive and time-consuming, particularly for overseas or remote locations

Attempts to address these issues have led some businesses to consolidate infrastructure functions into their

appliances, such as routers or WAN optimisation hardware. These approaches are attractive in so far as they simplify network deployments, but often come at a price. For example, when proprietary hardware requires custom-developed silicon, there are research and development costs to recover; compared with off-the-shelf servers, businesses face a premium due to numerous factors including procurement, development and quality assurance testing. Furthermore, high availability (HA) features may not be provided, forcing organisations to double the number of appliances on site for maximum uptime. The bottom line is that proprietary appliances often cost double comparable off-the-shelf servers, and even triple when HA is required. The capital costs of proprietary appliances are particularly significant given that they're often unnecessary. Most organisations have a surplus of compute cycles, which is a major catalyst behind the adoption of virtualisation. IT functions can often run on existing servers, sharing the underlying hardware with other virtual appliances, and still deliver performance comparable to standalone appliances. Operationally, proprietary hardware exacts a heavy penalty. Sparing becomes more difficult and costly, as components must be acquired, typically costing 30 per cent or more than comparable equipment on the market. Choice can also be limited, as customers are restricted to the set of applications and services that can run on these platforms, whether due to the constraints of the hardware or the business strategy of the manufacturer. Finally, changes to the product line frequently force unnecessary, costly hardware upgrades. Even the perceived benefits of simplified delivery and deployment may not be fully realised, as organisations could still need to deploy switches, routers and other functions in the branch, which are not included in the shared infrastructure platform.

The next generation

Whilst the theoretical benefits of an integrated appliance remain sound, the execution has often been flawed – until now. Next-generation branch architecture avoids these problems by separating the underlying hardware from the software and leveraging the advances in virtualisation and branch server designs. Organisations gain a deployment architecture that is cost-effective and powerful, allowing the continued use of their existing tools and software. Central to this strategy are the shared infrastructure platforms that combine all of the core branch office services – compute, storage and networking – into a single, integrated unit. These branch office platforms use the manufacturing and production expertise of server providers to lower costs, giving IT exceptional manageability and value without sacrificing agility. By applying real time intelligence, organisations can monitor paths for increases in packet loss or latency, and switch traffic to an alternative line before a failure occurs. With security a topical issue at present, WAN optimisation solutions are also starting to include accelerated IPSec, which protects data through virtual private network tunnels between locations. To reduce management costs and shorten deployment cycles organisations can use the shared infrastructure platform to consolidate all branch office storage, networking and compute requirements in one device. Branch resources run within the shared infrastructure platform as virtual machines on a standard hypervisor. IT can therefore enforce best practices while still locating critical resources at the branch. The hypervisor management platform can be used for automating server maintenance tasks and monitoring resources, which minimise the need to troubleshoot remote servers and desktops in person.
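The real-time path-monitoring idea described above can be sketched in a few lines. The following Python fragment is an illustrative simplification, not Silver Peak's implementation; the loss/latency thresholds, path names and figures are all invented.

```python
# Sketch of real-time path selection: monitor each WAN path's loss and
# latency, and steer traffic away from a degrading path before it fails.
# Thresholds and path data are illustrative assumptions.

def pick_path(paths, max_loss=0.02, max_latency_ms=100.0):
    """Return the name of the best path whose loss and latency are within
    bounds; prefer lower latency among healthy paths."""
    healthy = [p for p in paths
               if p["loss"] <= max_loss and p["latency_ms"] <= max_latency_ms]
    if not healthy:                       # all degraded: fall back to least-bad
        return min(paths, key=lambda p: (p["loss"], p["latency_ms"]))["name"]
    return min(healthy, key=lambda p: p["latency_ms"])["name"]

paths = [
    {"name": "mpls",     "loss": 0.001, "latency_ms": 40.0},
    {"name": "internet", "loss": 0.050, "latency_ms": 25.0},  # lossy path
]
print(pick_path(paths))  # steers to "mpls" despite its higher latency
```

The point of the design is that a path with attractive latency but rising packet loss is abandoned proactively, so applications never see the eventual failure.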


Will Cat 8 change the face of data centres?

The Cat 8 Cabling Revolution By Ken Hodge, CTO, Brand-Rex


Ken Hodge explains how Cat 8 is set to become a rack-level standard...

Work is rapidly advancing on Category 8 copper cabling - Cat 8 - a technology that looks set to find its applications primarily in data centres. Like its predecessors, this new BASE-T standard will be later to market than its twin-ax and fibre based competitors, but when it arrives it will rapidly displace them because of its far lower cost. Cat 8 will become the mainstream technology for rack-level interconnects in the data centre. However, unlike earlier Gigabit and 10 Gigabit technologies, it will not have a 100 metre range, and so it will not support centralised switching with passive patch panels at row level, except in smaller server rooms.

The need for speed

Cat 6A supports the fastest BASE-T solution currently available (10GBASE-T) and is the de-facto choice

for data centres, as specified by the TIA/EIA for North America and by ISO/IEC internationally. In the data centre, individual devices require ever faster interconnects. For example, one physical server now runs perhaps ten virtual servers - and so the physical interconnect must handle roughly ten times the data. Then there is the seemingly unstoppable move towards streaming more and more video, plus the upcoming wide-scale adoption of ultra high definition '4K video' and 'Big Data' that is going to affect a lot of data centres in coming years. All of this means that, as data centre professionals, we would be very unwise not to forecast that much higher bandwidths will be needed. Already, in certain high performance computing data centres (or HPC sections of data centres), we see that 10Gb/s is not enough. Solutions such as bonded multiple 10Gb/s copper or

fibre channels and 40Gb/s or 100Gb/s fibre channels are being deployed. Also - as happened in the early days of Gigabit, then again with 10Gb/s, and now with the 40 and 100Gb/s technologies - the first solution set of cabling products to be standardised (and commercialised) included fibre optics for short, medium and long reach connections, as well as multiple twin-ax copper cabling for high-speed, short-range links to the top of the rack. These solutions, we have observed, are already handling the early-adopter need for very high speed interconnects at 40Gb/s and 100Gb/s. Unlike BASE-T, these short-range twin-ax solutions don't need all of the complex signal processing that is required for longer-length channels. As a result, they are far quicker to develop and bring to market. The downside is that the cables and connectors are extremely expensive. Whilst these high costs are not really an issue in early-adopter applications, they are totally unaffordable in the data centre mass-market. And that is where a BASE-T solution has historically come in, around two years later, at a fraction of the cost. I predict that a similar cycle will happen with 40Gb/s. Cat 8 is still in its early days of development and it will be a good year or more before we really know how it will look technically. But it is almost inevitable that, once standardised and productised, its cost per link will quickly drop to a fraction of the twin-ax and fibre based alternatives. It will be the solution of choice for mass connection of equipment; commercial imperatives will drive its adoption.

What is Cat 8?

10Gbps - and faster, with the right technology...


Currently there are a number of similar but different 'Cat 8' solutions being considered by the standards bodies for 40Gb/s over twisted pair copper. In the USA, the TIA/EIA is considering Cat 8 based on an extended performance Cat 6A cable. Meanwhile, internationally, ISO/IEC is looking at two options: currently tagged Cat 8.1, based on an extended performance Cat 6A cable, and Cat 8.2, based on an

extended Cat 7A cable. Interestingly, all of these are based on shielded cables and connectors because of alien crosstalk difficulties. As yet, there is no clear choice of connector - though there is a significant body of opinion in favour of the RJ45 footprint rather than the larger 'square' contender. This is partly in order to achieve high-density patch panel and switch configurations, and partly because RJ45 is what almost everyone in the industry is used to and comfortable with. If the RJ45 footprint is adopted it will meet the maxims of interoperability and backwards compatibility favoured in the market. This is clearly the most attractive route for the industry itself, as it will allow IT managers to specify Cat 8 knowing that they will not compromise existing installations nor limit the supported technologies on the cabling. It looks likely that an RJ45 profile jack will be used; however, the choice is not simple - there are different styles of RJ45 connector with different electrical performance levels. The essential difference between the connectors is that the contact pins are in a single flat row in the RJ45 type, while the pins in the ARJ45 type are positioned at the four corners. Whilst the ARJ45 has better electrical performance than the RJ45 (because of the separation of the pins), it is not backwards compatible with Cat 6A, Cat 6 and so on. The silicon designers involved in the IEEE project have not yet decided whether they will take advantage of the better-performing cabling and use less processing technology, or work with the standard RJ45 solution and add more processing power. The decisions on technology have yet to be taken; although it looks likely that RJ45 will be the preferred option, we cannot be sure today.
Whilst the choice of a new connector type to support the new application is unlikely to create a problem in a new data centre that is designed at the outset to support 40Gb/s, we could anticipate some inherent problems with this approach in established data centres. For example, if Cat 8 horizontal cabling was installed in a data centre

that operates lower speed applications (e.g. 1 and 10Gb/s Ethernet and fibre channel technologies) that are based on RJ45 connectivity, hybrid 'Cat 8 to Cat 6A' cords would be required to attach end equipment, and true Cat 8 cords would be needed when the equipment is installed to migrate to 40Gb/s speeds. In addition, if Cat 8 is moved out of the data centre to the horizontal in 'future proof' building installations, the lack of backwards compatibility will be a real issue. A new connector type might not be so acceptable if higher speeds (40Gb/s) do reach the enterprise LAN.

Topology considerations

In an ideal world, the LAN connectivity would place no constraints on the designer's choice of architecture or topology. But the world is seldom ideal. BASE-T standards have always (until now) been based on a 100 metre channel length. However, back in 2008 Brand-Rex launched a data centre Zone Cable product that had a maximum reach of 70 metres. Our research had shown that this would cover 85 per cent of existing data centre link requirements and that, with only a minor amount of re-planning, a data centre could be designed to use this Zone Cable for 100 per cent of links. The massive advantage that made it worth the designer's efforts was that our Zone Cable gave Cat 6A 10Gb/s performance but, instead of being the thickness of a small garden hose, it was as thin as a Cat 5e cable. This was - and is - a major positive benefit both inside racks and under the floor, where thick cables create air dams and cause expensive cooling inefficiencies. The technology also has a lower carbon footprint and saves weight compared to conventional cabling. In the early-adopter implementations of Gigabit, early 10Gb/s and now early 40Gb/s, a major topology change has been essential. This is because of the very short distances for twin-ax based copper links, which has always meant that an expensive top-of-rack switching topology is essential. Later, as the BASE-T solution for each of these speeds became available,

Networking complexities...

designers were able to have total flexibility to choose cheaper EoR (end of row), MoR (middle of row) or, in many cases, centralised switching with passive in-rack or in-row patching.


The situation with Cat 8 will be different - this is because it is not going to have 100m or even 70m link-length capabilities. No firm decisions have yet been made on its link distance, but 30m looks likely and, allowing for some crosstalk margin, the technology could possibly reach 50m. This issue is going to affect the way that connectivity solutions in the data centre will need to be designed if they need (or will ultimately need) 40Gb/s and the cost effectiveness of BASE-T. Gone will be the option for centralised switching, as EoR or MoR switches become essential to stay within the 30m or 50m reach of the network cabling. Interestingly, network planners are already discussing, and in some cases implementing, a move away from hierarchical switching in the data centre to a flatter, distributed or mesh topology, which ties in well with an in-row switching configuration. So perhaps this apparent constraint with 40Gb/s copper will not be a real constraint after all.
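As a rough sketch of the kind of reach audit a designer might run when planning for Cat 8, the following Python snippet checks what proportion of a set of planned link lengths would fall within a 30m or 50m reach. The 30m and 50m figures reflect the draft limits discussed above; the link lengths themselves are invented for illustration.

```python
# Hypothetical reach audit: given planned link lengths (metres), report how
# many fit within a conservative 30m Cat 8 reach and an optimistic 50m reach.

def reach_coverage(link_lengths_m, reach_m):
    """Return the fraction of links that fit within the given reach."""
    if not link_lengths_m:
        return 0.0
    within = sum(1 for length in link_lengths_m if length <= reach_m)
    return within / len(link_lengths_m)

if __name__ == "__main__":
    # Invented sample: in-row runs are short; centralised-switching runs are long.
    planned_links = [8, 12, 18, 25, 28, 35, 42, 48, 65, 90]
    for reach in (30, 50):
        pct = 100 * reach_coverage(planned_links, reach)
        print(f"{pct:.0f}% of links fit within {reach}m")
```

A designer running this sort of check against a real cable schedule would quickly see how much re-planning (or how much EoR/MoR switching) a Cat 8 deployment would force.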

NETCOMMS europe Volume IV Issue 5 2014 33


Understanding the demand-driven networking world

The Power Of Optical Field Testing By Jonathan Borrill, Director, Anritsu

Simpler life

Jonathan Borrill explains why multi-protocol support is now a minimum requirement

In the past, life was simpler. Optical fibre cabling was used in the core network and in undersea cables, and only performed the network’s grunt transport-layer work. As a result, technicians involved in optical fibre repair and maintenance only required a basic set of testing capabilities, doing little more than verifying whether a cable installation was sound or not, and identifying and locating breaks, dirty connections and other physical faults. That simpler world is disappearing, as end-user demand for high bandwidth in the home or office – and, increasingly, on the move via the mobile phone – has led to the extension of optical fibre to the edge of the network. It was as long ago as 2009 that the International Telecommunication Union (ITU) enhanced the Optical Transport Network (OTN) standard G.709 to support its use in metro networks. It has taken until now for the industry to see a marked acceleration in such deployments of OTN technology. But now the effect on optical fibre technicians will really begin to be felt.

Fault pinpointing

Increasingly, a technician will not be able to pinpoint a fault in an OTN system or verify the link without testing the actual signals the OTN is carrying. The technician will need to be able to rapidly complete measurements of all OTN interfaces including ODU0 and ODUflex, as well as Ethernet, Fibre Channel and SDH/SONET at rates up to 10 Gbps. Many OTN systems will also be supporting legacy PDH and DSn interfaces encapsulated within SDH/SONET traffic. In other words, the extension of OTN into metro networks means that the test instrument on which the fibre technician relies is called on to support multiple different signalling protocols. So why is multi-protocol support now a basic requirement for an optical field test instrument? The answer is complexity (see Figure 1). When a technician is called to a site to troubleshoot an OTN metro network, the exact nature and location of the problem will often not be apparent at a remote control centre. Arriving on site, the technician may perform a set of tests on the OTN layer, but only be able to pinpoint and diagnose the problem by examining the client signals carried over the section of OTN cable in question. But will the root cause of the problem become apparent through measurement of SDH signals? Ethernet signals? Fibre Channel? Or other protocols carried over the OTN? It is often impossible for the technician to know in advance. This means that the only way to avoid the risk of carrying the wrong instrument to the job is to carry an instrument that supports all the main client signal types – or to carry multiple instruments, each supporting a different set of protocols and the related switching equipment to multiplex the client signal into the OTN interface under test. Clearly, it is more cost-effective to carry a single instrument. It also makes for quicker and more effective problem resolution, since all measurements and reports can be initiated and viewed through a single user interface. The new, extended scope of the optical testing performed by field technicians underlies the development of the MT1000A Network Master Pro, a new optical test instrument from Anritsu. The MT1000A is a portable, compact and user-friendly all-in-one transport tester aimed at technicians who install and maintain mobile-access, fixed-access and metro as well as core transmission telecoms networks.

Anritsu tester: supporting multiple protocols...

Multi-protocol support

Because it offers multi-protocol support, the MT1000A lets network operators equip their technicians with a single instrument to cover all field-testing needs. In other words, it is designed to streamline the operator's investment in equipment. Multi-protocol support is not the only way in which it does so. The product can also be configured to support dual-port testing at all supported interfaces and rates. The two ports can be used independently, effectively providing the user with two instruments in one physical device, making the field tester's work more productive. Two-port testing can also be used for in-service bi-directional monitoring of live traffic links, providing a new ability to maintain and optimise existing networks and test networks while they are in operation. This enables operators to pinpoint problems faster and so reduce the duration of network downtime. In addition, the instrument's simple user interface also helps guide technicians through the measurement process and helps to minimise the requirement for training in the widening range of technologies carried on the OTN. Of course, operators and technicians will want to study a wide variety of features provided by the optical testers available from reputable suppliers, including the user interface, test software, and scripting and reporting capabilities. But as a bare minimum, today's testers should provide comprehensive protocol support to reflect the new role that the OTN is playing beyond its traditional base in the core network.

Network Testing Redefined

All-in-one field tester with full OTN mapping The NEW MT1000A Network Master Pro - All-in-one field tester redefines the direction of future test platforms by bringing all your network test requirements to a portable device. OTN, MPLS-TP, Ethernet and Fibre Channel are all supported in addition to full support for legacy networks including PDH/DSn and SDH/SONET. The MT1000A - handheld, powerful and easy-to-use network testing.

Europe +44 (0) 1582-433433 ©2014 Anritsu Company




Converged networks need a holistic approach to meet their potential

Taking A Holistic View On Networks By Joseph Raccuglia, Director, Alcatel-Lucent


Joseph Raccuglia explains the challenges of an open platform

With Enterprise networks shifting focus towards application delivery, traditional siloed network environments are failing to handle the sheer volume of data let alone the needs of a mobile workforce using multiple devices and the demands of real-time and multimedia applications. Converged networks and the automation of network management functions are offering organisations the promise of a high quality experience, reduced administration and better performance delivery, but not all solutions are made equal. With all the talk about the possibilities and potential of converged networks, it is important to remember that it is ultimately business necessity that will be driving network implementation in the enterprise. There are of course different approaches to delivering a converged network, and these are often simplified and separated into two perspectives: infrastructure and management. Making the decision to update and
converge your network hardware is a big decision as it is likely to underpin the IT strategy and technological potential of a business for years to come. For an organisation that wishes to deliver high quality applications to a mobilised work force, the network must understand devices as well as associated applications. Contextual understanding of conversations between devices and applications makes it possible for the network to optimise the user experience and network performance, while at the same time lowering capital and operating expenses. A lot has been talked about the value of Software Defined Networks (SDN) in the building of a converged and coordinated infrastructure and its ability to enable infrastructure components to automate the administration. But this is still in its infancy and most business environments will not see an end-to-end solution, rather a hybrid approach with traditional infrastructure and versions of SDN working alongside each other.

Rip and replace?

With converged networking, there is no need for a Rip and Replace approach. The transition can be gradual. We can already provide a lot of network automation to cope with application changes through virtualisation. The key question is how to bring together infrastructure pieces that may be at different levels of capability. When considering the needs of an organisation, it's important to properly audit the capability of the entire network architecture, to see whether it can handle the applications the business requires and whether it can deliver these applications across the network to end users' devices. There are four key questions that need to be addressed and answered when planning a move to a converged network to enable businesses to get real value from their infrastructure.

1. Overlay networking is only as good as the underlying infrastructure. Is yours up to the job?
2. Will you have the visibility required for optimised service delivery?
3. Will you be able to customise your converged network to fulfil your requirements?
4. Will your users get the access and experience they need to benefit from an improved network?

Complex networks demand a holistic view...


Firstly, overlay networking is only as good as the infrastructure beneath - so check it out. Some of the solutions available on the market today are not truly converged at the hardware level and simply opt for a software overlay to define storage, network and compute functions which are still based on industry standard components. Overlay networking on top of a traditional infrastructure is a way of tying together disparate components of a network that have different levels of capability. With this approach it is possible to deliver bits of a converged and coordinated service, which is a quick and easy way to introduce some of the benefits of converged infrastructure. But overlay networking, while meeting your provisioning requirements, will not necessarily solve the performance issues - the overlay will only ever be as good as the underlying infrastructure, and if the infrastructure is not good enough, performance will be a growing issue. Infrastructure that is built on a switching fabric that creates a 'mesh' network formed by connecting smaller 'pods' consisting of several directly connected servers provides the ideal underlying infrastructure to start to develop converged networking. Audit your architecture and validate that you can meet the latency requirements of your applications - if you cannot meet them now, you certainly will not be able to meet them in the future.

Visibility and control

Secondly, visibility and control are key to delivering quality service - will you have it? The difficulty with traditional networks lies in their limited awareness of the applications that are generating traffic and, conversely, the new virtual application control systems are unaware of the conditions prevailing within the network. Application Fluent Network (AFN) technology provides the solution. The goal of a converged network is to bring automation to the data centre by providing a coordinated approach to network control and enabling a centralised view on network conditions. With an Application Fluent Network (AFN), enterprises can enjoy a network that understands devices as well as associated applications. Contextual understanding of conversations between devices and applications makes it possible for the network to optimise the user experience and network performance, while lowering capital and operating expenses. So while SDN is bridging the gap between the network world and the virtualised compute world by defining a framework that uses standardised interfaces between applications and networks, AFN technology provides a rich policy infrastructure that enables everyone to work autonomously or in a coordinated fashion. AFN also adjusts and optimises an entire corporate IT infrastructure
ensuring it can seamlessly support real-time applications - and this will be essential to businesses of any size in order to deliver mission-critical applications securely to employees, no matter where they are.

Vendor agnostic

Thirdly, vendor agnosticism is the key to customisation and programmability - make sure you aren't locked in! One of the fundamentals of true SDN is that it should be vendor agnostic, allowing for control and management of the network through SDN controllers, rather than through separate vendor silos that require administrating individually using proprietary protocols. The promise of SDN is to create the ability for customisation and programmability that can deliver the functions, applications and services that are unique to each organisation. This is why it is vital to use hardware from vendors that use standards-based technology, so as to avoid the classic vendor lock-in and being restricted to proprietary technology from one vendor, or a limited handful of select third party partners. This is why open protocols such as OpenFlow - enabling SDN through the management of network hardware from multiple vendors - or OpenStack's software tools for building cloud computing platforms are so important, as they provide the freedom to build and customise networks as businesses see fit. With closed or proprietary protocols you will always have restrictions on what customisations you can make, and the levels of programmability you can achieve, hindering the flexibility that SDN is designed to offer networks and service delivery.


Finally, empowering the user is the goal of network services - but will you be able to? One of the most important things to remember is that a network is more than just a data centre: it goes from the core all the way to the end user, wherever they are. A business needs to
have the required capacity across the whole network. If modern business networks are about application and service delivery, then the user experience must feature highly in the minds of providers and solution architects. Converged networks need to focus on the entire user experience. A holistic view needs to take into account the access infrastructure, from the data centre to the WLAN, LAN and cabling to the WiFi infrastructure in office locations. The key question to be asked is whether the converged network will have the programmability capabilities for end-to-end delivery of applications - and with the right priorities for a corporate environment. Here a Unified Access (UA) solution is essential, supported by a converged Application Fluent Network to deliver secure, high quality and consistent applications. UA provides a single policy infrastructure that does everything from automating the onboarding of new employees with the correct levels of access and device control to ensuring QoS across the entire converged network. Unifying the management and user experience across wired and wireless environments brings visibility and control of the applications across the whole network, supporting the prioritisation of business critical functions and ensuring employees have the high quality tools they need to perform.

Meet your potential

It's clear that network infrastructures need to adapt if they are to keep up with the rapid developments we are seeing, with multiple devices and multimedia applications being brought into the workplace. Converged networks certainly provide the answer, but organisations must consider the four key questions above before opting for a network upgrade - not all solutions are made equal, and it pays to set off on the right foot, from the word go. http://enterprise.alcatel-lucent.com



Understanding the need for enhanced optical networking testing

Encircled Flux And Optical Loss Analysis By Adrian Young, Senior Technical Support Engineer, Fluke Networks


Adrian Young explains the importance of encircled flux technology...

Encircled flux is becoming increasingly important in optical fibre testing as loss budgets become tighter and data speeds increase. Loss budgets for optical fibre testing are becoming increasingly tight as more low loss components such as LC/MPO cassettes are introduced. As a result, consultants and cabling vendors are beginning to specify loss budgets based on component performance, not standards. Any allowable slack in testing practices has disappeared and to stay current, installers need to re-evaluate their test equipment and procedures for Tier 1 optical fibre testing. Using an appropriate reference light source, setting the reference and using reference grade connectors are all vital parts of the testing jigsaw. However, even when all of these are carried out correctly, two different testers can still achieve a variability of up to 40 per cent. This, unfortunately, is the nature of optical fibre testing. The fibre connection is considered random, hence the need to have a reference grade connector.
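The shift towards component-based loss budgets described above can be illustrated with a minimal Python sketch that sums expected insertion loss from per-component figures. All values here are example assumptions (0.15 dB per LC connection, matching the low-loss cassette figure discussed later in this article, 0.1 dB per fusion splice and 3.0 dB/km multimode attenuation at 850 nm), not vendor specifications.

```python
# Illustrative Tier 1 loss-budget calculation built from component performance
# figures rather than generic standards values. All default figures are
# example assumptions for the purposes of this sketch.

def loss_budget_db(length_m, n_connections, n_splices,
                   conn_loss=0.15, splice_loss=0.1, atten_db_per_km=3.0):
    """Sum the expected insertion loss (dB) for a multimode channel."""
    fibre_loss = (length_m / 1000.0) * atten_db_per_km
    return fibre_loss + n_connections * conn_loss + n_splices * splice_loss

if __name__ == "__main__":
    # Example: a 150m link with four LC connections and no splices.
    budget = loss_budget_db(150, n_connections=4, n_splices=0)
    print(f"Component-based budget: {budget:.2f} dB")
```

A budget built this way is typically much tighter than a standards-based one, which is exactly why the measurement uncertainties discussed in this article now matter so much.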

TIA standards

TIA standards initially defined the launch condition from a multimode optical source in terms of Coupled Power Ratio (CPR). However, this still allowed too much variability between sources, so in 2010 the TIA and IEC created encircled flux (EF) to define launch conditions on multimode optical fibre (see ANSI/TIA-526-14-B or IEC 61280-4-2). EF refers to the ratio between the transmitted power at a given radius of an optical fibre core and the total injected power. It specifies the modal power across the entire end face of the test reference cord, which has to be maintained to the end of the cord. At some point we can expect EF compliance to become mandatory in many optical fibre test specifications, so installers need to begin considering now how to incorporate it into their field test procedures. It is important to note that the initial definition of EF by the TIA and IEC assumed that installers were already implementing best practices for optical fibre field-testing. Anyone working in technical support will know that this assumption is often untrue. To achieve optimum results when carrying out a Tier 1 optical loss measurement, all four aspects of testing must be set up correctly: the LED source, the jumper reference, the connectors and the encircled flux requirement.

checking and rechecking for problems...

Setting the reference

Setting the reference incorrectly can lead to optimistic and negative loss results. The latter are the largest cause of failed system acceptance and warranty denial, and suggest an amplification of the optical system, which is impossible in a passive system. We have observed that the most common causes of error when setting the reference include setting a reference through a bulkhead adapter, which adds an uncertainty of up to 1.5 dB due to loss in the adapter (see Figure 1). It is worth noting that the standard 25 mm mandrel will not strip out the higher order modes at 850 nm and will perform as if there was no mandrel, whereas a 4mm mandrel would make 1300 nm measurements incorrect.

The right optical source

In theory users can test multimode optical fibre links with either a Vertical Cavity Surface Emitting Laser (VCSEL) or an LED. However, current ANSI/TIA and IEC standards specify that the source must have a spectral width between 30nm and 60nm. This is easily achieved with an LED source. However, a VCSEL source has a spectral width of just 0.65nm, and its launch into the optical fibre varies substantially between different VCSEL sources, increasing measurement uncertainty to a point where it is no longer acceptable. The VCSEL launch is also under-filled, resulting in an optimistic loss measurement reading. If the optical fibre system is tested with a VCSEL, the cabling vendor may not accept the application warranty due to the uncertainty of the measurement. It is the responsibility of the individual who is testing and providing the warranty for the system to ask what type of source is to be used.
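The spectral-width rule above reduces to a trivial check, sketched below in Python. The 40nm LED width is an assumed typical value for illustration; the 0.65nm VCSEL width is the figure quoted in the text.

```python
# Quick check of the spectral-width requirement described above: the source
# must have a spectral width between 30nm and 60nm to be acceptable for
# multimode loss testing.

def source_compliant(spectral_width_nm, lo=30.0, hi=60.0):
    """Return True if the source's spectral width falls in the allowed band."""
    return lo <= spectral_width_nm <= hi

if __name__ == "__main__":
    for name, width in [("LED", 40.0), ("VCSEL", 0.65)]:
        verdict = "compliant" if source_compliant(width) else "non-compliant"
        print(f"{name} ({width}nm): {verdict}")
```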

Figure 1: setting a reference through a bulkhead adapter is incorrect

To achieve reliable results the tester should use equipment with interchangeable adapters on the input ports. This allows for setting a 1 Jumper reference in accordance with TIA, IEC and cabling vendor requirements. It is also important to purchase the correct adapters and test reference cords.

Reference grade

Bad cords lead to poor and inconsistent test results. ISO/IEC 14763-3 Testing of Optical Fibre Cabling defines reference grade connectors as having a loss of 0.1 dB for multimode and 0.2 dB for single mode. When the low-loss cassette has a 0.15 dB LC connector, testing it with anything worse than a 0.15 dB
LC connector is going to result in a pessimistic result or potential failure. If using a 1 Jumper reference, the test reference cords can be verified. Once the 1 Jumper reference has been made, the cords are removed from the input ports. Another test reference cord is then inserted into the input ports, the main and remote units are joined together using a single mode-rated bulkhead adapter and the test is run. The loss result should become part of the system documentation. TIA-TSB-4979 describes two methods for meeting EF requirements. The first is to use an external launch conditioner, which replaces the mandrel. This can turn any LED source into an
EF-compliant solution, avoiding the need to buy new test equipment. However, external launch conditioners are expensive, bulky and need to be replaced when the connector at the end breaks. Many data centre managers operating to tight optical loss budgets tell installers to include the cost of the launch conditioners in their bid. The second option is to use an EF compliant source with a tuned reference cord that strips out the unwanted modes (see Figure 2). Although this is a proprietary solution and requires the purchase of new test equipment, the cords are less expensive and less bulky than launch conditioners.

Figure 2: encircled flux test reference cords

To address poor referencing, test equipment vendors are creating automated wizards to walk the technician through the reference procedure.



Why labelling cables and other kit is so essential

Network Labelling Is Now A Must-Do By Charlotte Hilton, Managing Director, Sharpmark Solutions


Charlotte Hilton explains the importance of network labelling...

When contractors are dealing with multi-million pound data centres and comms installations, labelling can be considered small fry. However, there are some solid reasons for paying the matter very close attention. From my experience of working with customers in the datacomms cabling industry over the last 23 years, I am aware of the great importance that labelling plays in a project. Despite the fact that labelling may constitute only a minor part of the overall cost, getting it right is crucial. Installers are under more pressure than ever before to complete projects in less time and at a lower cost. Due to the size and complexity of many projects, timescales are often shifted and squeezed - and the time available for final checking and labelling is often pressurised. Labelling is often implemented on the last day on site and without complete and accurate labelling the project may not be signed off, meaning payment may not be released. From the client’s perspective, labelling is a very visible part of the project when it’s complete. Consequently, in the eyes of the client, the quality and professionalism of the labelling reflects the quality and professionalism of the whole project, so it has to look perfect. In addition, labelling’s role is vital in assisting engineers to trace network problems. Clear and accurate labelling

will mean problems are solved faster, reducing downtime and the associated costs to the company. Incorrect, incomplete or inaccurate labels cause project delays and increased costs. It may mean that engineers have to return to site to complete the labelling when a project should have already been finished, increasing costs and taking them off another job. Delays in sign-off can mean penalty charges or payment delays, so affecting cash flow and profitability.


Installers are increasingly using Traffolyte-style engraved labels for labelling their installations, which provide the highly professional finish that clients increasingly demand. Clear and sharp labelling, permanently applied with the option of colour-coding, gives the client confidence in the whole job. Engraved labels also have the benefit of great durability, so will last as long as a 20-year warranty, for example. In the past, time was allocated for engineers to produce labels on site, using hand-held label printers. Not only does this result in a less professional appearance, it's a time-consuming, laborious task that isn't a cost-effective use of an engineer's time for large quantities of labels. When the pressure is on to complete a project, every minute counts.

Rapid delivery

Clear labelling counts


There is, however, still the issue that the exact labelling scheme and the data for the labels are sometimes not determined until right at the end of the project. Labels are often required the next day, which can be the last day on site. This was a problem for traditional engraved labels, which require a 2-3 day turnaround. However, at Sharpmark we have invested in bespoke software that enables us to produce labels with greater speed and production accuracy. This software, combined with high-speed laser engraving, enables us to provide better quality engraved labels and deliver them the next day.

Scenariio Intelligent Infrastructures specialise in the design, implementation and support of solutions deployed over traditional structured cabling solutions. Mark Palmer, the firm’s managing director, says that, as a long time installer of structured network cabling solutions, Scenariio know the importance of using high quality labelling systems. “Within our industry we have many clients who have to adhere to strict installation regulations when running cabling through sensitive food production areas or in military ducts, for example, where clear and precise cable labelling is mandatory,” he said. “We require a good range of colours, typefaces and mounting options that allow the client to design their own numbering identification systems. As our industry evolves and new technologies are developed we need things like high density fibre patch panel labels that were always a challenge to label in the past,” he added. Palmer went on to say that, in short, his company’s customers want to work with a supplier who understands their requirements and makes their job easier, reducing the stress and pressure that builds at the end of a project. “They [also] want to receive high quality labels, at the right time and the right place – at a good price,” he explained.


At Sharpmark we have all the latest dimensional data from equipment manufacturers, so our customers are confident that their labels will fit exactly. We also receive data in any format, which we can then convert to suit our system - we appreciate how precious the installer's time is. From the foregoing, I hope it is clear that labelling may represent a small part of the cost of a project, but our observations are that getting it absolutely right at the very end can influence the client's view of the whole project.


active products Austin Hughes solutions provide data centre managers and administrators with instant, secure, local and remote access control to mission critical equipment. Our leading edge CyberView™ LCD drawer and KVM (Keyboard, Video and Mouse) solutions provide the widest range, available on the shortest lead-times in the European market today, whilst ensuring capital equipment and software management costs are kept to an absolute minimum. Our InfraSolution® and SmartPDU® products enable data centre operations managers, IT administrators and facilities managers to enhance rack level security and equipment efficiency by using remote rack IP door access with swipe card control, plus temperature & humidity monitoring including integrated monitored and switched rack PDUs. To lower energy consumption, make more informed capacity planning decisions and improve operational efficiency, our InfraPower® locally metered, remotely monitored and switched rack PDUs are designed for use across the network, either locally via serial or over IP. Austin Hughes Europe Unit 1, Chancery Gate Business Centre Manor House Avenue, Southampton SO15 0AE, UK Tel + 44 2380 529303 Email: Web:

network infrastructure products Cannon Technologies is an international leader in the design and manufacture of IT infrastructure. From fully featured server racks, high density cooling and power management to remote control systems, all under BSI ISO 9001:2008, Cannon Technologies has serviced some of the world's leading organisations and is the ideal partner for challenging projects. Taking our 35+ years of experience in the market, Cannon Technologies has launched a completely unique modular data centre solution that will dramatically alter the way everyone views modular build techniques. The design is based on existing, market proven solutions and can be deployed in a fraction of the time required for traditional modular builds. Offering a wide range of in-built features such as: power protection; power management; cooling; fire detection & suppression; environmental & security monitoring; low PUE. Cannon Technologies Ltd Queensway, New Milton Hampshire, BH25 5NU, UK Tel: +44 1425 632600 Email: Web:

cable management Cablenet Trackmaster Ltd is an importer and distributor of networking, cabling and power products. As well as a wide range of imported copper and fibre optic cabling products and computer cables, Cablenet also distributes for a number of best-of-breed vendors. Cablenet has one of the UK's widest ranges of copper patch cables in stock, with cables available in 11 different colours and lengths from 0.3mtr up to 30mtr, and also has an in-house manufacturing facility to produce cables to your own specifications. Call our sales team on the contact details below for more information on this. Our sales staff are very knowledgeable about the products we sell, with particular expertise in Cabinets, KVM and UPS. Our 18,000ft2 southern logistics centre is within an hour's drive of central London and 30 minutes' drive from Heathrow airport, making Cablenet an ideal partner for integrators and installers who serve the UK, international financial markets and overseas customers.

network infrastructure products Datwyler UK Ltd has become iDaC Solutions Ltd. The name may have changed, but our high quality products, services, personnel and direct supply model remain the same. As the sole source for Datwyler Cabling Solutions in the UK and Ireland, iDaCS will adopt a business-as-usual approach, continuing Datwyler’s 30 years of trading in the UK market. iDaC Solutions Managing Director, Paul Cattell, is pleased to announce that “this change enables us to re-launch the Datwyler Cabling Solutions brand as part of an integrated system of products for the data network, elevator and fire safety markets. By increasing our solution offering to include complementary products and specialist services, iDaC Solutions can provide clients with an even greater level of flexibility and responsiveness to their changing business challenges”. To find out more please call +44 (0) 2380 279 999 or email

Cablenet Trackmaster Ltd Cablenet House 2A Albany Park, Frimley Road Camberley, Surrey GU16 7PL UK Tel: +44 1276 405 300 Fax: +44 1276 405 309 Email:

network infrastructure products Creating perfect connections is Metz Connect’s core competence. The personal commitment of the founding family characterizes the international success of the independent, medium-sized enterprise group, which together with its subsidiaries pursues the company’s goals with a high degree of responsibility. Highly innovative, efficient processes and partnerships have characterized the Metz Connect Group for decades. The company’s brands RIA Connect, BTR Netcom and MCQ Tech offer a diverse, innovative product portfolio of highly specialized connector components of the highest quality. Metz Connect Ottilienweg 9 78176 Blumberg Germany Phone +49 7702 5330 Fax +49 7702 533 433 Email: Web:

network infrastructure products

Established for 30 years, Comtec provides the trade with one of the most comprehensive product portfolios for building and maintaining communication networks. We stock everything from structured cabling and tooling to specialist fibre optic and copper test equipment, and aim to deliver quality products at the lowest possible price, next day.

• ADC KRONE premier distributor
• Nexans cabling solutions
• Cooper B-Line cabinets
• Over 5,000 product lines stocked
• Volume discounts
• FREE technical support
• Easy ordering by credit card or Trade Account

Orderphone: +44 1480 415400 Orderfax: +44 1480 454724 Email: Web:



network infrastructure products

Cray Valley is a leading distributor of networking, cabling infrastructure and IP physical security products, and prides itself on the innovative range in its portfolio. Its market-leading wireless LAN product from Extricom uses a unique single wireless blanket, giving it a number of technical advantages unavailable to traditional cell-based wireless systems. With an innovative and comprehensive range of IP door access, IP cameras and IP environmental monitoring from Axxess ID, coupled with excellent technical back-up support across the range from leading manufacturers, Cray Valley offers a partnership of choice to its customers. This is complemented by a full range of high-speed RF and FSO links, with free training courses available from the manufacturer on all products. Our cabling infrastructure systems from Siemon, Nexans and Matrix, all well-respected global manufacturers, cover a full range of Cat5e, Cat6, Cat6a, Cat7 and fibre. Cray Valley Communications Limited Unit 11, Concorde Business Centre Airport Industrial Estate Westerham, Kent TN16 3YN, UK Tel: +44 1959 573444 Fax: +44 1959 572172 Web:

network infrastructure products Mills is a leading distributor of structured cabling, cable management and specialist tooling for the communications industry. With a stocked product range of over 4,000 lines, Mills is the one-stop shop for your cabling infrastructure requirements.

network infrastructure products Excel is a world-class premium performance end-to-end infrastructure solution – designed, manufactured, supported and delivered – without compromise. Excel is driven by a team of industry experts, ensuring the latest innovation and manufacturing capabilities are implemented to surpass industry standards for quality and performance, technical compliance and ease of installation and use. Since the brand was conceived in 1997, Excel has enjoyed formidable growth and is now reported in the latest BSRIA UK market report as the 2nd largest structured cabling brand with 17% share of the UK market in 2013. The system is also a growing force in markets across EMEA and is sold and supported in over 70 countries. Excel European Headquarters Excel House Junction Six Industrial Park Electric Avenue Birmingham B6 7JJ UK Tel: +44 (0)121 326 7557 Email: Web:

network infrastructure products Minitran, established in 1989, is a leading distributor specialising in structured cabling systems, networking, audio visual, home automation products and security systems. Brands include Panduit, Nexans, Hubbell, Rittal, Dataracks, TE Connectivity, Belden, Schneider, Abitana, Domintell, Aten, Austin Hughes, Planex, Draka, Acome, GeoDesy, Noyes, Sharpmark, Psiber Data, Greenlee and our own Mini5/6 range.

Mills product range:
• Cabinets & Enclosures
• Cable Preparation & Termination Tools
• Structured Cabling
• Fibre Optics & Tooling
• Power Tools
• Voice Products
• Contractors Tools & General Hand Tools
• Active Products
• Coaxial and Audio Visual
• Overhead & Underground Cabling Equipment
• Power Distribution
• Trunking & Cable Management
• Safety Equipment
• Test Equipment
• Tool Kits & Tool Cases

Minitran product range:
• Structured cabling
• Home automation
• Audio visual
• Fibre optic
• Voice products
• Cable protection
• Ethernet switches
• Wireless LAN
• Enclosures
• Power products
• Test equipment
• Security
Mills is the premier distributor of the full Fusion structured cabling system range. Established for over 90 years, Mills is an ISO 9001 and Investors In People certified company. Free catalogue on request. Mills Ltd, 13 Fairway Drive, Fairway Industrial Estate, Greenford, Middlesex. UB6 8PW, UK Tel: 020 8833 2626 Email: Web:


Comprehensive stock is held in our warehouse for next day delivery or collection. Orders placed up to 5.30pm are despatched the same day. Our experienced sales and technical support team provide free advice and assistance with design. Minitran Ltd Unit 5 Myson Way, Raynham Road Industrial Estate Bishops Stortford, Hertfordshire CM23 5JZ Tel: 01279 757775 Fax: 01279 653535 Email: Web:

network infrastructure products The Fusion product range represents the outcome of two years of market research and focus groups to establish installers’ and users’ expectations of an end-to-end network cabling system. Altogether better because:
• Completely integrated – so everything fits together
• Cost effective – ensuring maximum return on investment
• Fast to install – every aspect of design optimised to save time
• Comprehensive range – providing a complete solution
• No excess packaging – save time opening packs and minimise impact on the environment
• 25 year warranty – providing peace of mind

Cat5e • Cat6 • Fibre • Voice • Coaxial • Audio Visual • Cabinets & Enclosures • Cable Management

Fusion, PO Box 556, Greenford, UB6 9JS, UK Tel: 0845 370 4709 Email: Web:

network infrastructure products With over 15 years of experience in the telecommunications training industry, we provide the most relevant and real-life data centre training courses available in the Europe, Asia Pacific, USA and EMEA regions. Our data centre courses have been geographically localised for all the locations we deliver in. The data centre training we offer is accredited by the largest number of independent organisations of any communications training provider; these include the IET, The CPD Certification Service, City & Guilds, BTEC (Edexcel) and BICSI. The Certified Data Centre Design Professional CDCDP™ and Certified Data Centre Management Professional CDCMP™ training courses bring together the essential components of proficiency to independently certify that an individual is a highly skilled and knowledgeable data centre professional. The data centre qualifications validate the individual’s ability and knowledge in standards, compliance and the application of global best practices in the design, creation and management of preeminent data centres. CNet Training Park Farm Business Centre, Farnham Saint Genevieve Bury St Edmunds, Suffolk, IP28 6TS, UK Tel: +44 1284 767100 Fax: +44 1284 767500 Web:

11 – 12 November 2014 RDS, Dublin Cloud & IT Security Ireland is a NEW independent conference and exhibition at which enterprise and business organisations can see the latest solutions available and receive independent practical information on the business arguments, software, technology and solutions they need to make better informed decisions. The Conference: Utilising a combination of case studies, panel discussions, technical papers and interactive forums, the conference will showcase the latest in new ideas, software, solutions and best practice. The Exhibition: Featuring leading companies, brands and value-added resellers, this is your chance to see and compare the latest in technology, software and innovative solutions, and source the suppliers who can assist you.

Themes addressed will include:
• What are the available options?
• How do I assess my future needs?
• Considerations when migrating to the cloud
• Does one size fit all?
• Security and the Cloud
• Future Technology
• Virtualisation and Storage
• Big Data

11-12 Nov 2014 RDS, Dublin. Co-located for success: Cloud & IT Security Ireland benefits from being co-located within DataCentres Ireland, the leading IT technology infrastructure event in the country.

To register your interest and receive more information contact Hugh on +44 (0) 1892 518877 or email

The Environ range of racks and open frames from Excel is designed to make installation and ongoing use quick, easy and efficient. The breadth of designs and products available makes them ideal for a wide range of applications in the enterprise, data centre, security and professional audio visual markets.

Simplicity by design • Performance by design • Flexibility by design • Specification by design • Services by design
