





Sharp focus

Intelligent infrastructure management (IIM) provides a clear picture of a network’s physical layer

CONTROL IT ALL WITH VERSIV™

One single tool for testing, troubleshooting and handling even the most complex cabling certification projects

GET READY TO OVERACHIEVE

The Versiv platform accelerates every step on the path to system acceptance. For a start, its revolutionary ProjX™ management system puts you firmly in control of multiple projects. It is also designed to get more jobs done right the first time with less supervision. The Taptive™ user interface makes the Versiv as intuitive as your smartphone. Our LinkWare management software drastically simplifies analysing test results and creating professional test reports. And a unique, future-proof and modular design lets you switch between copper certification, OLTS, OTDR and Wi-Fi analysis with the greatest of ease. No more unnecessary errors. No more separate tools. Versiv: get ready to overachieve.



Learn more at



Simply plug in a different module for copper certification, fiber certification or Wi-Fi analysis.






Publisher Dominic De Sousa
Group COO Nadeem Hood

The missing link

The influence of the bring-your-own-device (BYOD) phenomenon has spread wide in the workplace. On the cabling side, it has raised fresh awareness of intelligent infrastructure management (IIM), a solution already popular in networking circles.

The traditionally passive structured-cabling infrastructures that connect network devices have somewhat clashed with modern real-time network management tools, which have proved popular with Middle East CIOs keen to keep a complete view of their networks. IIM solutions essentially provide the missing link between the two, and offer a platform to increase productivity and agility whilst also providing a stronger infrastructure. Further benefits depend on each individual network; CIOs with disparate IT systems, for example, will find particular value in the technology.

Network challenges have proliferated in recent years as trends like cloud, virtualisation, Big Data and mobility, which links to the aforementioned BYOD, continue to strain traditional IT environments. With these new technologies promising real benefits for those keen to invest, and for those afraid of being left behind, the importance of a safe, reliable, fully available and optimised IT infrastructure is greater than ever.

IIM systems use scanners and intelligent hardware components to provide control and visibility of the cabling and active equipment, as well as moves, additions and changes, in real time. They enhance security and identify unused ports and network assets by monitoring physical change to the infrastructure, thus maximising network utilisation. Furthermore, they are able to integrate with all manner of IP devices, such as CCTV cameras, door access systems and intruder alarms.

Read more about IIM from industry experts in this issue of Cabling Planner, and learn how to better establish an ‘intelligent network’ and ultimately get more from existing IT infrastructure.

Managing Director Richard Judd +971 4 440 9126
Commercial Director Rajashree R Kumar +971 4 440 9131

EDITORIAL
Group Editor Jeevan Thankappan +971 4 440 9109
Editor Ben Rossi +971 4 440 9114

MARKETING AND CIRCULATION
Database and Circulation Manager Rajeesh M +971 4 440 9147

PRODUCTION AND DESIGN
Production Manager James P Tharian +971 4 440 9146
Design Director Ruth Sheehy +971 4 440 9112
Graphic Designer Analou Balbero +971 4 440 9104

DIGITAL
Digital Services Manager Tristan Troy Maagma
Web Developers Jerus King Bation, Erik Briones, Jefferson de Joya
Social Media & SEO Co-ordinator Jay Colina
+971 4 440 9100

Published by

1013 Centre Road, New Castle County, Wilmington, Delaware, USA
Head Office: PO Box 13700, Dubai, UAE
Tel: +971 4 440 9100  Fax: +971 4 447 2409
Printed by United Printing Press
Regional partner of

© Copyright 2013 CPI. All rights reserved. While the publishers have made every effort to ensure the accuracy of all information in this magazine, they will not be held responsible for any errors therein.


The complete picture

Network monitoring devices are important assets to IT executives who need to manage devices connected to their organisation’s network. With the trend of bring-your-own-device (BYOD) continuing to grow, keeping a complete picture of what is going in and out of an enterprise network has become of paramount importance.

4 CABLING PLANNER



Network connectivity documentation is a continuous, dynamic process that changes on a frequent basis. It is very difficult to keep track of what is going on with the physical layer, and what is happening to both physically and virtually connected devices. BYOD is only exacerbating this problem. That is where intelligent infrastructure management (IIM), or automated infrastructure management (AIM) as it is also known, comes in. It ultimately holds the key to combating these problems by providing information on the physical layer of the network.

“IIM combines software, controllers and intelligent patch panels to create a tool that allows you to add management of the physical layer into your holistic strategy for network management,” says Ehab Mohammed El-Kanary, Director of Sales, KSA and Egypt, CommScope. “An ‘intelligent network’ without physical layer-1 intelligence is effectively not a true intelligent network. This is an essential piece that is missing in many of today’s network monitoring toolkits.”

The IIM system catalogues the configuration of the physical infrastructure in the network, including the devices attached at each end of the network data channel.

This includes the creation of work orders, physical maps and other information to provide an understanding of physical connections. “IIM should actively monitor the configuration and connectivity of the physical infrastructure and the connectivity of the devices attached to the network, and alert when authorised and unauthorised changes occur,” says Sam Huber, Product Manager, Molex Premise Networks. “It also must provide reports on the current state of the physical infrastructure, and documentation on how the current state was arrived at.”
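Huber’s description of monitoring the physical configuration and alerting on authorised and unauthorised changes can be pictured with a minimal sketch. Everything below is illustrative: the port names, device labels and the idea of a work-order “authorised” set are assumptions for the example, not details of any vendor’s product.

```python
# Minimal sketch of change detection in an IIM system: the recorded
# baseline is compared against the scanned state of each patch-panel
# port, and any difference not covered by an open work order is
# flagged as unauthorised. All names are hypothetical.

def detect_changes(baseline, scanned, authorised_ports):
    """Return alerts for connectivity changes.

    baseline / scanned: dicts mapping port id -> connected device (or None).
    authorised_ports: ports with an open work order, i.e. expected to change.
    """
    alerts = []
    for port, expected in baseline.items():
        actual = scanned.get(port)
        if actual != expected:
            status = "authorised" if port in authorised_ports else "UNAUTHORISED"
            alerts.append(f"{status} change on {port}: {expected!r} -> {actual!r}")
    return alerts

baseline = {"panel1/01": "server-a", "panel1/02": "server-b", "panel1/03": None}
scanned  = {"panel1/01": "server-a", "panel1/02": None,       "panel1/03": "laptop-x"}

for alert in detect_changes(baseline, scanned, authorised_ports={"panel1/02"}):
    print(alert)
```

The second alert is the interesting one: an unknown device appearing on a supposedly empty port, the kind of event BYOD makes more common.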

In many installations, the physical infrastructure is managed with “on-board tools” such as Excel spreadsheets and Visio graphics, says Shibu Vahid, Head of Technical Operations, R&M MEA and Turkey.

“Sometimes, even paper, pencil and Post-it notes are used,” Vahid says. “You quickly reach your limits, though, if you try to apply these methods to large data centres or complex building cabling systems. Incorrect, out-of-date and unreliable documentation makes changes to the infrastructure something like walking a tightrope without a safety net. Sensible expansion plans and risk analysis are simply impossible. The purpose of an IIM is to facilitate the management of the passive infrastructure in the future.

“In an IIM system, the entire infrastructure is represented in a consistent database, a single ‘source of truth’. This database provides precise and real-time information on the current state and future requirements of the data centre.”

Working towards an ‘intelligent network’

Many organisations strive to better align IT with their business operations. Those who succeed have a high degree of flexibility that allows them to change their business strategies to adapt to global, economic and market trends, which can change quickly and often. A better-managed, intelligent network starts with a better understanding of what network management actually is, and there are several dimensions to effective management, which El-Kanary expands on.

“First, successful businesses must be able to depend on IT infrastructure that’s fast, flexible and able to adapt quickly to fast-changing market trends in our 24-hour world,” he says. “At the same time, these businesses must also minimise expensive, revenue-impacting downtime.

“Considering that almost a third of downtime can be attributed to human error, a better-managed network should display greater independence and intelligence, proactively alerting administrators to small problems before they become big problems and expensive downtime.

“On top of all this, a better-managed network must bend the cost curve to its advantage, both in the short and long term. This means reducing the network’s energy and space requirements, as well as mapping out an intelligent upgrade path. An intelligent network through an effective IIM implementation will deliver all these advantages so your business can stay ahead of a fast-changing marketplace, and ahead of the competition.”

As networks grow in complexity, and the frequency and velocity of changes across the network increases, intelligent physical layer management must be a part of the intelligent network. Complete and current knowledge of the physical network provides an anchor to locate where the logical elements of the network are, Huber adds.

“As applications become less tied to physical servers and more to virtual machines hosted on a server, an intelligent network will need IIM to be the anchor to ensure the physical network can support the logical elements on the network,” he says.

There is an ongoing trend in the Middle East towards the appreciation and adoption of advanced technology, and IIM is included in that. Now that most companies are on their way out of the recession, there is a tendency to unearth buried projects and invest more in IT, as companies start to understand that it is in technological advances that their competitive edge lies.
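The “single source of truth” database that Vahid describes can be sketched with a toy schema. The table layout, cabinet and port names below are hypothetical, but they show how a capacity question such as “how many free ports are in each cabinet?” reduces to one query once the physical layer is catalogued.

```python
# Sketch of an IIM "source of truth" as an in-memory SQLite database
# (hypothetical schema, for illustration only): one table of patch-panel
# ports lets capacity questions be answered with a single query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ports (
    cabinet TEXT, panel TEXT, port INTEGER, connected_device TEXT)""")
db.executemany("INSERT INTO ports VALUES (?, ?, ?, ?)", [
    ("CAB-01", "P1", 1, "server-a"),
    ("CAB-01", "P1", 2, None),          # free port
    ("CAB-01", "P2", 1, "switch-1"),
    ("CAB-02", "P1", 1, None),          # free port
])

# Free ports per cabinet -- the kind of inquiry the article says
# becomes "quick and easy to answer precisely" with such a database.
free = db.execute(
    "SELECT cabinet, COUNT(*) FROM ports "
    "WHERE connected_device IS NULL GROUP BY cabinet ORDER BY cabinet").fetchall()
print(free)  # [('CAB-01', 1), ('CAB-02', 1)]
```

In a real IIM deployment the database would be populated automatically by the intelligent patch-panel hardware rather than by hand.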

“The IT leaders of today with entrepreneurial skills will help drive their companies to success, and there is an increase in talented IT experts ready to prove themselves,” El-Kanary says. “Taking the right steps towards implementing intelligent networks is imperative.”

Huber says several trends have contributed to pushing the market toward adoption of intelligent networks. “Automation of documentation and troubleshooting tasks enables IT resources to respond to and resolve physical layer issues in a more efficient and timely way,” he says. “The increasing complexity of networks and applications demands better knowledge of the physical network and the impact changes in the network have on applications. The increase in regulatory and compliance requirements and initiatives mandates having detailed knowledge and control of all aspects of the network, including the physical network elements.”

A greater need

According to Vahid, the need for IIM systems will increase considerably in the near future, because the physical infrastructures in data centres are becoming ever more complex and can only be controlled completely with automated technology. In addition, there are several standards that indirectly make IIM necessary.

“These are the standards, guidelines and recommendations dealing with the secure operation of data centres and risk management, and include ITIL (the IT Infrastructure Library), ISO/IEC 20000, ISO/IEC 27001, Basel III and SOX (the Sarbanes-Oxley Act),” Vahid says.

Whilst the main benefit of an IIM solution remains having current information on the configuration of the physical infrastructure, disaster recovery and asset and inventory audits also come into play.

“Monitoring and maintaining the status of the physical configuration and updating records as changes occur, as well as informing resources that a change has occurred, is another benefit,” Huber says. “A significant amount of all downtime is associated with physical changes that are incorrect, unauthorised, or had unintended consequences. IIM systems enable visibility into these changes. The sharing and communication of physical infrastructure information and changes in the configuration with other systems increases the ability to correlate incidents with root causes, and to identify the impact a change has on the application and logical elements that use the physical network.”

In an IIM system, the entire infrastructure is represented in a consistent database. As such, inquiries into resources such as server ports and space in cabinets, as well as about energy requirements and cooling capacity, are quick and easy to answer precisely with this database, Vahid adds.

“Improved capacity utilisation of the existing infrastructure, as well as the simple and exact planning of changes and expansions, are immediate advantages of IIM systems. They enable controlled modifications to the infrastructure. If the IIM hardware is installed in the field, installation engineers are given direct support in their work. IIM accelerates patching and helps avoid errors and mix-ups. It monitors the installation in real time and detects any disruptions to operations. This reduces repair times and also increases the availability of the system. Further functions of IIM are risk analysis and protection against disasters.
IIM systems can be used to run through scenarios and plan emergency measures.”

Deploying IIM

In order to deploy IIM, there is a minimum equipment configuration required for the server computer. “Additional memory, disk space and processing power may greatly enhance overall productivity,” El-Kanary says. “Running other applications on the same system is not recommended, as it may degrade the performance of the system manager or cause interoperability problems. We also strongly recommend using a backup (uninterruptible) power supply for your server computer.”

According to Huber, the goals set by the customer for an IIM implementation should drive the development of use cases and tests the team will utilise to design the system and procedures.

“Defining up front what will make the IIM implementation successful will keep the project team focused on what is necessary to achieve success, and avoid activity that does not contribute toward meeting the goals,” he says. “Define the systems that will interact with the IIM system and develop processes that will maximise the benefit IIM information can bring to other systems. Also, provide training for IIM system operation in addition to the new processes and procedures.”

Furthermore, the choice of technology and system is very important, as it could either raise limitations or facilitate options for expansion, Vahid adds.

“The system configuration needs to be done wisely with the help of template-based modelling with an intuitive user interface,” he says. “Open-architecture systems are recommended for easy integration with third-party systems. Contactless data acquisition with no influence on data transmission is a preferable solution, as it does not violate well-established cabling standards for connecting hardware, and therefore presents a neutral and viable solution for any future transmission upgrade requirements.”

When it comes to measuring the all-important ROI, the first step is to create measurable goals, Huber advises.

“Be specific about how IIM will provide value to the business,” he says. “Translate the additional information IIM will provide into specific outcomes that will benefit the business. Understand the current costs imposed on the business that IIM will reduce or eliminate. This includes the opportunity cost of activities currently done by the IT staff that IIM can automate, which prevent the staff from completing activities that are of higher value to the business. Look beyond the immediate beneficiaries of IIM systems to other areas, such as inventory, disaster recovery, scheduling of IT resources, and compliance, for additional benefits.”

One high-profile end user that took full advantage of IIM, by partnering with CommScope, was Saudi Arabia’s Eastern Province Municipality. When cabling complexity became a problem, the Municipality looked for a new, unified network infrastructure for its data centre.

Together with a network connecting nearly 3,000 municipal employees, all of the Municipality’s e-services are supported by a data centre in the provincial capital, Dammam City. As the range of services provided online grew, so did the scale and sophistication of the data centre. By 2011, it had started to present some network infrastructure issues.

“IT projects delivered in recent years by various vendors resulted in cabling infrastructure so complicated that maintaining it was becoming difficult,” says Sami Algrooni, the Municipality’s IT Manager.
“To remedy this, we wanted a unified approach that would let us improve the organisation of our infrastructure and eliminate ‘patch cord spaghetti’ in communications cabinets. This has enabled us to trace problems and provide support quickly and efficiently.”


Flexible data centres

With demand for storage and networking capabilities skyrocketing, in-house and hosted data centres are being put under greater pressure than ever before. So how do you future-proof a data centre? The truth is, you can’t, says Ciaran Forde, VP, Enterprise, MEA, CommScope. What he does advise is to build a data centre that is capable of evolving over time.


The construction of an in-house data centre or the leasing of space in a colocation facility can require considerable upfront investment, but finding the money needed to fund this isn’t the only problem enterprises face. The demands on an enterprise’s IT infrastructure are growing so fast that many new facilities are out of date by the time the move from planning and commissioning to physical infrastructure rollout completes, a process that typically takes around two years.

Faced with challenges like these, it’s easy to see why enterprises are now looking at what they can do to ensure that their IT infrastructure can quickly adapt to future demands. But with demands on data centres growing faster than ever, achieving this goal is no easy task. The question of how to future-proof a data centre is often asked, but the truth is you can’t. There is no single way to prepare a data centre for the future. However, what you can do is build a data centre that is capable of evolving over time to meet changing demands and take advantage of new innovations.

Better planning

Many businesses have miscalculated their future needs when planning a new data centre and built one that is too small for their needs. Fearful of making this mistake, many more have built data centres that are far bigger than what they actually require.

The advent of virtualisation was to blame for many mistakes like this a few years ago. Businesses built data centres on the proviso that they’d only be able to run one application per server. Then along came virtualisation and suddenly they could run multiple virtual servers on a single physical machine, conserving a vast amount of power and money. All of a sudden they had lots of spare servers gathering dust.

Although virtualisation is now commonplace, many businesses still encounter problems with their data centres that stem from poor planning in the building phase. In order to properly plan a data centre, it’s important to take a holistic view of the design and build. In some cases, people have been put in charge of different sections of a new data centre, and a lack of communication has led to problems later down the line.

To avoid issues like this, there are a number of planning tools you can take advantage of. A Data Centre Infrastructure Management (DCIM) solution will give you a view of the data centre as a whole, allowing you to manage all parts of the data centre and see how they connect with one another. This can help you to control power costs, reduce the risk of downtime, and boost operational efficiency. In addition, you can also use data centre capacity planning tools to calculate how much capacity you will need in the future as your requirements change.

Pain-free upgrades

Every business’s needs change over time, so it’s vital you make sure that your data centre can grow with your business. Rather than building a data centre you can ‘grow into’, it’s best to build one that you can easily expand in stages as and when necessary, without too much disruption or expense. At the very simplest level, this can mean making sure that there is enough physical space available on site to add new cables or servers in the future.

So that changes can be made with ease, businesses are increasingly deploying modular data centres that can plug into existing systems. IDC expects around 220 modular data centres to be deployed globally during 2012, up from 144 last year. A modular data centre is essentially a purpose-built module containing servers, and standardised storage and networking components. Some modular data centres will also contain cooling systems. These modules can be attached to existing data centres. This approach is typically much quicker than building a bespoke solution to expand an existing data centre or building a brand new one.

To make upgrades even easier, many solutions are now fitted with pre-terminated cables that can be plugged in to other components, eliminating the need for cables to be wired on site and the risk of mistakes being made by engineers. The impact of upgrades on performance is also a concern for companies in the age of the ‘always-on’ business. Fortunately, it is now possible to upgrade data centres whilst they are in service, which means that

there is no need for IT professionals to make changes to data centres outside of normal working hours.

Limitations

As Robert Burns put it, the best-laid schemes of mice and men often go awry, and the data centre is no different. However much planning you undertake, there can always be surprises that crop up. By taking the time to plan properly and using all of the tools that are available to you, your business’s data centre should be much better placed to overcome whatever challenges are thrown at it in the future.

Financial constraints can also prove a limitation for some enterprises. Building a data centre does, after all, require a substantial up-front investment. For this reason, it can be tempting to reduce expenditure by using cheaper equipment, but this can turn out to be a false economy. Rather than looking at what is cheapest, businesses should look to invest in quality technology that will provide the best value in the long term.

Take Category 6 and Category 6A cables, for example. Whilst cabling a data centre with 1Gbps Category 6 Ethernet cables can support the needs of most data centres today, many enterprises will look to move to 10Gbps Category 6A cables in the next few years as data traffic continues to grow. Although it may be less expensive to install Category 6 cables today, installing Category 6A cables could work out more cost effective in the long term, as it negates the need for an upgrade in a few years’ time. This is one of the reasons why the cabling standards bodies recommend Category 6A cabling as the minimum requirement in the data centre cabling standards.

All things considered, it’s worth putting in the extra effort to make the right decisions for your business. Whilst laying the groundwork to meet future needs may seem a daunting task now, prudent planning for the long term will prove a sensible investment for any enterprise looking to build a new data centre or upgrade an existing facility.
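The Category 6 versus Category 6A trade-off described above is, at heart, simple arithmetic: a cheaper install now can cost more over the facility’s life once a later re-cable is counted. A back-of-envelope sketch, with entirely hypothetical figures rather than real market prices:

```python
# Back-of-envelope lifetime cost comparison, as the article suggests:
# install cheaper Category 6 now and upgrade to 6A later, versus
# installing Category 6A from day one. All amounts are hypothetical
# placeholders, not quoted prices.
def total_cost(install_now, upgrade_later=0.0, rework_labour=0.0):
    """Lifetime cost = initial install + any later upgrade and rework."""
    return install_now + upgrade_later + rework_labour

cat6_then_upgrade = total_cost(install_now=100_000,
                               upgrade_later=130_000,  # 6A re-cable later
                               rework_labour=40_000)   # removal and downtime
cat6a_day_one = total_cost(install_now=130_000)

print(cat6_then_upgrade, cat6a_day_one)  # 270000.0 130000.0
assert cat6a_day_one < cat6_then_upgrade
```

The exact break-even point depends on real cable and labour prices, but the structure of the comparison is the same: the second install, not the first, is where the false economy shows up.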

INTERVIEW Nexans Cabling Solutions

Artificial intelligence

Tarek Helmy, Regional Director Gulf, ME and East Africa, Nexans Cabling Solutions, provides further insight into intelligent infrastructure management (IIM).


Is the implementation of an IIM solution necessary to achieve an ‘intelligent network’?

By combining information from different systems in real time, operational managers can monitor, record and take action on any event which occurs on the network. This could be a patch change in the comms room, a connection or disconnection of a user terminal, the activation of a security alarm, a door controller, an IP camera, PDU monitoring and control, or a temperature sensor. Although these individual events can be identified without IIM, by including the physical data in the process, the precise location of the event or device is also known.

The benefits are immediately apparent. If a problem is identified which needs personal involvement, the right level of technician can immediately be sent to the correct location without wasting expensive time looking for the problem in the first place. Asset management becomes a reality when the whereabouts of each asset can be tracked, identified, and even checked to see if it has the latest firmware upgrade.

In addition, by linking together these disparate events into a central point, the overall system becomes much more powerful than the sum of its components. Different events can be programmed to deliver different responses, providing a system that really is ‘intelligent’.
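The point about physical-layer data pinning an event to an exact location can be sketched as a lookup from a switch port to the outlet it is patched to. The record structure and all identifiers below are hypothetical illustrations, not Nexans product detail:

```python
# Sketch of how including physical-layer data turns a bare network
# event into a located, actionable alert: the IIM records map each
# switch port to the outlet and room it is patched to.
# All names and identifiers are hypothetical.
PATCH_RECORDS = {
    ("switch-3", 14): {"outlet": "OUT-2F-117", "room": "2nd floor, office 117"},
    ("switch-3", 15): {"outlet": "OUT-2F-118", "room": "2nd floor, office 118"},
}

def locate_event(event):
    """Enrich a raw event (switch, port, kind) with its physical location."""
    loc = PATCH_RECORDS.get((event["switch"], event["port"]))
    if loc is None:
        return f"{event['kind']} on {event['switch']}/{event['port']}: location unknown"
    return (f"{event['kind']} on {event['switch']}/{event['port']} "
            f"at {loc['outlet']} ({loc['room']})")

msg = locate_event({"switch": "switch-3", "port": 14, "kind": "disconnection"})
print(msg)  # disconnection on switch-3/14 at OUT-2F-117 (2nd floor, office 117)
```

Without the patch records, the same event would only name a switch port; with them, a technician can be sent straight to the right room.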

What are the market trends toward ‘intelligent networks’?

Today, there is a significant global market for IIM systems, as ISO and IEC standardisation bodies have recognised. However, one of the barriers to widespread adoption is the difficulty in scoping an implementation. In order to help solve this problem, ISO has been contributing to the necessary ‘standardising’ of requirements for IIM and its interaction with other systems, such as BMS, HR and fire systems. Two standards (one ISO/IEC and one CENELEC) are addressing this issue.

The ISO/IEC 14763-2 standard specifies requirements for the planning, installation and operation of cabling and cabling infrastructures. This includes cabling, pathways, spaces, earthing and bonding, and covers “administration”, where IIM resides. The annex and ongoing activities regarding IIM standardisation will help IIM claim its place next to other current building management systems. Not surprisingly, there is considerable support for higher-level standardisation on this subject.

Furthermore, interconnection between different manufacturers’ solutions and third-party applications could be strongly supported through a data exchange solution, for example by helping to create a workable DCIM solution built around an IIM core.

What are the main benefits IIM solutions provide?

IIM solutions significantly reduce network downtime by automatically detecting faults, security breaches, device changes and connection issues, whilst identifying the most likely location of the incident. They help to increase the productivity and efficiency of IT management processes, and provide valuable information and intelligence from the event log. Using the ability of an IIM solution to identify the actual location of an IT asset in real time, IT managers can physically pinpoint an offending device within seconds of an incident occurring. The advantage of IIM over and above pure patching is that the technology can deliver the benefits listed above with greater accuracy, and at lower cost, than a manual equivalent.

What are the typical challenges encountered during an IIM implementation?

In general, it is fair to say that the above-mentioned benefits and cost savings will be realised by an operational team, whereas the upfront cost may well be budgeted from an entirely different department. Also, simply installing ‘intelligence’ on its own will neither solve the spaghetti problem nor deliver ROI. Intelligence cannot replace tidy cable management. Similarly, improving the productivity of maintenance staff will not deliver savings if the time saved is not utilised effectively for additional tasks. In order to obtain maximum benefit from an automated infrastructure system, it is necessary to review internal processes and procedures and adapt working practices as a result.

How can organisations overcome these challenges?

A consultative approach is necessary, and it is very important to work closely with the end client to understand where the operational savings can be realised. This may be from improved maintenance productivity, reduced downtime, minimised business loss, enhanced security, or even reduced cost by enabling remote or outsourced maintenance.

How can businesses quantify and measure the ROI of an IIM implementation?

The key issue is to recognise that different types of business or application will have different needs and perceive different benefits. Although this will always include an ROI justification, cost savings may manifest themselves in completely different ways. Large-scale enterprise has been the historical market for IIM, on the basis of cost savings obtained from fast, effortless moves, adds and changes (MACs) and accurate asset management. Available switch ports can be identified and utilised, saving on unnecessary additional switches.

In data centres, IIM can assist by ensuring that regular expansion requirements make optimal use of any existing ports, whilst maintaining accurate records of the complex array of interconnections. Security is also an important feature which can be improved. Heat and power supply are also critical operational issues for data centre managers, and IIM can also be used to monitor, report and alert on any abnormal events.

The accelerated transition of building management devices onto a converged IP platform introduces many benefits for integrated control. VoIP, IP CCTV, door access control and intrusion detection devices can all be physically pinpointed, monitored, recorded and controlled via a central interface, reducing the complexity of the physical cabling required to support them.

ROI calculations differ, as in each case the potential savings can be realised in different areas and would make a different contribution to the overall project. However, independent post-installation analysis of some of the earlier intelligent installations has demonstrated that the initial ROI estimates have proved fairly accurate, typically being around three years.

What best practices would you advise in both implementation and then running the solution?

When implementing IIM, it is vital to consider the implications for building automation, a self-documenting BMS, self-healing, and quarantining unknown devices. Human resources and IT need to be closely involved, with links to HR data providing basic user information. If IIM is used in data centres, it is important to consider implications regarding data centre capacity planning, as well as making sure planned changes can be carried out as intended.

For running the solution once implemented, the best practice is for ‘ownership’ of the tool to be clearly allocated to a department, and for a ‘key user’ or ‘champion’ to be appointed to ensure the tool is being used and most of its rich functionality is applied.
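The “around three years” ROI figure quoted in the interview corresponds to a simple payback-period calculation. A sketch with illustrative numbers only: the savings categories follow the article, but every amount is an assumption.

```python
# Simple payback-period calculation behind an ROI estimate like the
# "around three years" figure in the interview. The savings categories
# follow the article (MAC labour, downtime, asset audits); all amounts
# are illustrative assumptions, not measured data.
def payback_years(upfront_cost, annual_savings):
    """Years until cumulative annual savings cover the upfront investment."""
    return upfront_cost / annual_savings

annual_savings = (
    30_000    # faster moves, adds and changes (MAC labour)
    + 20_000  # reduced downtime
    + 10_000  # automated asset and inventory audits
)
print(payback_years(180_000, annual_savings))  # 3.0
```

Each business would substitute its own measured costs; the point is that the ROI case rests on quantifying savings per category, exactly as Huber advises earlier in the issue.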

OPINION R&M

A costly component Data centre cabling is of strategic importance to the performance, reliability and flexibility of the data centre, says Shibu Vahid, Head of Technical Operations, R&M MEA and Turkey.


The modern data centre has come to represent the unification of a range of cutting-edge technologies, coupled into a single complex entity which, although tucked away from all but a select group of IT engineers, facilitates the smooth operation of business as we now know it. The role of IT in meeting ever-growing business demands is fuelling the increased density of servers, storage and networking devices, and most importantly, data centre cabling.

Data centre cable management is one of the most important aspects of data centre design and operation. The performance, reliability and flexibility of the data centre are all tied strongly to the systematic execution of this ongoing activity. A lack of cable management impacts serviceability and availability, whereas a good strategy can enable rapid and dynamic scaling of the IT infrastructure while minimising the downtime required for these changes.

Cut risks, not costs Tight budgets, coupled with the expectation to still deliver the best technology solutions, mean that IT managers are sometimes tempted to cut corners, and often the physical infrastructure is the first target of such efforts. At first glance, opting for lower quality cables and connectors may seem like a viable option. But given that cabling generally accounts for a mere five to 10 percent of the overall cost of the data centre, the trade-off rarely reaps dividends in the long run. Low-quality cabling can create multiple points of failure that are often difficult to pinpoint. Investing instead in cabling solutions which meet industry standards can provide the robust physical backbone upon which the modern data centre can be built.
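To put the five-to-ten-percent figure in perspective, a quick sketch (the $10M total build cost is a hypothetical example, not from the article):

```python
# Illustrative cost-share arithmetic: cabling is typically five to
# ten percent of the overall data centre cost, per the text.
# The $10M total build cost below is a hypothetical example.

def cabling_cost(total_cost: float, percent: float) -> float:
    """Cabling spend implied by a given percentage share of the build."""
    return total_cost * percent / 100

total = 10_000_000
print(cabling_cost(total, 5))   # 500000.0
print(cabling_cost(total, 10))  # 1000000.0
```

A modest slice of the budget, which is the point of the argument: the savings from cheaper cabling are small relative to the cost of the failures it can cause.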

Use patch panels in racks Another means employed to reduce costs is to perform cable connections by plugging crimp-on connectors directly into networking equipment.



But any changes down the line would result in a great deal of confusion, since the haphazard nature of this approach makes management a nightmare. Opt instead to use patch panels in each rack whenever possible. This is possibly the most effective method of cable management. While it does entail an initial investment, the long-term benefits of simplified management, and of speed and flexibility at the time of upgrades and changes, will justify the expenditure. Furthermore, the high density of these solutions can lead to vital space savings in the data centre. New high-density (HD) patch panels offer 48 RJ45 ports in a single compact unit. This means less room is needed for cabling, allowing a larger number of active components to fit into a cabinet. HD panels therefore provide a foundation for building up modern high-performance data networks operated with 10 and 40/100 Gigabit Ethernet. Furthermore, thanks to their inherent strain relief, the cables lie in a straight line and cannot be twisted. This is an important prerequisite for stable signal transmission in copper cables and enhances the longevity of the cables. Large organisations can further simplify cable management by utilising angled patch panels. These allow data centre managers to make better use of the space at the front of the rack. Instead of threading patch cords through horizontal brackets at the front, they can be laid in side brackets and then plugged directly into the connector sockets. This method is also easier on the patch cords, because they no longer have to be bent and threaded into the clips at the front of the rack. The best approach is to use angled patch panels within high-density areas, such as the cable distribution areas, and straight patch panels at the distribution racks.
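As a rough sketch of the space savings (the 48-port HD figure comes from the text; the 24-port standard panel, the 1U height and the 480-port example are assumptions):

```python
# Rack-space arithmetic for high-density patch panels.
# 48 ports per HD panel is from the text; the 24-port standard
# panel and the example port count are illustrative assumptions.

PORTS_PER_HD_PANEL = 48   # HD panel: 48 RJ45 ports in one unit
PORTS_PER_STD_PANEL = 24  # typical standard 1U panel (assumption)

def rack_units_needed(total_ports: int, ports_per_panel: int) -> int:
    """1U panels required to terminate total_ports (ceiling division)."""
    return -(-total_ports // ports_per_panel)

ports = 480
print(rack_units_needed(ports, PORTS_PER_STD_PANEL))  # 20 panels
print(rack_units_needed(ports, PORTS_PER_HD_PANEL))   # 10 panels
```

Halving the panel count frees rack units for active components, which is the space argument made above.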

Cable naming and colour coding Logical naming that will uniquely and easily identify each cable in the data centre can greatly accelerate cable

tracking and thus the troubleshooting process. While these naming conventions simplify management, it should be noted that identifiers such as server name, port name and switch name should not be part of the scheme, as active equipment is changed far more frequently than the cabling infrastructure. Efficiency can be further increased by colour coding cables according to the purpose of the connection. Different colours may be assigned to things like cabinet-to-cabinet, backup network, storage area network (SAN), public network

solutions conserve valuable data centre real estate. Pre-terminated copper or fibre cables are easily customised and easily installed, and since they eliminate cable slack, they improve air circulation in data centres and thus reduce the overall energy consumption of the facility.

Remove abandoned cables While this may seem to be a logical given, it is not unusual to see unused and abandoned cables cluttering the floors of data centres. Apart from being a general inconvenience and an eyesore, such stray cables raise more

and uplink connections, for easy identification. Leading cable vendors offer a high degree of customisation, and both naming and colour coding can be easily adopted by selecting the right vendor partner.

“Although cabling represents a fraction of the overall cost of the data centre, given its lifetime in comparison to active equipment, it is no doubt the most complex and costly component to replace."
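The conventions described here, location-based identifiers that omit active-equipment names, plus colours keyed to connection purpose, can be sketched as a small helper (the scheme, names and colour assignments below are illustrative, not from any standard or vendor):

```python
# Illustrative cable-labelling scheme: identifiers are built from
# fixed physical locations (room/rack/panel/port), never from active
# equipment names, which change far more often than the cabling.

COLOUR_BY_PURPOSE = {          # example colour assignments only
    "cabinet-to-cabinet": "yellow",
    "backup-network": "grey",
    "san": "orange",
    "public-network": "blue",
    "uplink": "red",
}

def cable_label(room: str, rack: str, panel: str, port: int) -> str:
    """Location-based identifier, e.g. 'DC1-R07-PA-24'."""
    return f"{room}-{rack}-{panel}-{port:02d}"

print(cable_label("DC1", "R07", "PA", 24))  # DC1-R07-PA-24
print(COLOUR_BY_PURPOSE["san"])             # orange
```

Because the label encodes only the physical path, it survives every switch and server refresh untouched.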

Preterminated fibre and copper cabling systems The modularity of pre-terminated cabling solutions in the data centre enables fast installation times and swift upgrades, which together reduce the total cost of network ownership. The plug-and-play nature of these systems means that installation does not require skilled specialists. Rapid modifications are also possible, as these systems lend themselves perfectly to moves and changes. Also, the high-density design of these

serious concerns, such as restricting airflow, thereby reducing cooling efficiency. They also present a fire hazard. Removal of unused cables is therefore a must. Although cabling represents a fraction of the overall cost of the data centre, given its lifetime in comparison to active equipment, it is no doubt the most complex and costly component to replace. By utilising the tips mentioned above, data centre administrators can reap the benefits of simplified cable management, ease of troubleshooting, reduced cost of operation due to better air circulation, and, best of all, better return on investment through the maximum utilisation of the physical infrastructure even as active components go through their never-ending upgrade cycles.

Advertorial: Fluke Networks

New Fluke Networks family of certification tools improves profitability for cable installers




Fluke Networks has announced its new family of Versiv Cable Certification Testers, designed to help data communications installers achieve system acceptance for copper and fiber jobs more quickly, accurately and profitably. Versiv is a powerful platform offering interchangeable modules for copper, fiber and Optical Time Domain Reflectometer (OTDR) testing, as well as new software innovations that speed test time and accuracy, and simplify testing setup, planning and reporting. In a global study of cabling professionals, mistakes, complexity and rework were found to add more than a week of labour to a typical 1,000-drop cabling installation, resulting in average losses of more than $2,500. To combat these growing challenges, Versiv has been built from the ground up to go beyond testing and troubleshooting to address the entire certification lifecycle. Its new capabilities help contractors manage the complexities of today’s certification landscape, and reduce errors that can threaten profitability. Key to simplifying the complexity is the new ProjX management system. In addition to allowing team leaders to set up test parameters to work across multiple jobs and media, the system accelerates planning and setup of projects by allowing technicians to capture consistent test parameters across an entire job, or switch from job to job by simply clicking between projects stored in the tester. The system also allows up-to-the-minute project analysis and oversight to help speed certification and reporting. If problems are

encountered during the testing process, technicians can create a “Fix Later” troubleshooting to-do list for later evaluation by more experienced installers. Versiv also features an intuitive and instructive touch-screen interface that elevates the capabilities of less experienced installers, and increases the speed of testing and of compliance with global ISO Level V testing. From wizards that speed setup, to the advanced Taptive user interface for navigation, to new workflow enhancements, all of the new features in Versiv combine to make it the fastest tester on the market, so jobs get done right the first time. “When doing cabling installation and certification, the difference between a job being profitable versus a loss is oftentimes just a few percentage points,” said Jason Wilbur, Vice President and General Manager of the datacom cabling installation business unit at Fluke Networks. “In 2004, we defined the certification market with the introduction of our industry-leading tester, the DTX, which was focused on certification testing speed. But today’s challenges have changed and our customers must improve their agility and reduce errors when working across multiple mediums, codes and projects. The Versiv family is razor focused on helping our contractors profitably manage the complexities that are now part of their new normal.” Fluke Networks is the world-leading provider of network test and monitoring solutions to speed the deployment and improve the

performance of networks and applications. Leading enterprises and service providers trust Fluke Networks’ products and expertise to help solve today’s toughest issues and emerging challenges in WLAN security, mobility, unified communications and data centres. Based in Everett, Wash., the company distributes products in more than 50 countries. For more information, visit www. or call +1 (425) 446-4519. For additional information, promotions and updates, follow Fluke Networks on Twitter (@FlukeNetDCI), on Facebook, or on the LinkedIn Company Page.



In the driver’s seat To understand 10 Gigabit Ethernet, 40GE and even 100GE, it is essential to understand the technology drivers that are causing manufacturers and network users to adopt them, says Asef Baddar, Senior Technical Manager, Network Solutions, Leviton.




Coupled with these technology drivers, we see trends in the infrastructure and the use of containerised data centres (DCs) to achieve rapid deployment. Containerised DCs are used as a complement to, not a replacement for, traditional facilities, and are optimised for homogeneous environments. Increased density for specialised environments and the need for more connections require more space allocation, which is addressed in the TIA-942 standard. New trends in the infrastructure include a POD build-out strategy to ensure modular design, and proper cooling options such as cold/hot aisle containment, in-row cooling and cooled cabinets. Of course, more port connections mean more cabling and, therefore, better containment to handle all the cabling from fibre and copper solutions.

DC owners are increasing their focus on overall DC appearance, whether the facility is used for tours as an enterprise centre, or to sell a service if the owner runs a co-location, hosting or cloud environment. Owners are well versed and increasingly focused on ensuring the infrastructure has no limiting designs, and they understand that there will always be new technologies. Therefore, requirements are put forth to address migration strategies.

The future standard for high speed: IEEE 802.3ba The new IEEE standard was ratified in the summer of 2010 and addresses requirements for 40/100G using parallel optics. Few 40G or 100G links are in use today, although equipment manufacturers such as Cisco, Brocade, Juniper and Extreme have products available. The majority of applications run at 10G network speeds connecting the EDA to the HDA, with 40G or 100G from the HDA to the MDA core/aggregation. Research by Infonetics in April 2011 shows that by 2015, 100G will account for 25% of network ports, surpassing 40G. The move to 40G or 100G in the infrastructure is easy, with almost no downtime. The key for that to happen is to have the proper DC design from day one. Full utilisation of a fibre solution must be planned to enable smooth migration to 40G or 100G in the future, which makes the eventual move to higher speeds a purely electronics decision for DC owners. A 12-fibre MTP-to-MTP link, which comes with higher performance and a low-loss MPO, can provide up to six duplex 10Gb/s


transmit and receive channels. Migrating to 40GBASE-SR4 will require a minimum of eight OM3 or OM4 fibres: four to transmit and four to receive. However, migrating to 100GBASE-SR10 will require a minimum of 20 OM3 or OM4 fibres: 10 to transmit and 10 to receive.
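The fibre counts quoted here follow directly from each standard's lane structure; a minimal sketch:

```python
# Fibre requirements per link, from the lane counts in the text:
# 10GBASE-SR uses one lane each way, 40GBASE-SR4 four lanes each
# way, and 100GBASE-SR10 ten lanes each way.

LANES = {"10GBASE-SR": 1, "40GBASE-SR4": 4, "100GBASE-SR10": 10}

def fibres_required(standard: str) -> int:
    """Total fibres: one transmit plus one receive fibre per lane."""
    return 2 * LANES[standard]

def duplex_10g_channels(trunk_fibres: int) -> int:
    """10G duplex channels a trunk can carry (two fibres per channel)."""
    return trunk_fibres // 2

print(duplex_10g_channels(12))           # 6 channels on a 12-fibre MTP trunk
print(fibres_required("40GBASE-SR4"))    # 8
print(fibres_required("100GBASE-SR10"))  # 20
```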

The migration strategy Will much of the industry skip 40G and go directly to 100G? I think 40G will have a lifecycle, but I do think it will be brief, simply as a reflection of overall product lifecycles and the rapid technical advances we have seen over the past five years. To understand this, we look back to the advent of 10G networks. For a number of years, everyone spoke of 10G network speeds. We simply did not see this actually take place until recently, and even now the majority of server I/O operates at below 10G speeds. In my opinion, the main reason for the delay in implementing 10G was not a lack of appetite on the client side, but mainly the lack of available products. Couple this with the lack of installed infrastructure that could actually reliably provide 10G performance, and we see that it was not the clients who kept this technology from being deployed; it was really an industry issue. Simply put, in today’s DC we see 10G as the prevalent maximum server I/O (network) speed. These I/O ports are cabled to

some form of distributed aggregation switch, such as a ToR environment. The need for larger pipes and higher speeds occurs between these aggregation points (switches) and the associated aggregation or core switches. In today’s DC this larger pipe takes the form of 40G parallel-optics connectivity. From the core environment to the outside world is where we see the current use of 100G connectivity, such as ISP networks and campus environments. The 40G SR4 technology uses a total of eight fibres per channel. 100G SR10 technology uses a total of 20 fibres. This fact alone may be a significant barrier to 100G implementation between layer 2 and aggregation/core switches: the cost of the infrastructure would nearly double from a cable perspective. The need is not currently here for 100G transmission in this segment, and the equipment (switch) manufacturers do not have the required array of products available at the current time. From a migration aspect, there is currently a working session within IEEE on the emerging “4x25” standard. This proposed standard uses eight strands for 100G capability (25G lanes) and will be able to be implemented on both MM and SM fibre. Within the MM arena, the latest news is that distance will be reduced to 20 metres for OM3 and 100 metres for OM4. I believe this is why the new TIA-942-A standard has solidified its position of OM3 as a minimum and OM4 as the recommended medium for DC applications. So today, if a client is refreshing or building a new DC, they will more than likely utilise active switching. This will require 40G connectivity between layer 2 and aggregation/core switches. They have the medium and the ratified standard that can be implemented (OM3 or OM4), the active switch gear that can exploit this medium and operate at 40G, and the actual need for this bandwidth due to aggregation strategies.
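The appeal of the proposed 4x25 approach is simple arithmetic: four 25G lanes reach the same 100G aggregate as SR10's ten 10G lanes, while needing only the eight strands of 40GBASE-SR4. A sketch:

```python
# Aggregate rate and strand count per parallel-optics scheme,
# following the lane structures described in the text.

def link_rate_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate link rate from lane count and per-lane rate."""
    return lanes * lane_rate_gbps

def strands(lanes: int) -> int:
    """Total strands: one transmit plus one receive strand per lane."""
    return 2 * lanes

print(link_rate_gbps(4, 10), strands(4))    # 40 Gb/s over 8 strands (SR4)
print(link_rate_gbps(10, 10), strands(10))  # 100 Gb/s over 20 strands (SR10)
print(link_rate_gbps(4, 25), strands(4))    # 100 Gb/s over 8 strands (4x25)
```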
Looking ahead, IEEE should have the 4x25 standard ratified over the next two years or so, which will enable this same client to upgrade or refresh the active switches, which will then be capable of implementing the new 4x25 technology for 100G I/O.

Opinion: Fluke Networks

Overshadowed by hype A network’s physical layer may not be a hyped subject, but those who ignore its evolution will suffer, says Jason Wilbur, VP and GM, Datacom Installer Business Unit, Fluke Networks.


Today’s IT discussions are filled with terms like cloud, virtualisation, SANs, BYOD, SaaS and SLAs. Rarely is the physical layer — Layer 1 of the seven-layer OSI model — part of the buzz. But at the end of the day, all network technologies lead back to that critical, foundational layer, and the cabling infrastructure that supports it. If it doesn’t work, nothing works. Like the technologies around it, infrastructure is changing. Consultants and network owners that do not embrace this change, and address the mounting complexities of installation and certification, will struggle for profitability and their very survival as a business. Data centres, and the networks that fan off from them, settled into a fairly archetypal design right around the year 2000, and haven’t altered dramatically since then. The number one challenge for cabling contractors is speed of certification, but the networking industry has been in stasis for the past decade due to the effectiveness of 1G copper connections. These cables were common, inexpensive, fast enough, and relatively straightforward to install and repeatedly test. But that era is coming to a close as we move from 1G copper to 10G copper, and to 40G and even 100G fibre. As more data travels over each connection, each cable is that much more critical. Unfortunately, this evolution is complicated by a variety of issues and standards, and, simultaneously, the people who are responsible for deploying and maintaining this infrastructure —



the cable installers, project managers, network administrators, etc. — are wrestling with limited resources. Not only is complexity increasing, but the volume of cable installation and certification remains high. According to surveys done by Fluke Networks, nearly 93 percent of contractors expect to certify the same (59 percent) or a higher (34 percent) volume of links next year. Testing and certification are key requirements for these installations, and not just for the obvious need to make sure that everything works. Certification reports are required to obtain system acceptance, which in turn leads to payment and compliance with manufacturers’ warranties. Yet, because of the volume of work and the scarcity of resources, roaming install-and-test teams and separate service tiers are common. Almost 90 percent of problem links are typically fixed individually and immediately, meaning that if a tool or expertise is not available, work stalls. Adding to the complexity, installations are not problem-free. In recent Fluke Networks customer surveys, 91 percent of US, 90 percent of Asian and 97 percent of European installers reported at least one preventable problem that they had to deal with in the past 30 days. More than half of the respondents from Europe reported six or more problems. More than 50 percent of US respondents actually reported seven or more preventable problems. In Asia, that climbs to 10 or more problems. Most often, these issues can be traced back to errors in process.

Those problems add up, according to the research. In total, per 1,000 links installed, 45 hours (US), 61 hours (Asia) and 26 hours (Europe) are spent resolving mistakes during cable infrastructure installation and testing. In simple terms, the mistakes, complexity and rework can add between a week and a week-and-a-half of labour to a typical 1,000-link project. Right now, the industry is awash in “multiples” — multiple cables, multiple standards, multiple teams, multiple tools, multiple projects, multiple test regimes, multiple skill levels and more. That puts two opposing forces — increasing complexity and thinly stretched expertise — on a collision course that affects the fundamental connectivity of technology. The implication is that if something doesn’t change, then some other factor has to give. Clearly what’s needed is better efficiency and agility, and that means tools that can assume a larger role in the installation process, thereby delivering a greater impact to the business. As much as testing and troubleshooting are the core of certification, there is an even greater opportunity to wring time, cost, complexity and errors out of the rest of the process. Cabling infrastructure is evolving, fast. Everything else must evolve with it.
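Converted at a standard 40-hour work week (an assumption, not stated in the survey), the regional figures line up with the week-to-week-and-a-half estimate:

```python
# Rework overhead per 1,000 installed links, using the survey
# figures quoted in the text; the 40-hour week is an assumption.

REWORK_HOURS_PER_1000_LINKS = {"US": 45, "Asia": 61, "Europe": 26}
HOURS_PER_WEEK = 40

def rework_weeks(region: str) -> float:
    """Rework expressed in work weeks per 1,000 links."""
    return REWORK_HOURS_PER_1000_LINKS[region] / HOURS_PER_WEEK

for region in REWORK_HOURS_PER_1000_LINKS:
    print(region, rework_weeks(region))
# US ~1.1 weeks, Asia ~1.5 weeks, Europe ~0.7 weeks
```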

Elevating the enclosure to an art form.

High density, customizable, and easy to use. Now that’s smart. Opt-X® fiber enclosures offer the perfect combination of sophistication and simplicity, whether it’s the Opt-X Ultra® for data centers or the new Opt-X® High Density Enclosure System, which maximizes port density within the same Opt-X footprint with Opt-X Ultra® HD and Opt-X 1000i HD Enclosures. The new enclosures feature compact Opt-X Evolve cassettes and adapter plates that provide up to 50% more density.

THE FUTURE IS ON | T: +971 4 886 4722 | E: LMEINFO@LEVITON.COM ISO 9001:2000 registered quality manufacturer | © 2013 Leviton Manufacturing Co., Inc. All rights reserved.

Does your fibre system tick all the boxes?

LANmark-OF : Competitive Fibre Optic Solutions 40G


• Micro-Bundle cables save up to 50% trunk space
• Slimflex cords offer 7.5mm bend radius, saving 30% space in patching areas
• Pre-terminated assemblies reduce installation time
• MPO connectivity enables cost-efficient migration to 40/100G

www.nexans.com

LANmark-OF brings the best fibre technologies together to ensure maximum reliability and lowest operational cost.


Accelerate business at the speed of light

info.ncs@nexans.com

Global expert in cables and cabling systems

Cabling Planner (June 2013)  