
MOBILE WORLD CONGRESS 2017

The Mobile Network //

ALSO FEATURING Making sense of the world’s mobile networks

PART 2



Future of networks


EDGE ARCHITECTURE: The latest action




The inside history of Mobile World Congress







ISSUE 17 // MWC 2017 ISSUE: PART 2 ////////////////////




We look at further developments happening at the edge, and near-edge, of the mobile network.

The automation of network operations and processes has already begun, but where is it heading?

Orange’s double-pronged attack on the IoT market explained, plus a key LoRa player interviewed.



IoT Network


MWC History From a slow tour of European cities, via the French Riviera, MWC has come a long way in 30 years.


Networked World: Vehicles What is cellular’s role in the development of the connected and assisted car?


Anatomy of a Mobile Operator: Verizon Can Verizon stay ahead of the network pack?



Country Profile: Japan Japan’s operators are at the forefront of pushing 5G trials and specifications.


Emergency Comms Are LTE, and public LTE networks, a good match for emergency services networks?

















It has been thirty years since the formation of GSM, and the holding of an event called the Pan European Digital Cellular Conference. That 1987 meeting of just 150 or so people turned into GSM World Congress, and from there into 3GSM World Congress and then Mobile World Congress.

NFV assurance

Open Source NFVi

C-RAN fronthaul testing

Deploying the Mobile Edge

Automated test

Massive MIMO modelling



Impress your friends with knowledge of these wireless network up-and-comers.

There’s still no one agreed way for operators to deploy VoLTE roaming.

Market Tracker


In 2017, over 100,000 attendees of the current version of the event will meet at a critical time for mobile network development. There is a lot up for grabs. Will the industry deliver on its promise of transforming business models via 5G and technical transformation? Can it find the technology enablers to do that? Or, in the haste to be the first to 5G, will we see the industry mimic the business models it built around LTE — only with higher speeds — and sacrifice or delay the greater prize?

The answers to these questions will not be found in many of the keynotes at MWC, where most speeches do not extend beyond the platitudinous. There will be slightly more enlightenment in some of the smaller sessions, and a whole lot more on the booths and stands of the technology companies tasked with guiding this transformation.

Here, the most important question to ask is “why”. Why should we care about this virtualised slicing demo, or that Gbps LTE-A Pro demo, or this NFV orchestration Proof of Concept? What, in the end, is this piece of technology enabling? Will it be more of the same, but slightly better, or something genuinely different that will drive the industry through the next 30 years?

Commercial Director: Shahid Ramzan // Editorial Director: Keith Dyer // Creative Direction and Design: Francesca Tortora //


Keith Dyer

© 2017 TMN Communications Ltd.




The largest US operator has always competed on a basis of “network first”, leveraging its spectrum position to be the first to launch LTE and VoLTE across the country. With its leadership in LTE being eroded, the carrier is turning its attention to virtualisation, the IoT and 5G.


Verizon has never been shy of investing in its network, and it has been even less shy of telling the world about those investments. It spent $5 billion during the first six months of 2016 alone, bringing its running total since 2000 to $111 billion. It continues to push into LTE-A technologies such as carrier aggregation and network densification, as well as extending its efforts in NFV and SDN-enabled services. It has also taken a prominent role in driving early 5G trials in the US.


Verizon’s focus on LTE, driven initially by the need to consolidate its former CDMA-based 3G networks, has resulted in more 4G LTE coverage than any other US operator. Verizon has 2.31 million square miles of LTE coverage, reaching more than 314 million people in the US. The operator claims that AT&T has nearly 466,000 square miles less LTE coverage than Verizon, T-Mobile nearly 776,000 square miles less, and Sprint nearly 1.7 million square miles less. Verizon installed more than 1,500 new cell sites and added XLTE technology to more than 1,700 sites across the country from January to July, rolling out three-channel carrier aggregation to bring 50 percent faster peak wireless data speeds to more than 288 million people in 461 cities from coast to coast.
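Taken at face value, the coverage-gap claims above reduce to simple arithmetic. A quick sketch, using only the figures quoted in this article (the derived competitor footprints are implied by Verizon’s claims, not independently reported):

```python
# Illustrative arithmetic only: derives competitor LTE footprints from the
# coverage-gap claims quoted above (figures are Verizon's, in square miles).
VERIZON_SQ_MI = 2_310_000  # "2.31 million square miles of LTE coverage"

claimed_gap = {        # "nearly X square miles less coverage than Verizon"
    "AT&T": 466_000,
    "T-Mobile": 776_000,
    "Sprint": 1_700_000,
}

for carrier, gap in claimed_gap.items():
    implied = VERIZON_SQ_MI - gap
    print(f"{carrier}: ~{implied / 1e6:.2f}M sq mi implied LTE footprint")
```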

Recently, however, this dominance has been challenged by intense investment from T-Mobile, in particular, and AT&T. It may be a while, though, before this shows up in official benchmarking tests. According to RootMetrics, Verizon won or tied for first place in 48 of 50 states for overall network performance in 2016’s State RootScore reports. In the six network award categories (overall, reliability, speed, data, call and text performance) Verizon’s network clearly led, winning or sharing first place in 274 of 300 state-level awards.

The company is adding further capabilities indoors by working with the likes of SpiderCloud Wireless and Cisco to extend coverage in large buildings. It will also aggregate unlicensed spectrum: Verizon Wireless said it will test LTE in the 5GHz band using the indoor small cell solution developed by SpiderCloud Wireless. The Silicon Valley company is developing a new radio for LTE-U, and trials are expected to start in the third quarter of this year. SpiderCloud said its LTE-U solution will be commercially available starting in early 2017, and CEO Mike Gallagher said he is highly confident that the Federal Communications Commission will not create regulatory hurdles for the technology. As well as the SpiderCloud option, Verizon’s 4G LTE Network Extender for Enterprise uses Samsung’s small cell technology to deliver a 4G LTE solution for mid-sized locations (10,000 to 100,000 square feet). A single unit supports approximately 42 concurrent users and covers about 31,500 square feet.



Closes merger with MCI in deal valued at $8.5 billion.


Acquires Rural Cellular for $2.7 billion.


Acquires Alltel for $5.9 billion, making it the largest provider in the USA by subscriber numbers.



PERSONNEL

RONAN DUNNE: Executive Vice President and Group President, Verizon Wireless

Buys Advanced Wireless Services (AWS) licenses from SpectrumCo — a joint venture of Comcast, Time Warner Cable and Bright House Networks — and from Cox TMI Wireless.
February: Acquires Hughes Telematics.

Verizon uses different types of drones for network performance testing.

2014 February: Acquires Vodafone’s 45% share in Verizon Wireless.

June: Announces intention to acquire AOL.
February: Adds XO Communications’ fibre networks.
March: Verizon and Hearst form a joint venture, Verizon Hearst Media Partners, to develop digital video programming.
July: Announces that it will acquire Yahoo’s operating business for approximately $4.83 billion in cash. The deal is still pending and, some say, cast into doubt by news of Yahoo’s data breaches.
September: Acquires Sensity Systems, adding smart city solutions to its ThingSpace Internet of Things platform.
November: Acquires Fleetmatics, a leading global provider of fleet and mobile workforce management solutions, for $2.4 billion.



TAMI ERWIN: Executive Vice President, Wireless Operations
ROGER GURNANI: Chief Information and Technology Architect. His role includes network and technology planning, development of architecture and roadmaps, continued evolution of digital platforms, and oversight and direction for the CIO and CTO teams
MARNI M. WALDEN: Executive Vice President and President of Product Innovation and New Businesses
BRIAN MECUM: Vice President, Network, Verizon Wireless
ADAM KOEPPE: Vice President of Network Planning
SHAWN HAKL: Vice President of Networking & Innovation
ED CHAN: Senior Vice President, Technology Strategy & Planning
GAGAN PURANIK: Director, Network Strategic Sourcing, Verizon




114.25 million


92.5 million



Verizon has often taken its own initiative on 5G, for example forming its own 5G Technology Forum, within which partners are working on testing the characteristics of 5G technology and deploying trial field networks. Verizon’s 5G Technology Forum partners come from all areas of the telecommunications industry.

The operator is also in an alliance with other operators to accelerate 5G deployment. KT, NTT DOCOMO, SK Telecom and Verizon recently signed a Memorandum of Understanding (MoU) to align their 5G technical specifications on a global basis, forming a new global initiative called the 5G Open Trial Specification Alliance. The alliance plans to develop an aligned 5G trial specification that would serve as a common platform for different 5G trials around the world. It has attracted some criticism for introducing a risk of a split in standards, and of forking globally aligned efforts within 3GPP. The 5G Open Trial Specification Alliance will focus on 5G radio interface trial activities and aims to give the wireless industry the ability to test and validate key technical components. Coordination is already underway, with technical trials occurring in the 2016-2018 timeframe.

The first fruits are emerging. Verizon said it has completed its first 5G radio specifications, as it progresses a “common and extendable platform for Verizon’s 28/39 GHz fixed wireless access trials and deployments”. The carrier said it is currently in pre-commercial trials for the “fibre via wireless” application of “5G”, whereby it seeks to use “5G” technologies to provide very high throughput fixed wireless access to buildings and businesses. Verizon added that it will have commercial deployment of this use case in 2017.


“The completion of the 5G radio specification is a key milestone toward the development of a complete 5G specification,” said Adam Koeppe, Vice President of Network Technology Planning, who is leading the 5G trial efforts. “The level of collaboration that we are seeing exceeds what we saw during 4G. This agile way of developing the specification and working with the ecosystem will enable us to get to market rapidly.”

Verizon said it has validated a range of 5G technology enablers, such as wide bandwidth operation of several hundred MHz, multiple antenna array processing, and carrier aggregation capabilities that are substantially different from 4G. It has tested propagation and penetration of millimeter wave systems in the field, studying line-of-sight and non-line-of-sight performance, and propagation modeling using barriers such as structures and foliage, all based on real-world fixed wireless applications. The 5G technical specifications outline a 5G radio interface composed of Layers 1, 2 and 3, and define the interfaces between the User Equipment (UE) and the network.



$4.4 billion AMOUNT PAID FOR AOL


$89.2 billion



$243 million 4Q 2016 IoT REVENUES

$11.25 billion WIRELESS CAPEX 2016








CLOUD

An early launch as part of its SDN and NFV strategy saw Verizon Enterprise Solutions introduce Virtual Network Services in 2016. The service enables enterprises to transition to a virtual infrastructure model, providing greater agility and on-demand resources. Verizon offers clients three models for deploying virtualised services: premises-based universal customer premises equipment (CPE), cloud-based virtual CPE services, and hybrid services in which clients can mix premises-based and cloud-based deployment models to meet their individual business and technical requirements. Verizon’s initial Virtual Network Service packages are Security, WAN Optimisation and SD-WAN services.

With the introduction of cloud-based services, Verizon is being aggressive in demanding that vendors adapt to a new way of pricing and delivering technology. Shawn Hakl (see personnel panel, page 7) outlined an NFV-SDN shopping list that included the demand that vendors first of all accept that x86-based architectures do work and do scale.

“Don’t claim it doesn’t — it works, deal with it.” Vendors must also deal with interoperability amongst themselves: “Sort this stuff out before showing up at my door,” he said. And if a vendor claims to support something, say the TOSCA orchestration language, it should support it natively, not ask for a translator. Hakl said: “It’s like if I was asked to speak at this conference in Swahili and said, ‘Sure I speak Swahili, as long as you pay for an English to Swahili translator’. So don’t say you support it and then expect me to pay for the translation. Or if you do, be prepared for me to own the IP and for your deployment to be non-replicable. Sorry — if I sound like I am on a diatribe then that’s because I am. It’s the number one area I don’t feel like paying for, and yet people keep coming to me and asking me to do so.” He added that vendors must be prepared to break software licenses into functions that customers want to buy, and that “can be integrated with other people’s stuff and into Pay As You Go usage based models”.



ENABLING MONITORING IN EMERGING HYBRID NETWORKS

In virtual networks, there is the same need to obtain deep network and customer insights as there is in physical networks.

Network Functions Virtualisation (NFV) has moved beyond theory and is now being deployed in earnest by every major operator globally. Some of these initiatives reside within dedicated transformation projects, such as Vodafone’s OCEAN programme, AT&T’s Domain 2.0 and ECOMP initiatives, and Telefonica’s UNICA project. Other operators are already deploying enterprise and consumer services over virtualised elements, including NTT DOCOMO, SK Telecom, Verizon, Deutsche Telekom and Orange. Solution vendors and software developers are also well advanced in delivering Virtual Network Functions (VNFs) and NFV infrastructure (NFVi). Typically, we expect that the first step towards fully virtualised networks will be the deployment of virtual EPC and IMS elements, serving use cases such as vCPE, enterprise VPNs and VoLTE. Virtualised Radio Access Network (RAN) will take a little longer to reach the network, and vendors and operators are still working on defining exactly where in the RAN technology stack the physical-virtual split is best achieved.

“Operators are making it clear that any technical solution from this point onward must be designed in the light of a requirement to work within a virtualised environment.”

All this is evidence that the migration efforts being undertaken by the industry, across the vendor and operator community, are moving forward at pace. Where there is still much work to do is in defining exactly how NFV elements and VNFs will be operated and managed. Development has moved to exploring how VNFs from different vendors can be on-boarded onto the network, to ensuring inter-vendor interoperability, and to defining how VNFs can operate in a truly cloud-native manner. Over the past year this has led to the adoption of Open Source development of the software that will be used to orchestrate and manage NFV elements. As if to illustrate the pace of progress, in early 2017 ETSI held an NFV interoperability Plugtest in Spain, with 29 remote labs connected to verify interconnection between 20 VNFs, 10 management and orchestration solutions and 10 NFV platforms. The impact this development has had within operators has been profound. Across their technical landscape and their operating infrastructure, operators are making it clear that any technical solution from this point onward must be designed to work within a virtualised environment. That said, the process of transitioning from physical to virtual infrastructure will necessarily be gradual, given that


a ‘rip and replace’ strategy is not a viable option for most CSPs. It is clear therefore that, although the degree of virtualisation will increase over time, most networks will be hybrids, in that they will contain both physical and virtual elements. This is a situation that will last for at least another ten years, according to analysts.

MONITORING EMERGING HYBRID NETWORKS

One key question surrounding NFV adoption is how operators will monitor, manage and provide service assurance within networks that are hybrids of legacy dedicated physical elements and virtualised ones. Operators require vendors and suppliers to align with their virtualisation programmes, but they must also ensure the operational fidelity of legacy elements. Importantly, the operational performance monitoring, service quality monitoring and customer experience outputs that vendors promise remain critical throughout this migration period. Operators must be able to provide exceptional customer experience, derive KPI analytics no matter the architecture that is deployed, and be able to consolidate a customer view across different network and service domains, including between legacy and NFV elements.

“Operators must be able to provide exceptional customer experience, and derive KPI analytics, no matter the architecture that is deployed.”

This creates a number of possibilities in terms of managing this environment. There could be physical components monitored by classical physical equipment or by virtualised monitoring solutions, which can be deployed directly into the operator’s data centre or cloud. At the same time, virtual components could be connected in some way to physical monitoring elements and, of course, there could also be fully virtualised components monitored with fully virtualised network probes. Whatever the environment, what used to be physical interfaces will in NFV become virtual interfaces, but the interfaces themselves are still the same as specified by 3GPP and other Standards Development Organisations. That means that the monitoring capability across those interfaces needs to remain the same. Polystar sees that this creates another key consideration for those working to advance NFV: it is vital that vendors and operators collaborate to define how they will monitor and manage customer experience as they enter uncharted territory. At the moment, operators are trialling and testing how this can

best be done, with the support of Polystar’s experience in network probe and analytics platforms. There is still no clear or agreed view on how monitoring and assurance will be defined within virtualised and hybrid physical-virtual networks. What we do know is that monitoring of NFV and hybrid networks will be vital, as there will always be a fundamental need to optimise and personalise the user experience based on a refined and granular view of network and subscriber data across all domains and infrastructure. Polystar’s knowledge base, expertise and field experience make us the ideal partner for operators exploring this new paradigm. Polystar is working closely with operators and technology partners now in order to gain the experience and knowledge required to take on the challenge of providing vendor-independent monitoring capability in emerging hybrid networks. It is a key challenge for the future evolution of networks. Why not join us in meeting this challenge?

To join us in this work, talk to us at MWC, or get in touch to find out more:

MWC 2017 | HALL 6, STAND 6G31
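The monitoring argument above, that 3GPP reference points stay the same whether an interface is physical or virtual, can be sketched in a few lines. This is an illustrative toy, not Polystar’s product: the record format, field names and KPI choices are all assumptions.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class ProbeRecord:
    interface: str   # 3GPP reference point, e.g. "S1-MME" or "Gx"
    source: str      # "physical" tap or "virtual" probe (hypothetical tag)
    success: bool
    latency_ms: float

def derive_kpis(records):
    """Aggregate per-interface KPIs. The logic is identical regardless of
    whether records came from a physical tap or a virtualised probe."""
    kpis = {}
    for iface in {r.interface for r in records}:
        rs = [r for r in records if r.interface == iface]
        kpis[iface] = {
            "success_rate": sum(r.success for r in rs) / len(rs),
            "median_latency_ms": median(r.latency_ms for r in rs),
        }
    return kpis

# Records from a mix of physical and virtual probes feed one pipeline.
records = [
    ProbeRecord("S1-MME", "physical", True, 12.0),
    ProbeRecord("S1-MME", "virtual", False, 30.0),
    ProbeRecord("Gx", "virtual", True, 8.0),
]
print(derive_kpis(records))
```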



In Part II of this two-part look at the evolution of edge computing in the mobile network, TMN looks at recent developments within operators and vendors to drive adoption.

It is apparent that MEC is very much in its infancy in terms of operator deployments. TMN’s survey of vendors and operators found many trials and Proofs of Concept, but few live deployments to speak of. An exception is in the area of traffic optimisation and intelligence. Here, one of the earlier companies to identify the edge of the network as the place where intelligent traffic optimisation decisions can be made, Vasona Networks, has been making strides. John Reister, VP Marketing and Product, says that Vasona’s edge app controller, SmartAir, is now deployed on over 100,000 cell sectors in the Americas and Europe. But it is the applications emerging from Vasona’s presence near the edge that are engaging Reister the most. “I think in terms of what’s emerging — I’m really excited on the app side around mobile throughput guidance. With Smart Guided Video Rate we are now starting to get traction with an IETF


draft and the GSMA is in the process of producing an informational package and description for operators to start implementing the technology.” Vasona says the technology achieved a 20% reduction in “stall time” over a trial covering 200,000 video sessions.

Vasona differs from some companies in that it doesn’t sit at the extreme edge of the network, but a level back, giving it an aggregated view of clusters of cell sites. It can then make decisions based on a view of hundreds of sites, giving it the ability to understand the conditions of the cell a user is handing over into, as well as of the current cell. In fact, one frustration for Vasona to date has been the assumption that, because its technology is labelled MEC, it must be a base station technology. “We had an operator say, ‘We love what you are doing but we are not a fan of MEC because we don’t believe in compute resources at every eNodeB — as we have tens of thousands of sites it would be so expensive’. It’s so frustrating that there’s this misperception that MEC means you have to deploy all the MEC platform and infrastructure at every single eNodeB. Where we deploy, generally one installation covers in the order of 300-500 eNodeBs and 3-8,000 cell sector carriers. That’s much more cost effective and scales better for the operator.”
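The value of that aggregated vantage point can be illustrated with a toy version of mobile throughput guidance: choose a video rate from the load of both the serving cell and the handover target cell. The function, thresholds and rates below are hypothetical; they are not the SmartAir algorithm or the IETF draft mechanism.

```python
def guided_video_rate_kbps(serving_load, target_load, rates=(500, 1500, 4000)):
    """Toy throughput guidance: pick the highest sustainable video rate
    given the utilisation (0.0-1.0) of the serving cell AND the handover
    target cell, which only an aggregated near-edge view can see."""
    worst = max(serving_load, target_load)  # plan for the tighter cell
    if worst > 0.85:
        return rates[0]   # heavily loaded: lowest rate, avoid mid-session stalls
    if worst > 0.6:
        return rates[1]
    return rates[2]

# A user in a lightly loaded cell, but handing into a congested one,
# gets a conservative rate before the stall would occur.
print(guided_video_rate_kbps(serving_load=0.3, target_load=0.9))  # -> 500
```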


Another frustration for Reister is the focus there has been to date on the virtualised infrastructure and the MEC platform, rather than on the applications that run on it. “I do think people focus too much on the platform and not enough on the applications. The platform is important for a degree of consistency but it is the apps that drive the business case and the deployment. Whereas NFV is basically about reducing the cost of assets and infrastructure, moving compute out towards the RAN requires new investment so it has to be an upside business case.”

One other company that has been at the forefront of the “apps first” approach to MEC is Nokia. Right since its Liquid Apps days and the launch of its server-on-a-base-station technology five years ago, Nokia has pushed the localised nature of MEC with a series of apps aimed at road traffic, local marketing and enterprise services. That work continues, however, and is in many cases still at the proof of concept or field trial stage. For example, in 2017 Vodafone Hutchison Australia (VHA) and Nokia aim to carry out trials of a proof of concept to demonstrate how mobile edge computing powered by 4G networks can improve public safety. The trial will look at how video analytics can be used to process data feeds from video cameras, connected over a 4G network in real time. VHA’s General Manager for Technology Strategy, Easwaren Siva, said: “What we are seeing is innovations such as NFV, deployment of mass sensors from the IoT, low latency 4G and 5G networks via near edge computing, and video analytics to deliver very applicable public safety solutions.”

Earlier, in 2016, Nokia and EE demonstrated the use of MEC for managing crowd safety and reporting incidents at major events. On-site security personnel wore LTE-connected bodycams, generating multiple video streams back to a central control room. Ultra-low latency enabled the control room staff to provide immediate and complete situational awareness.

Also in 2016, China Mobile carried out a large scale implementation of MEC coupled with Nokia’s Edge Video Orchestration (EVO) application. This solution allowed spectators to access multiple real-time high-definition video feeds on their mobile devices with close to zero latency, using an ultra-dense heterogeneous network (HetNet) of LTE/Wi-Fi capable small cells.

The Finnish vendor has also opened up an App Factory to drive enterprise uptake of low latency apps enabled by MEC. Nokia itself has three enterprise-specific MEC applications:

• Object tracking, to allow the tracking of assets and personnel to centimeter-level accuracy.
• Video surveillance, extended from the operations room to mobile devices, allowing security personnel to access any feed reliably at any time, wherever they are.
• Video analytics, using Mobile Edge Computing to analyse data feeds from security cameras, alerting staff to investigate irregular activity immediately.

Nokia will also provide an AppFactory environment for the creation of applications to meet the specific needs of enterprises, and will support the integration of existing enterprise applications into the MEC environment.

NOKIA’S MEC ARCHITECTURE, CONNECTING LIVE VIDEO FEEDS WITH USERS VIA AN MEC SERVER AND SMALL CELL CLUSTER CONTROLLER.

Another MEC pioneer, Saguna Networks, has taken its development a stage further by collaborating with Wind River to validate its Open-RAN MEC solution on Wind River’s Titanium Server. The aim is to make integration of Saguna’s MEC platform easier by providing it as pre-tested and validated with Wind River’s NFV infrastructure. Saguna sees this move as a sign that momentum is growing in MEC. “In the past year, Mobile Edge Computing has been picking up momentum in trials and proof-of-concepts worldwide. Saguna is dedicated to accelerating adoption and mass-market deployment of MEC by helping to build an extensive ecosystem of pre-integrated MEC solutions,” says Randy Cook, VP Sales and Business Development. “Our work with Wind River brings together the MEC platform and underlying NFV infrastructure needed for large-scale MEC projects.”

China Mobile has also been working with Ericsson on edge-centric technology. Although Ericsson has never used the term MEC for its edge-based approach, previously stating that it does not see value in at-the-base-station computing, its work with China Mobile on drones is MEC in all but name. In the trial, held in Wuxi in China’s Jiangsu province, a drone was flown using the operator’s cellular network with 5G-enabled technologies and with handovers across multiple sites. In order to demonstrate the concept’s validity in a real-world setting, the handovers were performed between sites that were simultaneously in use by commercial mobile phone users. The two companies have been collaborating in the China National Key 5G Project since the beginning of 2016, focusing on user-centric 5G network architecture evolution. One of the project’s aims is to optimise latency for mission-critical use cases, by dynamically deploying part of a network through distributed cloud close to the radio edge. The drone trial is therefore a step toward 5G networks in which part of a network can be distributed and dynamically deployed at the cellular edge in order to reduce end-to-end latency, and to serve a range of 5G use cases at the same time.

MEASURING MEC METRICS — REDUCING ROUND TRIP TIMES

Staying in China, Artesyn Embedded Technologies’ MaxCore platform has been selected by China Unicom Network Technology Research Institute and Baicells to demonstrate a new mobile edge computing (MEC) virtual reality (VR) live video solution using drone technology for 5G networks. China Unicom and Baicells’ joint research and development solution fuses together a panoramic video collage algorithm, a panoramic video transmission protocol, MEC architecture, and an LTE/5G data channel quality of service (QoS) guarantee mechanism. The panoramic video collage algorithm and transmission protocol ensure the VR video is seamless, while the MEC architecture brings the processing closer to the user for low latency, with the aim of achieving interference-free, high-speed transmission of the video data. The unmanned aerial vehicle (UAV), or drone, in the demonstration features 360 degree panoramic high-definition cameras. The user can enter the panoramic video and manipulate it to achieve an unprecedented immersive live VR experience. Enabling the low transmission latency and seamless panoramic live HD VR broadcast is an MEC architecture gateway powered by Artesyn’s MaxCore acceleration platform. The solution could be applied not only to concerts and sporting events but also to public safety, emergency communication and drone inspection.

One of the central proving grounds for MEC applications has been ETSI, whose MEC ISG has been formulating specifications for the technology. The standards organisation has acted as an umbrella group for a series of demonstrations of MEC applications, running a showcase in late 2016 of a range of Proofs of Concept. These included:

• RAVEN, Radio Aware Video Optimisation in a Fully Virtualised Network (Telecom Italia • Intel UK Corporation • Eurecom • Politecnico di Torino)
• FLIPS, Flexible IP-based Services (InterDigital • Bristol is Open • Intracom • CVTC • Essex University)
• Enterprise Services (Saguna • ADVA Optical Networking • Bezeq International)
• Healthcare, Dynamic Hospital User, IoT and Alert Status Management (Quortus Ltd • Argela • Turk Telekom)
• Multi-Service MEC Platform for Advanced Service Delivery (Brocade • Gigaspaces • Advantech • Saguna • Vasona • Vodafone)
• Video Analytics (Nokia • SeeTec • Vodafone Hutchison Australia)

Certainly, ETSI hopes that such showcases will continue to drive investment in applications within the MEC environment, whether for 4G or 5G. Nurit Sprecher, ETSI MEC ISG chairperson, says: “Our progress to date and the variety of proof-points is a testament to the reality of MEC. We see MEC as the enabler of ultra low latency that will be integral to 5G, but which is already available today using LTE and our MEC platform. MEC delivers capabilities that have never been seen before, such as the ability to process and analyse information at point-of-capture, at the very edge of the network, which increases awareness and accelerates reaction times.”

Another industry association addressing edge computing, albeit one defining a slightly different edge, is the OpenFog Consortium. Its OpenFog Reference Architecture, released in February 2017, introduces a universal technical framework designed to enable the data-intensive requirements of 5G, IoT and AI. The Consortium was founded in 2016 to accelerate adoption of fog computing through an open, interoperable architecture. Helder Antunes, chairman of the OpenFog Consortium, says: “While fog computing is starting to be rolled out in smart cities, connected cars, drones and more, it needs a common, interoperable platform to turbocharge the tremendous opportunity in digital transformation. The new OpenFog Reference Architecture is an important giant step in that direction.”

The deployment of edge computing within the network is still in its infancy, but in traffic optimisation, live video analytics and other low latency applications, we are beginning to see where operators are placing their bets for this technology.

Can Multi-access Edge Computing be the driver for a new class of network-based applications? Join the conversation

#TMNtalkingpoint Contact:



Seven things I know about…


Rui Frazao, CTO, Vasona Networks, says multi-access edge computing (MEC) applications can open up a new way to prioritise network investment, improve customer experience and deliver more predictable and reliable network performance.

The edge doesn't have to mean right at the very edge

There is a perception that MEC necessitates deploying resources right out at the base station or even at small cell sites, but that is not the case. Instead, the edge can be a hop back in the network, at aggregation points that sit between the base station and the centralised core. With a near-edge location, operators avoid the costs and operational complexity of deploying and managing new hardware and software at every cell site, and the associated challenges of providing extra space and power. They also avoid incurring the latencies and increased costs of sending traffic back and forth from the base station to the core. Additionally, there are economies of scale and scope in having a unified view of clusters of sites: operators can manage user mobility and hundreds of sectors from one vantage point, making decisions in a coordinated manner and enriching services and applications by giving them insight into cell conditions, with tracking through cell handovers.
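The latency argument for a near-edge location can be illustrated with a rough round-trip budget. The figures below are assumed values for illustration only, not measurements from any operator's network:

```python
# Illustrative round-trip latency budget for serving a request from the
# centralised core versus a near-edge aggregation point.
# All figures are hypothetical assumptions.

RADIO_RTT_MS = 20.0           # device <-> base station (assumed LTE air interface)
EDGE_BACKHAUL_RTT_MS = 3.0    # base station <-> aggregation point (assumed)
CORE_BACKHAUL_RTT_MS = 30.0   # base station <-> centralised core (assumed)

core_path = RADIO_RTT_MS + CORE_BACKHAUL_RTT_MS
edge_path = RADIO_RTT_MS + EDGE_BACKHAUL_RTT_MS

print(f"Core-served request RTT: {core_path:.0f} ms")
print(f"Near-edge request RTT:   {edge_path:.0f} ms")
print(f"Saving per round trip:   {core_path - edge_path:.0f} ms")
```

Under these assumptions the near-edge path saves tens of milliseconds per round trip, which compounds over the many exchanges in an interactive session.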


The mobile edge should be about applications, not infrastructure

There has been a focus to date on the infrastructure and architecture of MEC elements, but the real business drivers are the applications that can be delivered from an MEC platform. Operators recognise that the apps and intelligence being built into the core of the network are not sufficient today. Besides needing lower latency, a multitude of apps also require better, more granular visibility into cell site radio network performance, not just a view of average performance across a whole network. Taking this application-first approach to the mobile edge will, in turn, give operators greater control of the customer experience.



MEC can help shift the focus to the customer

We need a greater focus on the use cases that are most impactful to customer experiences. MEC use cases must address a specific business need. MEC is about finding the right apps that are relevant to the customer experience and that can be best delivered from a compute site near the edge of the network. The edge platform brings the necessary flexibility to the network’s infrastructure to adjust to the required customer experience.

Operators can deliver next-gen benefits on existing networks

MEC can deliver lower system latencies. As lower latencies are a key requirement of 5G, there has been a tendency to equate the two and associate MEC with 5G. But operators don't have to wait for 5G to benefit from applications enabled by platforms at the mobile edge.

Delivering on QoE-focused KPIs with MEC is a breakthrough for planning

Throwing money into capacity for insatiable traffic demands is unsustainable. That's why new strategies focus investments where they achieve measurable customer impact. MEC solutions are a great example of this; a case in point is our SmartGVR, which creates improvements in customers' actual quality of experience. It shifts the conversation from low-layer RAN metrics to QoE-focused KPIs, like video stalls, start times and browsing latency. And operators can now think about how much QoE improvement they are delivering for each investment made. We call it getting to the optimal Quality on Investment.
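The QoE-focused KPIs mentioned here — video start time and stall behaviour — can be computed from per-session playback records. A minimal sketch; the record format is invented for illustration and is not Vasona's data model:

```python
# Toy computation of QoE KPIs from per-session video playback records:
# average start time, and stall ratio (stalled seconds / played seconds).
# The record fields are hypothetical.

def qoe_kpis(sessions):
    """Aggregate QoE KPIs across a list of playback session records."""
    total_start = sum(s["start_time_s"] for s in sessions)
    total_stall = sum(s["stall_s"] for s in sessions)
    total_play = sum(s["play_s"] for s in sessions)
    return {
        "avg_start_time_s": total_start / len(sessions),
        "stall_ratio": total_stall / total_play,
    }

sessions = [
    {"start_time_s": 1.2, "stall_s": 0.0, "play_s": 120.0},
    {"start_time_s": 3.4, "stall_s": 6.0, "play_s": 180.0},
]
kpis = qoe_kpis(sessions)
print(kpis)
```

Tracking such KPIs before and after an edge investment is one way to quantify the "Quality on Investment" idea in the text.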

The edge provides the foundation to help operators meet key challenges  today

For example, our Smart Guided Video Rate (SmartGVR) (that’s Vasona’s solution for the MEC use case called mobile throughput guidance) application is already delivering improvements in quality of experience and network efficiency for a large, Tier One mobile operator and major content provider. In this deployment, SmartGVR manages hundreds of thousands of video streams, achieving sizeable reductions in rebuffering events and video stall and start time measurements.

Video continues to be a huge topic and the main source of data traffic bogging down networks. This is not just about OTT video: carriers like AT&T are publicly saying they intend to move video distribution purely onto mobile networks. There is also strong growth in streamed live video, with more professional content providers starting to test it on mobile networks.

We also have a solution to enable a low-latency IoT application within 4G networks today, using our edge platform to cut current latencies in half.

Smart platforms sited at the mobile edge can provide more reliable communications, giving the ability to track or enhance reliability for more mission-critical or live applications by being able to distinguish between traffic categories.

On top of that, we can add predictability and reliability by distinguishing between traffic sessions and ensuring each has the network capabilities it requires. This means we can drive deterministic KPIs into the network so that users and services receive the performance they require in a predictable manner.
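The idea of distinguishing traffic categories and driving deterministic KPIs can be sketched as a simple classify-and-target mapping. The categories, service labels and latency targets below are illustrative assumptions, not Vasona's implementation:

```python
# Sketch of per-category treatment at an edge platform: sessions are
# classified into traffic categories, and each category maps to a latency
# target so mission-critical flows get deterministic handling.
# All category names and targets are hypothetical.

LATENCY_TARGETS_MS = {
    "mission_critical": 10,
    "live_video": 50,
    "streaming_video": 200,
    "best_effort": 500,
}

def classify(session):
    """Toy classifier based on a session's declared service type."""
    if session["service"] in ("telemetry", "emergency"):
        return "mission_critical"
    if session["service"] == "live_video":
        return "live_video"
    if session["service"] == "vod":
        return "streaming_video"
    return "best_effort"

def latency_target_ms(session):
    """Deterministic KPI target assigned to a classified session."""
    return LATENCY_TARGETS_MS[classify(session)]

print(latency_target_ms({"service": "emergency"}))  # mission-critical target
print(latency_target_ms({"service": "web"}))        # best-effort target
```

In a real platform the classifier would work from traffic inspection or policy rather than a declared label, but the mapping from category to guaranteed treatment is the core of the approach described above.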

The time for the edge app platform has arrived, providing improvements to 3G and 4G customer experiences and enabling new services, all while forming key building blocks as operators move towards 5G.

#4 changes the MEC investment case

MEC should be viewed through a different lens. It is not a burdensome hardware investment serving only a narrow subset of as-yet unrealised 5G use cases such as virtual reality and IoT; it is much more valuable than that. Deployed in the right way to drive improved customer experience KPIs, MEC investments can extend existing RAN investments and delay further capex required to meet traffic increases.

To see more from Vasona Networks, including a solution demo, visit




Our last feature laid out the landscape for the radio networks of IoT: from cellular and non-cellular approaches to the requirement to operate with both modes in the network. This feature, Part II of our look at the network of the Internet of Things, examines some commercial IoT network deployment strategies, looking in particular at Orange's, as it provides a useful example of an operator following a multimode approach.

Arnaud Vamparys, who heads up radio network strategy at Orange Group level, says that Orange's approach is to deploy LoRa for long range and, in Europe, to deploy LTE-M for use cases requiring secure data at up to 1Mbps. "From our analysis right now we are for Europe really in favour of LTE-M, because it is the one that in addition to 2G M2M and LoRa covers the larger number of use cases," he says. He sees NB-IoT as a much later starter, and although other operators hail NB-IoT's greater manageability and ability to withstand interference, he says that Orange's experience with LoRa wide area radio leads it to think it can easily mitigate any performance worries due to interference within unlicensed bands. "NB-IoT use cases are closer to what we have commercially with LoRa. We are waiting to see a clear advantage perhaps on some indoor use cases." Crucially, Orange needs to see "availability of some real sensors in that technology", he says, contrasting the current ecosystem with the ready availability of LoRa sensors. In areas where there is no LTE coverage he does see another R13 cellular IoT standard — EC-GSM (Extended Coverage for GSM) being



deployed. For Orange this mainly means its African markets.

In the USA, Sigfox has signed over two dozen U.S. channel partners to enable IoT connectivity in key verticals including logistics, asset management and agriculture, and claims now to have coverage in 100 cities — an end-of-2016 target it set itself in mid-2015. In France, SFR/Altice signed a deal to resell Sigfox's connectivity in the country. That means that three of the four mainstream French operators have now committed to using non-cellular technology to connect IoT devices, Orange and Bouygues having already committed to services using LoRa-based networks. SFR will use Sigfox to tackle markets that are not a good match for LTE or 2/3G-based M2M — usually those applications requiring only low-power, low-bandwidth access. SFR's then CEO Michel Combes said that the reason for choosing Sigfox was that it offered "the best existing brick with an offer that is already deployed". Note that he said Sigfox is the best existing fit, not the perfect fit. Importantly, though, he said that Sigfox is already deployed. Why Sigfox over LoRa? "For its seamless continuity of service," Combes said. Sigfox claims already to have 92% population coverage in France, and its message has always been that it sees itself not as a competitor to cellular IoT but as a complement. Moreover, the situation in France


contrasts with the markets in Germany and the UK, where the major operators have not made matching selections. Sigfox's entry into the UK market has been via a partnership with network infrastructure owner Arqiva. Arqiva has been responsible for signing partners, none of which so far is a licensed spectrum holder. Vodafone, a key operator in both the UK and Germany, was very much a leading player in developing NB-IoT technology, both in partnership with its supplier Huawei and as a founder member and chair of the NB-IoT Forum, which coalesced to push its specifications into the 3GPP. Vodafone was a firm backer of cellular-based IoT because it sees NB-IoT and its 2G equivalent EC-GSM as offering more control and better performance. Although that put it behind operators availing themselves of LoRa and Sigfox, it is not too far back. Indeed, the operator turned on its first commercial NB-IoT network in Spain in January 2017. The operator said, "Valencia and Madrid are the first cities in Spain with NB-IoT. As part of a nationwide roll-out, we will extend coverage to Barcelona, Bilbao, Málaga and Seville by the end of March 2017, when there will be over 1,000 mobile sites supporting NB-IoT. Every NB-IoT mobile site can connect more than 100,000 devices. "Our Spanish NB-IoT network was deployed within existing 800 MHz spectrum. That is the optimal use of Vodafone Spain's 4G spectrum and will maximise the signal strength and coverage." "To launch NB-IoT we just needed to update the software in existing base stations. In Valencia that took

just a few hours, which was really fast

when you consider that it can take up to a year to add a new 3G or 4G mobile site to a mobile network." That drive to NB-IoT may be affecting the providers of products in unlicensed technology. Mike Mulica, CEO of Actility — the provider of Orange's LoRa network — says the company will become less associated with LoRa technology and more cross-platform in its approach. Mulica said that the company has become known as a key player in LoRa technology only because LoRa has been the first LPWA technology



to market. In time the company will move away from being seen as solely a LoRa shop. "We provide management systems for connected objects, and our view was of a software stack capable of managing billions of things that might, in some instances, only involve one transaction a day or an hour. That's very different from existing [cellular network] management systems. "LoRa happens to be something we discovered early with Semtech and IBM, and pursued as a connectivity path. But we are also supporting LTE-M as that emerges, and so the story is we are all about LPWA. LoRa is first in the market with technology that connects things in a super-efficient way and we think LTE-M will complement that well. In addition to that is a very easy on-ramping for app communities that support Zigbee and Bluetooth — so think of us as radio agnostic. One way to get inserted into the market is via the acceptance of LoRa and then [drive] the introduction into additional radio protocols. "We obviously don't build radio but have implemented an MVNE module that is specified in 3GPP, adopting the 3GPP architecture for management, that allows us to bring in LTE-M sensors, connect with them and manage them from our platform in the same way we manage LoRa sensors today."

Mulica said that fits in with customers' strategies of having a consolidated, horizontal management layer that interfaces with different radio technologies. "Most of our customers have a similar strategy. It's now clear that different use cases will be supported by different technologies; for example, LoRa has a 5-10x benefit over LTE-M in terms of battery life under certain transaction conditions. If you have to put in a bigger battery it costs 2-3x more — and you are talking about multiplying that cost by millions of sensors. So LoRa is orders of magnitude cheaper. But as you go up to higher bandwidths, especially for video, then LTE-M comes into play. "Most customers will do LoRa and LTE-M. That requires a highly efficient platform operating in an automated way, given how different this is from managing and on-boarding an iPhone. Sensors are super low cost and with such thin margins you require incredible efficiency. In that LPWA sweet spot, where you may have a billion unpredictable transactions, current management systems will not support that kind of environment. Our goal is to be there first, gain confidence that our software is well suited for that challenge and then grow into different device and radio environments."

Part I of our two-part look at the network of the Internet of Things looks at the capabilities required in the radio network. You can read it in Issue 16 of TMN Quarterly.
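The fleet-cost arithmetic behind Mulica's battery argument can be made concrete with a rough calculation. The unit prices below are assumptions for illustration only, not Actility figures; only the 2-3x battery-cost multiplier and "millions of sensors" scale come from the quote:

```python
# Illustrative fleet battery-cost comparison for LoRa vs LTE-M sensors,
# based on the rule-of-thumb ratios quoted above.
# The base battery price and fleet size are hypothetical.

def fleet_battery_cost(sensors, base_battery_cost, cost_multiplier):
    """Total battery spend for a fleet of identical sensors."""
    return sensors * base_battery_cost * cost_multiplier

SENSORS = 1_000_000     # "millions of sensors"
BASE_BATTERY = 2.0      # assumed cost of a small LoRa-class battery

lora_cost = fleet_battery_cost(SENSORS, BASE_BATTERY, 1.0)
# LTE-M's shorter battery life forces a bigger battery at 2-3x the cost;
# take 2.5x as a midpoint of the quoted range:
lte_m_cost = fleet_battery_cost(SENSORS, BASE_BATTERY, 2.5)

print(f"LoRa fleet battery cost:  {lora_cost:,.0f}")
print(f"LTE-M fleet battery cost: {lte_m_cost:,.0f}")
print(f"Extra spend for LTE-M:    {lte_m_cost - lora_cost:,.0f}")
```

Even at these modest assumed unit prices, the multiplier adds millions per fleet, which is why the battery-life difference dominates the technology choice for low-bandwidth use cases.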

Will it be easy or challenging to manage different device ecosystems from the same management platform? Join the conversation


#TMNtalkingpoint Contact:











Seven things I know about…


Luc-Yves Pagal Vinette, Global Business Development Manager, Kontron, Communications Business, shares what he knows about how operators can transition to “open services” using NFV infrastructure and software defined networking (SDN).

Consensus and standardisation come from new (Open) sources

The development of SDN and NFV changed the nature of the telco network ecosystem and introduced requirements for cloud infrastructure and services management that previously did not exist in telco standards. This has meant that traditional networking and telco standards bodies have struggled to drive consensus within the market to deliver solutions that can act as the foundation of SDN-NFV. OpenStack, across its different distributions, has instead achieved consensus that it can deliver a cloud foundation giving operators the possibility to leverage the benefits of virtualisation. In a sense we have seen the use of Open Source technology replace the previous standardisation process.


But Open Source must mean really Open

This of course means that Open Source must truly mean Open. There cannot be proprietary tweaks or forks that limit the upgrade path. Any iteration must by definition support continuous integration so that operators have confidence in this Open Source environment. This is one reason why Kontron commits to Canonical Ubuntu: we wanted to bring to market the first turnkey Open Source solutions that are not limited by the implementation choices of the OpenStack distribution provider, so that our customers can rest assured that they can upgrade the platform. Open must also mean that NFV infrastructure (NFVi) can support stackable Virtual Network Functions (VNFs) from any vendor. We foster an environment where any vendor can integrate software on top of the Virtual Machine (VM) and hardware. As we have no interest in being a software vendor and have no position to defend, we are not building a pre-defined ecosystem tied to the platform that cannot grow.




Underlying hardware must be able to support telco use cases

It's a software world, but hardware is important. A telco cloud platform must be able to support the high-volume functions that we find in the telecoms space, such as core networking, with extreme availability and reliability. Therefore our SymKloud platform supports something that we define as an Open Services Platform — a set of multiple service cards on the same device, each maintaining a set of resources separate from one another. They can work together but they can also be separated, increasing scalability to support core functions. By using OpenStack we can tap into the elasticity of virtualisation whilst bringing the hardware scalability that is needed by telecom actors for networking functions.

The orchestration layer must have industry consensus

We are still waiting to see true consensus form around Open Source based MANO initiatives such as ETSI Open Source MANO. Platform developers must be able to support generic NFVi MANO and be able to extend out to more collaborative SDN-NFV MANO efforts. Anything in this area, including carrier-led programmes, needs consensus. In future releases we could work with OpenStack Tacker to support a generic VNF Manager (VNFM) and NFV Orchestrator (NFVO) for our NFVi OpenStack platform. We can also actively collaborate to support other vendors' SDN/NFV orchestration service layers.

vCPE will be an initial driver

A lot of the initial virtualised workloads in the carrier space will be for vCPE, as this is the area where carriers have identified fertile commercial managed services opportunities. We see the potential for a whitebox product in this space that provides a scalable approach to balance the cost of supporting vCPE with support for open VNFs, in order to mitigate the cost of investment for vCPE while maintaining scalability and elasticity in the datacentre. We will have a demo of this at Mobile World Congress. An additional opportunity we see is to enhance our native support for media and content distribution and transcoding, using our OpenStack NFVi to leverage virtualised GPUs, bringing new value for media companies and telcos tapping media services delivery.

Open means open horizontally as well as vertically

One capability that OpenStack provides is the ability to accelerate support for certain services by tapping into different hardware resources. Via the 6WIND Virtual Accelerator (Canonical Charm) we can accelerate the processing of any application by tapping extremely quickly into VMs on different hardware platforms horizontally, using resources from separate hardware within a different environment. It means you can bypass the management section when accessing the VM, achieving close to line-rate throughput while keeping the cloud features that are enabled by the app provider and management layers of OpenStack.

Kontron is evolving to provide a truly open alternative

Kontron now provides a ready-to-buy platform, pre-configured with OpenStack, that is truly Open Source, offering full support for new releases and features of OpenStack. It's not up to us to tell operators or integrators which VNFs they can deploy, or that they can only cascade certain VNFs together. We have made no behind-the-curtain deals and the SymKloud MS29xx Series are totally open platforms, with true hardware-software separation. We remain committed to supporting this open approach. OPNFV member CENGN has our MS2910 SymKloud with Ubuntu package in interoperability testing. Additionally, our relationship with strategic investor Ennoconn gives us the opportunity to complement SymKloud platforms with an extended range of products such as top-of-rack switches and storage servers.

Experience Kontron’s Open Services SymKloud platform with Ubuntu OpenStack:

MWC 2017 | HALL 5, STAND 5H41



Machine learning and AI are the latest buzzwords in the tech world. As mobile operators think about where and how they can automate their networks and service operations, this two-part feature looks at how automation might play out across mobile networks. In Part II of Automation Generation we look in more detail at how the technologies enabling automation are being developed in the network.



Automation in the network can refer to a whole host of processes. At one level it can mean something as seemingly simple as automatically raising an alarm if a certain quality threshold is breached in the network. Going further, the network can take action on that alarm without the need for human intervention. In SON (Self-Organising Networks), processes are automated to tweak radio parameters depending on factors such as interference from neighbouring cells, or to shift coverage to where it is most required. Closed-loop and centralised SON can carry these optimisations out across the network without any need for human intervention; distributed SON algorithms work closer to the network edge, perhaps at the radio baseband unit itself. Companies with strong SON-based products, such as Viavi Solutions and TEOCO, have been harbingers of automation in the network, often tying customer and subscriber data in with network state data taken directly from base station elements. This has created specific use cases within network optimisation, such as tweaking capacities by time of day. A Heavy Reading paper commissioned by TEOCO in 2016 said that the holy grail of closed-loop automation sees the service assurance system not only coalesce and analyse data but also pinpoint network and service issues and then make recommendations. These recommendations are passed on either to the NFV orchestration or, on the legacy physical side of the network, to the service fulfilment systems. "Closed loop automation thereby reduces the mean time to repair for the network. Such closed loop or automated resolution can use performance measurement data, fault data and customer experience measurements to identify problems before they have a significant impact on the service level. These problems can then be fixed


by issuing requirements to the MANO or a traditional EMS, with a minimum of human intervention. This automation should enable the network to be self-healing in the short term, but also ensure a more highly optimised network over the long term." These have been the initial steps for automation within the mobile network, but companies such as these now see the potential to stretch automation further in terms of its use cases. Principally, this is because there is now recognition that mobile operators can only meet their business transformation goals if there is an attendant network transformation. The goal of the business transformation is to change the way services are introduced into the network, up to and including designing a network that lets customers fully provision and dynamically reconfigure their own service parameters. The bigger picture is the lifecycle of a service deployment being automated — from provisioning — the actual definition



of the service in network terms — through to fulfilment and orchestration. As Netrounds' Marcus Friman writes in this issue on page 28, "Operators want to make the whole process of introducing services across a network much faster, shortening the development, launch, verification and test cycle from months to days. They also need to be able to change services and deal with changes in the network in an active manner. This will not only enable them to launch, change and upgrade services faster, but will reduce the risk profile of introducing new services, so that no service becomes 'too big to fail'." In turn this transformation relies on the adoption of more agile and iterative processes, including the so-called DevOps approach. In short, DevOps introduces a far more active way of working, where a given service is introduced into the network without having been through the strict gating procedure of an operator's testing workflow. Instead, active testing carried out while the service is deployed, and then in operation, defines further development of the service. This introduces a big cultural shift, for sure, into MNO operations departments. But it also introduces a host of technical demands, including the requirement to understand the performance of the virtualised

infrastructure, and to test any changes the customer makes to their service. You can only do that with an automated active test methodology that exists either inside or alongside the NFV infrastructure itself. Friman says, "If you take the example of the enterprise VPN that the customer self-provisions, dynamically requiring resources from VNFs across the network, it will not be possible to send a technician with a test box to the customer site to test the performance and outcome of each configuration change. That's the way it's done today and that's fine as it stands because changes are relatively rare — but in the new environment, where changes are likely to be much more frequent and granular, that's not sustainable." Accordingly, companies such as Netrounds and Accedian market active test agents that sit in the network and can be programmed to send automated test flows across it, so that analytics engines can be fed realtime data from a live network. Companies such as Rohde & Schwarz, Polystar, Astellia, NetScout and Procera insert probes into the network to carry out network-layer (L2/3) and application-layer (L4-7) analysis, and these companies too have become increasingly aligned to the virtualisation of network elements, in many cases pre-empting adoption of NFV infrastructure in the network


by offering probes in software that can be deployed as virtualised instances. All of this data can be used as source data for greater automation of network processes and service management. Netrounds, for instance, will be integrated with NEC and Netcracker’s orchestration and network monitoring solutions to allow CSPs to deploy an active solution that covers the entire service lifecycle in an automated, DevOps-based and operationally efficient way.
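The active-test pattern described above — programmable agents injecting synthetic traffic and flagging SLA breaches for an assurance system to act on — can be sketched in a few lines. This is a minimal illustration only; the agent, endpoint names and threshold are hypothetical and do not represent Netrounds' or Accedian's actual APIs:

```python
import random

# Minimal sketch of an active test agent: it sweeps a set of targets with
# synthetic latency probes, averages the results, and flags any breach of
# a service-level threshold. The measurement itself is simulated here.

LATENCY_SLA_MS = 50.0

def measure_latency_ms(target):
    """Simulated latency probe towards a target endpoint (hypothetical)."""
    return random.uniform(5.0, 80.0)

def run_test_cycle(targets, samples_per_target=10):
    """One automated test sweep; returns per-target averages and breaches."""
    results, breaches = {}, []
    for target in targets:
        latencies = [measure_latency_ms(target) for _ in range(samples_per_target)]
        avg = sum(latencies) / len(latencies)
        results[target] = avg
        if avg > LATENCY_SLA_MS:
            breaches.append(target)  # would be raised to the assurance system
    return results, breaches

results, breaches = run_test_cycle(["vnf-edge-1", "vnf-core-1"])
for target, avg in results.items():
    print(f"{target}: avg latency {avg:.1f} ms")
```

A production agent would of course send real probe packets on a schedule and stream measurements continuously, but the loop of test, aggregate, compare-to-SLA, escalate is the closed-loop core of the approach.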

Yet this sort of approach, necessary for automated operations, will create a large analytical burden. Heavy Reading again: “A key requirement is that the service assurance systems are kept up to date with the state of the network in near realtime. This represents a significant increase in the amount of data the service assurance system has to store and analyse.” So as well as discovery, automation relies on the adoption of far greater

“In the future, these cuttingedge technologies will give customers completely new possibilities... the operator, on the other hand, will receive tools that allow realtime adaptation to meet the customer needs...”

analytical capabilities in the network, both to understand how the network itself is working and also how, say, an IoT application designed for a healthcare environment is receiving its desired service levels. That is the driver for the bigger picture of automation, which sees machine learning take over from the manual crunching of such data, uncovering patterns and providing recommendations that can optimise a network or prevent issues arising in the future. That has seen the rise of the big data player within the mobile network, including start-ups such as Cardinality. Here's what Cardinality reports one of its customers, a Tier 1 mobile network operator, is achieving: "This customer, in one of the largest big data projects in Europe, is processing over one million events per minute on both the control and user plane, 4-5 terabytes of compressed data (32 to 40 terabytes uncompressed) and 35 to 40 billion rows of data per day. Use cases include subscriber-driven coverage analysis, including location and tariff correlation, and per-device analytics to evaluate performance and troubleshoot issues."
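At a million events per minute, per-device analytics implies continuous aggregation over the event stream rather than ad hoc queries. A toy sketch of the kind of per-device rollup involved — the event fields here are invented for illustration, not Cardinality's schema:

```python
from collections import defaultdict

# Toy per-device aggregation over a stream of network events: the kind of
# rollup a big-data pipeline runs continuously at scale.
# Event fields (device_id, bytes, status) are hypothetical.

def aggregate_by_device(events):
    """Roll up event count, traffic volume and failures per device ID."""
    stats = defaultdict(lambda: {"events": 0, "bytes": 0, "failures": 0})
    for ev in events:
        s = stats[ev["device_id"]]
        s["events"] += 1
        s["bytes"] += ev["bytes"]
        s["failures"] += 1 if ev["status"] == "fail" else 0
    return dict(stats)

sample = [
    {"device_id": "d1", "bytes": 1200, "status": "ok"},
    {"device_id": "d1", "bytes": 800, "status": "fail"},
    {"device_id": "d2", "bytes": 500, "status": "ok"},
]
stats = aggregate_by_device(sample)
print(stats)
```

A real pipeline would shard this across a streaming framework and window it by time, but per-key aggregation of this shape underpins the coverage and troubleshooting use cases quoted above.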

THE AUTOMATED OPERATOR

A meeting of operators in mid-2016, held by Nokia to look at issues surrounding automation, identified several key concerns. Several said that they would like common automation tools that could be used across the network, rather than being applied only to specific sectors of it. That might require common data modelling to provide a standardised approach to automation. It also found that network engineers need to be upskilled for new roles in a Service Operations Centre, rather than a mere Network




Operations Centre. They agreed that cultural and organisational issues are as important as technical milestones. So far, industry efforts to drive automation in a codified way have been most publicly led by AT&T, which has, from a carrier point of view, taken the lead in outlining a platform for network and service automation with its Enhanced Control, Orchestration, Management and Policy (ECOMP) platform. AT&T has been in discussions with a number of operators about joining the ECOMP ecosystem. Orange and Bell Canada are testing ECOMP in their networks — most recently Orange announced it was doing so in Poland in association with Amdocs, which has been contributing to ECOMP since 2015. AT&T has moved ECOMP onto a different footing by putting its software into an open source framework for further development. AT&T's Chris Rice, SVP of AT&T Labs, Domain 2.0 Architecture and Design, says, "ECOMP provides

the necessary automation platform that enabled us to achieve aggressive virtualisation goals across enterprise, infrastructure, mobility and consumer use cases. "We achieved the necessary performance, capital spending reductions and efficiency we expected as we moved to a software-defined network. With more than two years of production experience, it is ready for external real-world applications. "That success drove the creation of this Linux Foundation project and community, leading to the availability of an open source platform derived from ECOMP." Rice says, "While ECOMP provides complete automation of the entire lifecycle of a VNF within an SDN environment, it's simpler to think of ECOMP as the operating system for developers to build network apps around. "It offers service providers the ability to design and operate 'software-centric' networks running on virtual machines rather than on traditional, physical network architectures. Such networks provide improved scalability and extensive automation, and can adapt faster to customer needs with the quick addition and removal of features. By deploying ECOMP, service providers will gain more control of their network services, drive down operational costs and allow developers to more easily innovate with new network services. Ultimately, consumers benefit because the network will be able to better adapt and scale to meet their needs, as well as predict how to make their connected experiences seamless."

Piotr Muszyński, Orange Polska Vice-President in charge of Strategy and Transformation, says, "Virtualisation of the network is an inevitable process. By testing ECOMP at Orange Polska, we are preparing ourselves to become a software-driven company. In the future, these cutting-edge technologies will give customers completely new possibilities, such as the ability to self-activate and deactivate services, or to enjoy flexible rating based on the time they consume the service. The operator, on the other hand, will receive tools that allow real-time adaptation to meet customer needs." Orange Polska is conducting numerous tests with Amdocs to support the preparation and creation of a set of virtual Customer Premises Equipment (vCPE) services for residential customers. A significant part of the vCPE features will be moved to a cloud environment and managed by the ECOMP platform. The tests will validate the capabilities and potential benefits of an ECOMP deployment within the Orange network.

Do approaches like ECOMP risk fragmenting standardised approaches such as Open Source MANO (OSM)? Join the conversation

#TMNtalkingpoint



Seven things I know about…

AUTOMATED NETWORK TESTING

Operators want increased agility

Netrounds’ Marcus Friman says that network automation designed to support new business and operational models necessitates the deployment of automated active testing.

Operators know they move too slowly to introduce service innovation and that this threatens their future viability. They want to make the whole process of introducing new service offerings across a network much faster, shortening the development, launch, verification and test cycle from months to days. They also need to be able to update and alter services on the fly and deal with changes in the network in an active manner. This will not only enable them to launch, change and upgrade services faster, but will reduce the risk profile of introducing new services, so that no service becomes “too big to fail”. Increased agility will be critical for operators in maintaining their competitive edge in this evolving marketplace. This is the end goal.

2 What is DevOps?
»» DevOps is a way of continually updating and managing services in a live network.
»» It entails a constant deploy, test, reiterate method of working.
»» It enables service providers to be much more flexible and faster to market with services that work correctly the first time.
»» It requires automated, integrated testing to assure and handle interactive changes in real time.


Programmable networks introduce new complexity

We are seeing operators move to a network environment that relies on Software Defined Networks and Network Functions Virtualisation for the flexible deployment of network resources and services. This new network structure introduces increased levels of complexity, as it brings with it shared, virtualised resources and potentially a number of new niche technology vendors, yet without the vendor lock-in experienced by operators today. That increased complexity brings a concurrent requirement: to be able to test the outcome of changes in the network as they happen, and in an automated manner. How else can you make sure that the service is working for the customer as expected?



An innovative culture = DevOps = Test Automation

Being truly agile is more aligned with the way Internet players have operated than with traditional telco business models. It requires a combination of new technology in the network with a new DevOps-like approach that allows for continuous integration. Have you ever heard about DevOps and agility without automated testing being an integral part of it?

Test Automation is about more than just the lab and infrastructure components

For an Over-The-Top (OTT) player, the key resources are the servers in the datacenter, and potentially the Content Distribution Network. In such an OTT deployment, the network is not part of the service in the same way as for a telecom service. For a telco it's different — here the resourcing of the network is a key part of the service. In order to guarantee a high-quality service to the end customer, testing the results of subtle changes in the network necessitates having an end-to-end view of the service as it is experienced by the user. The industry has been very focused on the infrastructure components that will support this new innovation culture, but it is also important to get good real-time data from an end-to-end perspective. As well as being used for active test and assurance, this data can be fed into automated control systems, like Performance Management or Service Quality Management systems, enhancing the self-learning capabilities of the network and improving customer experience.

Introducing Orchestrated Assurance

Orchestrated Assurance includes activation testing, active monitoring and troubleshooting as part of the orchestration and fulfilment loop. When modelling how to configure a network service, you must also include in the model how the service should be tested and monitored. Adding these capabilities to the service model allows the orchestrator to accurately control and assure that customers are experiencing the service quality they expect, in a completely automated manner. This would be essentially impossible to do in a hardware-defined environment with manual tools and human intervention. Take the example of an enterprise VPN that the customer self-provisions, requiring resources in a dynamic fashion from VNFs across the network. It will not be sustainable to send a technician with a test box to the customer site to test the performance and outcome of each configuration change.
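As a purely illustrative sketch of the idea, a service model might carry its own assurance section that the orchestrator reads and acts on after activation. All field names and thresholds here are invented for illustration; they are not drawn from any real orchestrator or from Netrounds' products.

```python
# Hypothetical service model in which assurance is part of the model itself:
# the orchestrator reads the "assurance" section and schedules the tests
# automatically after activation. Field names are illustrative only.
service_model = {
    "name": "enterprise-vpn-1",
    "endpoints": ["site-A", "site-B"],
    "bandwidth_mbps": 100,
    "assurance": {
        "activation_test": {"type": "throughput", "min_mbps": 95},
        "monitoring": {"type": "latency", "max_ms": 30, "interval_s": 60},
    },
}

def activation_passed(model, measured_mbps):
    """Compare a measured activation-test result against the model's target."""
    return measured_mbps >= model["assurance"]["activation_test"]["min_mbps"]

print(activation_passed(service_model, 98.0))  # True: service meets its model
print(activation_passed(service_model, 80.0))  # False: flag for remediation
```

The point of the pattern is that the pass/fail decision lives in the model, not in a technician's head, so every self-provisioned change can be verified automatically.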

The active testing component

Netrounds enables wider network automation and intelligence by providing an active testing component in software that can act as a virtual user in the network, testing services end-to-end from the user perspective. Test Agents can be distributed throughout the network and the operator can automatically trigger tests via a centralised test controller API (called Netrounds Control Center), making it possible to test different aspects of a service in remote locations. The Test Agents can send test traffic through service chains to give an end-to-end view of a service: in the NFV world there will not only be network services but managed services with different value-added functionality, such as firewalls, included. When we start talking 5G and network slicing, those slices will all have different requirements that need enhanced visibility and management as well. The Control Center coordinates the Test Agents and collects aggregated test results and KPIs. It also acts as the central management platform where service-specific and automated test sequences are designed. When orchestrators make a configuration change in the network, tests will automatically be triggered via interfaces such as NETCONF and YANG.
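The controller/agent pattern described above can be sketched in miniature. To be clear, the class and method names below are hypothetical, the KPIs are canned, and this is not Netrounds' actual API — it only illustrates the shape of the loop: orchestrator change, triggered tests, KPI evaluation.

```python
# Illustrative sketch: a central test controller that triggers active tests
# after each orchestration change. Names are hypothetical, not a real API.

class TestAgent:
    """A virtual user placed at a point in the network."""
    def __init__(self, location):
        self.location = location

    def measure(self, target):
        # A real agent would send test traffic through the service chain;
        # here we return canned KPIs for illustration.
        return {"latency_ms": 12.0, "loss_pct": 0.0, "target": target}

class TestController:
    """Coordinates agents and evaluates KPIs against thresholds."""
    def __init__(self, max_latency_ms=50.0, max_loss_pct=0.5):
        self.agents = []
        self.max_latency_ms = max_latency_ms
        self.max_loss_pct = max_loss_pct

    def register(self, agent):
        self.agents.append(agent)

    def on_config_change(self, service):
        """Called after every orchestrator change: run all agents and
        pass only if every KPI is within bounds."""
        results = [a.measure(service) for a in self.agents]
        ok = all(r["latency_ms"] <= self.max_latency_ms and
                 r["loss_pct"] <= self.max_loss_pct for r in results)
        return ok, results

controller = TestController()
controller.register(TestAgent("customer-site-A"))
controller.register(TestAgent("pop-B"))
ok, results = controller.on_config_change("enterprise-vpn-1")
print(ok, len(results))
```

In a real deployment the `measure` call would be replaced by actual test traffic, and `on_config_change` would be wired to the orchestrator via NETCONF/YANG-driven interfaces.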

The new model, enabled by active, automated test

It's still early days, but we already know that the new operating model carriers hope to benefit from will be far more complex, with new network capabilities that rely on increased programmability and automation in the network. We know it will be critical to support that dynamic network with an associated, automated test regime that is itself programmable via open APIs and can be managed from within the operator's existing orchestration environment. For every change in the network, automated or manual, there should be an equal and automated test to ensure that updated or altered services are delivered right the first time. This will be a key underpinning of the move to DevOps and more agile service introduction.

Netrounds is a programmable, active test and service assurance platform provider for physical, hybrid and virtual networks. Netrounds’ automation capabilities enable communication service providers to reduce manual efforts required for network testing and service assurance. TMNQUARTERLY 29



Mobile network technology is dominated by its major equipment manufacturers, but there is still room for innovation to come from other areas — some of it academic, some backed by operators themselves. In the second part of a two-part feature, The Mobile Network gives you some names from the network industry that may be about to become rather better known.

KUMU NETWORKS
Kumu Networks has attracted a lot of attention from the wireless industry since it first spoke of having cracked full duplex radio communications. It has gathered attention from operators and from other vendors, and in early 2016 the company secured a $25 million funding round from Cisco and Deutsche Telekom, amongst others. Full-duplex wireless is a system that can transmit and receive on the same frequency at the same time, doubling spectral efficiency. Kumu's solution is based on software-controlled self-interference cancellation technology that eliminates the need for FDD or TDD duplexing. It has potential as one of the enabling radio technologies within 5G standards, as the technology can be used to improve spectrum reuse and remove the limitations of hardware-based filters in existing systems. Full duplex has rather dropped off the list of enabling technologies for 5G, but vendors confirm that it is still considered part of the toolkit, and TMN believes Kumu could be about to unveil news of commercialisation of full duplex with Verizon, Telecom Italia, SKT, Deutsche Telekom and Telstra.

KORIIST
Koriist says it overcomes one of the main issues with accessing data in remote areas: connectivity. To do that, it says, it connects disparate communications networks with a software-defined, multi-protocol, data and network router overlay. Its software product, Stitch, is virtualisation software that acts as a transport protocol-to-protocol converter, giving access to information regardless of where the data is produced or what wires or waves are used for transmission. Application pairings such as analogue radio voice and VoIP, legacy electric grid and renewable power, and coaxial security cameras and IP-enabled surveillance equipment can be made to interoperate without having to re-engineer the infrastructure or replace existing hardware or software. Could Koriist be onto something with its vision of connecting disparate information pools, and thereby delivering data from old devices to modern tools?


SAMSARA
Samsara was founded in early 2015 by Sanjit Biswas and John Bicket, who previously founded and led wireless networking pioneer Meraki, which was formed out of their research at MIT. (Meraki was acquired by Cisco Systems in 2012 for $1.2 billion.) The company integrates cellular and wireless gateways with sensors for industrial, vehicular and other applications. The aim is to combine ease of deployment and use with a cloud-based platform that mirrors the capabilities of Meraki. For example, its Industrial IoT Gateway includes pre-provisioned cellular data, connecting automatically to Samsara's cloud-hosted platform. Samsara streams sensor data out of the box with no networks to deploy, VPNs to configure, or software to provision, providing web-based front ends for analytics and management. Adoption of IoT services within industry verticals is a hot area, and Samsara is bringing a Silicon Valley approach to the market that mobile network operators would do well to take notice of.

ALTIOSTAR
In existence for a few years now, Altiostar has attracted a lot of attention for its vision of a vRAN (Virtual RAN). Its latest reference came in Italy, where Telecom Italia Mobile tested the technology with a server at its innovation laboratories in Turin and antennas in the field 60km away, connected over Ethernet fronthaul. The architecture makes it possible to provide LTE-A functionalities such as Carrier Aggregation and inter-site CoMP, using a centralised and virtualised infrastructure. This type of architecture reduces the size and power consumption of the radio equipment, which is located close to the antennas and connected over Ethernet to signal processing units. In short, baseband processing can be centralised in a remote location and run on virtual servers rather than dedicated boxes.

CORE NETWORK DYNAMICS
Core Network Dynamics was founded as an independent company in 2013, having started life as a project in 2008 within Fraunhofer FOKUS. It provides an eNodeB and core network-in-a-box type solution known as OpenEPC, which was originally designed for use within test labs and research departments. Now the company is looking to commercialise the capability to meet core network demands within space- and power-constrained environments. One area it wants to exploit is cloud edge deployments made in support of IoT connectivity, where centralised core solutions cannot provide the required scale or low latencies. CND's OpenEPC was also used in a demonstration by the 5G-PPP project 5G-Crosshaul, showing how a long-range mmWave mesh was able to multiplex packet-based fronthaul and backhaul traffic, with its eNodeB software used to implement a new fronthaul protocol optimised for 5G.



BAICELLS
Baicells produces fixed wireless and mobile small cells. In 2016 it released what it said was the first LAA small cell on the market. The company built the product around Intel's Transcede SoC, as it did with its LTE-U product. Previous products from the company, such as the Elf, a tiny small cell, have used Qualcomm's FS90xx chips. The company claims to already have deployments of its LTE small cells in the pipeline with China Mobile. Baicells is also a member of the MulteFire Alliance, so is clearly committed to enabling LTE in unlicensed spectrum. The company is also a member of the Facebook-founded Telecom Infra Project.

NURAN WIRELESS
NuRAN Wireless specialises in providing mobile and broadband wireless solutions where low lifecycle costs are paramount. Its GSM, LTE and White Space radio access network (RAN) and backhaul products are designed to dramatically reduce the total cost of ownership, creating new business models for operators and service providers to deploy networks. Often this means its GSM Litecell and Superfemto RAN are deployed in remote or hard-to-access areas. The company provides a variety of specialised systems for indoor coverage, rural connectivity in emerging markets, and connectivity to offshore platforms and ships. These can be deployed as private mobile networks or custom solutions for specific markets such as the Internet of Things (IoT) and emergency communications.


AFFIRMED NETWORKS
While it is not exactly a start-up, Affirmed is rising in prominence as a company enabling NFV transformations in the service provider environment. Affirmed's virtualised Mobile Core provides a complete, consolidated Evolved Packet Core (EPC) solution, and the company now has more than 50 customers including AT&T, Vodafone and Telus. Affirmed sees the IoT as a key driver for the implementation of virtualised core network solutions, and says it is planning to disclose products, partnerships and strategies for helping operators provide enterprises with compelling new IoT services during 2017's Mobile World Congress. One customer deployment saw the vEPC used by Vodafone to deliver M2M communications and Connected Car services over the operator's network. T-2 in Slovenia is using the company's virtualised Mobile Content Cloud solution to support delivery of quad-play services over its LTE network.



M87
If other companies on this page target virtualised edge deployments, M87 goes even further by using smartphone processing power to flip network intelligence from the centre of the network to the mobile device itself. M87 software gives the devices themselves control over connectivity options, continuously seeking to improve connectivity and throughput speeds by monitoring the network environment and intelligently selecting the fastest path — either directly to the cellular network or via a device-to-device network. Although it sounds like a loss of control for mobile operators, M87 says one benefit can be minimising cell edge interference. M87's technology was invented by Vidur Bhargava and Dr. Sriram Vishwanath in the ECE department at the University of Texas at Austin. The technology creates dynamic device-to-device mesh networks, using software to bridge and route between different radio technologies to allow devices like smartphones to connect with other devices nearby — whether or not they're connected to the Internet via Wi-Fi or cellular. This in turn allows devices to share information, like location or text messages, with one another, and even help each other find the easiest path to the best network connection. The company claims its Edge Network expands network capacity for wireless carriers without adding physical infrastructure. Device-to-device mesh technology has been proposed time and again, from companies such as Qualcomm to Veniam on this very page. Sooner or later, someone is going to crack it. Could it be M87?

VENIAM
Veniam equips buses, taxis and other vehicles with on-board units with multi-network capabilities that can provide vehicle-to-vehicle as well as vehicle-to-infrastructure communication using Wi-Fi and cellular. In Singapore, StarHub and the National University of Singapore (NUS), in collaboration with ComfortDelGro Bus and Veniam, have deployed Singapore's first mesh network of vehicles on a university campus. Unlike traditional wireless network infrastructure, the mesh platform leverages vehicles as mobile Wi-Fi access points that connect to one another and to fixed points in buildings throughout the campus. The idea is that authorities can build networks of connected vehicles and offer fully managed wireless coverage services in a city via Wi-Fi and 4G/LTE. Worth keeping an eye on.

Part I of this article, published in TMN Quarterly issue 16, featured: Zeetta Networks, Teragence, PureLiFi, Pivotal Communications, KodaCloud and Ambeent Wireless.



A VISION FOR OPTICAL FRONTHAUL FOR 5G Yvon Rouault, Senior Product Manager, EXFO, tells TMN why operators need to get smarter about C-RAN fronthaul testing as they prepare for 5G.

YVON, WHY IS THE C-RAN ARCHITECTURE OF IMPORTANCE TO 5G?
Fundamentally, because Cloud-RAN will enable the delivery of every use case that 5G promises: from massive broadband to real-time IoT, from very dense urban to rural deployments, to ultra-reliability and availability. Delivering all these requirements over one physical network will require network slicing, where each virtual slice has its own attributes served by the same hardware. That will only work if a slice can be delivered end-to-end across the network, including the RAN, and Cloud-RAN will be a key enabler for that. The second reason is that 5G will require close and granular control of the radio layer, and Cloud-RAN enables close co-ordination and control of the remote radio heads. Already in LTE-A PRO, if you take a feature like CoMP in 3GPP Release 13, there must be a latency of less than 5ms between BBUs, via the MME, to achieve the tight co-ordination required. That's much easier if the BBUs are co-located, or even sharing the same physical infrastructure as virtualised instances. A third reason is cost savings. There are far more RAN nodes in a network than EPC or IMS elements. Being able to virtualise and consolidate RAN centres will give operators far greater economies than vEPC and vIMS transformations.


SO WHAT ARE THE CHALLENGING TECHNICAL REQUIREMENTS THAT CENTRALISED-RAN (C-RAN) AND CLOUD-RAN CREATE?
The first is the transport speeds that CPRI requires on the interface between the Remote Radio Head (RRH) and the BBU. 3GPP specifies a 3ms round trip time between the handset and the BBU. Where the BBU is sited at the base of a tower, that is not so challenging, but if the BBUs are centralised many kilometres from the RRH, it is far more constraining. In general, CPRI requires a rate around 16 times that of the backhaul, so a 150 Mbps LTE service requires roughly 2.4 Gbps of CPRI capacity. As LTE is promising 1 Gbps throughput, that means you require CPRI rates beyond 10 Gbps. 5G will take that higher still. Although you may achieve some compression to mitigate that requirement, our optical engineers know that beyond a 10km distance and above 10 Gbps throughput, optical fibres may be subject to degradations known as Chromatic Dispersion and Polarisation Mode Dispersion

“Cloud-RAN will enable the delivery of every use case that 5G promises, from massive broadband to real-time IoT.”

— where you get a very sporadic and unpredictable behaviour of the optical signal.
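The ~16x rule of thumb quoted above can be turned into a quick dimensioning check. This is only a back-of-the-envelope sketch that ignores compression; the function name is ours, not EXFO's.

```python
# Rough CPRI dimensioning using the ~16x rule of thumb quoted above.
CPRI_FACTOR = 16  # approximate fronthaul bits per bit of user throughput

def cpri_rate_gbps(user_throughput_mbps):
    """Approximate CPRI line rate (Gbps) needed for a given user throughput."""
    return user_throughput_mbps * CPRI_FACTOR / 1000.0

print(cpri_rate_gbps(150))   # 150 Mbps LTE service -> 2.4 Gbps of CPRI
print(cpri_rate_gbps(1000))  # 1 Gbps LTE-A service -> 16.0 Gbps of CPRI
```

The second figure shows why Gbps-class LTE-A pushes CPRI well past the 10 Gbps threshold where the optical impairments described above start to bite.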

WHAT IS THE IMPACT OF VIRTUALISING THE RAN, INTRODUCING A PHYSICAL-VIRTUAL SPLIT AT LAYER 2/3?
Splitting the protocol functions of the BBU may have the advantage of eventually placing the Layer 1 hardware back at the RRH at the cell site. Layer 2 and 3 MAC/RLC and protocol software for interfaces such as S1 and X2 would run back at the Cloud-RAN site on NFV infrastructure. That means you can run packet-based fronthaul over Ethernet. This greatly reduces the bit rate requirement across the fronthaul because you are no longer digitising the RF signal before its transmission over the BBU-RRH fronthaul link, which is what CPRI does. This sort of formulation is being studied now by the IEEE (cf. IEEE P1914.1). As well, the EU 5G-PPP project known as XHaul is also evaluating several approaches.

WHAT OPTICAL TECHNOLOGIES ARE AVAILABLE TO PROVIDE FRONTHAUL TRANSPORT? The biggest debate right now is on whether to deploy passive or active optical transport. Active optical means you retain the power of the signal but you introduce latencies because of the amplifiers you require along the path


Scan here to read a new white paper from EXFO: “The path to 5G requires a strong optical network: From C-RAN to Cloud-RAN”

to do that. Passive optical has much lower latencies but the power of the signal drops off in relation to distance and inserted passive elements along the path between BBU and RRH. Overall there is a trend to be more active than passive as operators would prefer to eliminate the signal power loss issue. But we have seen experiments with Passive Optical Networks for fronthaul, for example BT’s recent work. It really is a question of how much fibre you have available, and the density with which it is deployed.
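A rough optical link budget illustrates the passive trade-off Rouault describes: received power falls with fibre length and with each inserted passive element. The loss figures below are typical textbook values for single-mode fibre, not numbers from EXFO.

```python
# Rough optical link budget for passive fronthaul. Figures are typical
# textbook values: they illustrate the trend, not a specific product.
FIBRE_LOSS_DB_PER_KM = 0.35   # single-mode fibre, around 1310 nm
SPLITTER_LOSS_DB = 3.5        # per 1:2 passive split
CONNECTOR_LOSS_DB = 0.5       # per connector pair

def rx_power_dbm(tx_dbm, km, splitters, connectors):
    """Received power after fibre, splitter and connector losses."""
    return (tx_dbm - km * FIBRE_LOSS_DB_PER_KM
            - splitters * SPLITTER_LOSS_DB
            - connectors * CONNECTOR_LOSS_DB)

# 10 km BBU-to-RRH link, one split, two connector pairs, +3 dBm laser
print(round(rx_power_dbm(3, 10, 1, 2), 1))  # -5.0 dBm at the receiver
```

Each additional split costs another ~3.5 dB, which is exactly why the power budget, rather than latency, becomes the limiting factor on passive designs.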

SO WHAT ARE THE IMPLICATIONS OF ALL THIS IN TERMS OF TESTING AND VERIFICATION OF FRONTHAUL?
The first thing is, operators must understand that optical fronthaul is not plug-and-play. Because radio departments dominate much of the deployment planning, there is an expectation that the optical element can be offloaded to another department within the operator, or to a third-party provider. But operators must be more aware of the impact of optical technologies on their overall network performance. We continue to evangelise this message to operators globally. Secondly, to aid rollout, operators must be able to test RRHs and Cloud or Centralised BBUs independently of each other. This means being able to simulate an RRH to a BBU and vice versa. You must be able to test the RRHs

“Operators must understand that optical fronthaul is not plug-and-play.”

and BBUs independently, and also consider the optical link that may be provided by a third party. This is true whether you are deploying CPRI or Ethernet-based fronthaul. Thirdly, operators need to be able to continually monitor what is going on, to reduce truck rolls and the contracting of costly tower-climbing specialists. This may mean introducing test agents onto live devices on the network that can send test messages across the network to continuously monitor performance. Fourth, operators must have greater control over their process and methodology. We found one operator that had eight layers of subcontractors between an optical installation and the deployment team. That introduces far too many opportunities for error and a lack of accountability. I fear that many currently underestimate the challenges of higher speed, higher rate CPRI, especially those teams from the radio network. For them it is a mind-shift, and so we must see the transport teams integrated into the planning organisations and linked to the deployment organisations.

HOW DOES EXFO HELP THE MARKET MEET THESE TESTING REQUIREMENTS?
Of course we have all the tools operators need for optical and Ethernet testing, both on-site, emulating BBUs and RRHs, and characterising optical performance. We can also provide ongoing monitoring on an active basis. But the big challenge is to use our expertise to evangelise the best testing practices that operators need to adopt, so that the early-stage planning and rollout methodology teams understand what they require for the deployment of the Cloud-RAN. That means they can do the right testing and follow the proven Method of Procedures we have established right at the deployment phase. Operators that are aware of and adopt the correct test methodology will achieve the best results for their LTE-A PRO and 5G Cloud-RAN rollouts.

Meet with EXFO’s experts at Mobile World Congress 2017 | HALL 6, STAND 6K36


1987: Mobile World Congress, eh?




As this year’s event marks the 30th anniversary of the very first conference, TMN takes a look back at the growth of this extraordinary event. The event we now know as Mobile World Congress began 30 years ago, with a tiny conference hosted in London. Called The Pan European Digital Cellular Conference, it attracted 150 attendees and had just a small table-top exhibition. A little context. 1987 was also the year that fifteen European telecommunications operators signed a Memorandum of Understanding (MoU) to work together to develop something new and exciting called GSM technology. GSM was the digital cellular technology that would come to replace the analogue hotchpotch that had existed to date, and it was only in February of that year that European governments had settled on GSM as the common standard. That 15-strong GSM MoU group evolved over time into the GSM Association, which today has 800 members. And the Pan European Digital Cellular Conference mutated into Mobile World Congress, which in 2016 attracted over 100,000 visitors and over 2,000 exhibitors. Back in 1987 it was far from clear that this would be the path that this technology, and this event, would follow.


Indeed it was to be another three years before the Conference convened again, in Rome, where the 1990 event was heavily supported and sponsored by Motorola. The following year the event had moved to Nice — by this time there were 650 attendees, and a whole six exhibitors. And so the event moved around: to Berlin in 1992, where D2’s George Schmidt toured the event handing out GSM badges on which the GSM stood for God Send Mobiles. In 1993 the caravan rolled on to Lisbon… then Athens in 1994, where there were by now 17 exhibitors and nearly 800 attendees... then to Madrid... until in 1996 the first event was held in Cannes. At this point the event was still smallish in scale — with 1,700 folk making it to the Riviera that year. The conference was mainly an exercise in comparing notes. The first speaker would nearly always be consultant Nigel Cawthorn, who would update that year’s subscriber numbers; then there would be a host of “here’s how we did it” type presentations, or announcements of plans for the year ahead. Here are some of the titles of those early conference sessions: “Development of GSM Standards”, “Implementing GSM Phase II enhancements”, “The Radiation risks are they real?”, “The market for GSM value added services”. It was engineering-led, and roaming was a major technical challenge.

But there was growth. By 1996 there were 167 GSM operators worldwide with a combined subscriber base of 50 million. It was also the year of the first prepaid SIM cards, an innovation that rapidly expanded the customer base in several markets. As the market grew, so did the event, and it was really in Cannes that it took hold. By 1997 there were 2,400 attendees, and to be stuck inside the exhibition was to experience an unpleasant environment. The quarters in the basement of the Palais des Festivals were cramped, noisy and low-ceilinged — not ideal for a technology exhibition. Accordingly, the big companies carried out their business in chartered yachts that tied up to the quayside behind the Palais, decks loaded with champagne. They added to this by hiring grand marquees along the beachfront, and booking the ballrooms of the grand Croisette hotels for major parties featuring big-name pop acts. Top-level execs with the biggest expense accounts came to regard the Carlton cocktail bar as their playground. The companies leading the charge were Motorola, Lucent and Nortel from North America; Alcatel, Ericsson, Nokia and Siemens from Europe; and NEC, Sharp, Kyocera and Fujitsu from Japan and Asia. In 2001, to reflect the launch of 3G — the first 3G WCDMA (3GSM) network went live in that year — the event




INCIDENT
In the mid-2000s Siemens decided that one way to solve the accommodation problem of Cannes was to hire a cruise liner. From this a legend, mostly true but part myth, arose. It put the full Siemens livery down the side and put most of its staff on the boat, mooring a few hundred metres off the beach. It seems excessive, but think of the savings on hotels, and on shuttling staff up and down the coast every day to satellite locations. And also… what a hospitality opportunity! It was this thinking that led Siemens to host a group of extremely high-value operator execs for what was intended to be a short evening’s cruise along the beautiful coastline. Instead, the weather turned bad, and it was not safe for the tenders and launches that were used to ferry the execs from ship to shore to make the trip. Siemens had, in effect, taken a huge tranche of the senior levels of the mobile industry captive. Hours and hours, and many missed appointments later, the weather abated and the disgruntled CEOs and the rest were allowed to depart. Siemens did not renew the contract for the cruise liner.

was renamed 3GSM World Congress. Despite the end of the DotCom bubble in 2000, 3GSM WC as an event was largely immune in terms of growth — certainly more so than rival event ITU Telecom, which had reached a high point in 1999 but would never reach the same levels again. By 2007 the event was attracting over 33,000 visitors and was pushing up against the limits of its capacity. Cannes didn’t have its own airport, so everyone had to be bussed, trained or driven in from Nice and Marseilles. There wasn’t enough hotel capacity, so the stories multiplied about how far people were travelling in every day — from Antibes, Nice, Monaco and so on. Yet it was still fun for many; it was a size of event that suited many, and it still accorded the opportunity for buyers (operators) and vendors to get together and do business. It wasn’t the event you see today; it was less diffuse. However, the GSMA thought that for the show to remain relevant it would have to grow to reflect the growth of the industry, which was no longer just about network equipment providers’ latest base station technology. There was also, to be honest, a feeling from the GSMA high-ups that they were not being treated with the respect they felt was due to them by the city authorities. So the GSMA made the call to move the event in 2008. In doing so they immediately nearly doubled attendee

numbers — bringing more than 55,000 visitors to the GSMA’s Mobile World Congress in Barcelona. This year, expect double that again. The GSMA also increased its grip on the event, taking control of sales and logistics from Informa, which had founded and grown the event on a contract basis, and doing something similar with its media operations. It added Government tracks, entertainment and content tracks, App Planet, national pavilions and a start-up side conference, courted the device makers, and even (like the Edinburgh Festival) attracted its own Fringe; and it struggled to encapsulate all this activity in a series of deliberately meaningful/meaningless slogans: The Edge of Innovation, Mobile is Everything, The Next Element. In time it moved on again to the new Fira, where you may very well be reading this article right now. But how many of the 100,000+ now at the event can say they are having as much fun as the few thousand that gathered on the Croisette every year to toast the industry’s remarkable mix of luck, ingenuity and hard work? And how many, if any, of those 100,000 were present, 30 years ago, in a small conference room in London contemplating the birth of something three letters long that has changed our world: GSM.



With thanks to journalist Ian Channing, whose knowledge provided much of the information for this article. Any mistakes or wrong opinions are entirely TMN’s.




Seven things I know about…


With 5G, the industry is targeting the ability to connect 10-100 times more devices with 10-100 times higher data rates. That requires an increase in overall capacity of 1000x or more. There is also the desire to be able to meet an increasingly diverse set of use cases, from mobile devices moving around in dense urban areas to deployments of IoT devices such as sensors and connected cars. Although a number of enabling technologies will be deployed, Massive MIMO will play an important part because of the gains it achieves in spectral efficiency via spatial multiplexing and beamforming. By deploying hundreds of elements that can work together to direct beams to multiple individual users on the same frequency, Massive MIMO antennas have the ability to achieve an order of magnitude increase in capacity and in the number of connected devices. For throughput and capacity growth, and for the device density that IoT brings, Massive MIMO is going to be key.

Gregory J. Skidmore, Director of Propagation Software, Remcom, on how a new approach to Massive MIMO channel modelling will be key to the success of 5G network rollouts and applications.

5G is closer than you may think

Although 5G standards are still being set and full ITU adoption of IMT-2020 standards is not scheduled until 2020, the process is further along than that timescale suggests. 3GPP is targeting the first freeze of New Radio standards later this year, and there has been a huge amount of R&D in the area of new radio systems. Much of that R&D has been in the study of the propagation characteristics of radio technologies that will make up the 5G air interface, and the study of Massive MIMO in mmWave bands has formed a major part of that drive.

Massive MIMO will be massive for 5G

But Massive MIMO also introduces more complexity

By using beamforming to direct signals in specific directions rather than radiating over an entire cell, Massive MIMO increases spectral efficiency and reduces interference, but it also introduces increased complexity. Small cell densification entails base stations increasingly being placed within complex urban environments that are rich with multipath, with signals reflecting and diffracting from a number of surfaces. Beamforming requires characterisation of the channel between each antenna at each end of the communications link, requiring substantially more channel state information as the number of antennas grows. And while Massive MIMO can be used effectively at any frequency, 5G brings new millimetre wave bands whose small wavelengths will make it easier for increasing numbers of antenna elements to be sited on one device. Thus, predicting how transmitted and received signals vary across closely-spaced antennas requires much more detailed simulation. To maintain links to users in a small cell, a base station is continually updating information about the channel and calculating the best way to get a signal to each user. MIMO beamforming relies on “pilot signals,” which enable a base station to characterise a channel and use its large antenna arrays to direct a beam to a single user. Pilot signals from neighbouring cells can introduce interference in the network, or pilot contamination, which can greatly reduce the performance of Massive MIMO beamforming.
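The dependence of beamforming gain on good channel state information can be sketched numerically. The toy model below is this sketch’s own (antenna count, pilot SNR and fading model are all assumed values, not figures from the article): it forms a maximum-ratio beam from a noisy pilot-based channel estimate, then shows how a contaminating pilot from a neighbouring cell erodes the gain.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 128           # base station antenna elements (assumed)
snr_pilot = 10.0  # linear pilot SNR (assumed, for illustration)

# True channel to the served user: i.i.d. Rayleigh fading.
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

# Estimation noise, and a contaminating user in a neighbouring cell
# that reuses the same pilot sequence and so leaks into the estimate.
noise = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2 * snr_pilot)
h_interf = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)

def beamforming_gain(h_est, h_true):
    """Array gain when transmitting along the normalised channel estimate."""
    w = h_est / np.linalg.norm(h_est)      # maximum-ratio (conjugate) beam
    return np.abs(np.vdot(w, h_true)) ** 2  # |w^H h|^2

clean = beamforming_gain(h + noise, h)
contaminated = beamforming_gain(h + noise + h_interf, h)

print(f"gain, clean pilot:        {clean:6.1f} (ideal ~{M})")
print(f"gain, contaminated pilot: {contaminated:6.1f}")
```

With clean pilots the gain approaches the number of elements M; with an equal-strength contaminating pilot, roughly half the transmit energy is steered toward the wrong channel — the performance loss the article describes.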


Accurate channel modelling is critical to understanding that complexity Channel modelling enables you to predict or understand the propagation environment between different devices. To understand how to assess the performance of Massive MIMO, operators and vendors will require very accurate modelling of how signals will reach users as they move around an environment, including complex factors such as high levels of multipath, signal and antenna polarisations, and details of the phase across an array of antennas. Predicting what signal is received in each use case is then used to assess the effectiveness of beamforming. You need to simulate beamforming taking in data that defines building materials, terrain, vegetation and other structures so you can simulate those different variables in high detail.

Current modelling techniques will not cut it In the past, operators and system designers may have got by with simplistic statistical or 2D channel models, but for Massive MIMO we are now talking about a very detailed model to determine where and how signals are propagating, and how to deliver an actual beam to an individual device. Accounting for multipath can get very complex. In an urban area you may have line-of-sight to a device with a direct path, plus many other paths that signals will travel to get from one end to the other, each presenting a different copy of the signal. In addition, the phase of each path, and its variation between each element of the MIMO system, is also very important to understanding the performance of a MIMO system. To meet this need, 3D ray tracing simulates all of these effects in order to understand what channels look like for different scenarios.
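The multipath summation described above can be illustrated with a toy model. The rays below are invented for illustration — a ray tracing tool would derive path gains and arrival angles from 3D building, terrain and material data — but the sketch shows how per-path phases combine differently at each closely spaced element of an array.

```python
import numpy as np

# Toy multipath channel: each ray arrives at a uniform linear array with
# its own amplitude and angle; the ray's phase then varies across the
# closely spaced elements. (Amplitudes/angles below are assumptions.)
M = 64                 # array elements at half-wavelength spacing
d_over_lambda = 0.5
paths = [              # (linear amplitude, arrival angle in degrees)
    (1.0,  10.0),      # line-of-sight ray
    (0.4, -35.0),      # reflection off a building facade
    (0.2,  60.0),      # diffracted ray around a corner
]

m = np.arange(M)
h = np.zeros(M, dtype=complex)
for amp, angle_deg in paths:
    theta = np.deg2rad(angle_deg)
    # Per-element phase of one ray: 2*pi * (d/lambda) * m * sin(theta).
    h += amp * np.exp(1j * 2 * np.pi * d_over_lambda * m * np.sin(theta))

# The composite channel fades differently at each element, even though
# every individual ray has constant magnitude across the array.
print("per-element magnitude, first 8 elements:", np.round(np.abs(h[:8]), 2))
```

It is exactly this element-to-element phase structure, multiplied across hundreds of rays and antennas, that makes detailed 3D simulation necessary.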

Simulating actual 3D beamforming requires high detail Massive MIMO predictive channel modelling requires a new level of detail, with a full 3D ray tracing model that captures all the 3D effects that are going on. However, out of the box, most ray tracing models are not up to the task of computing multipath in detail for Massive MIMO, as it requires a simulation for each transmitting antenna in the array, which becomes too computationally intensive in the massive MIMO case. Most solutions try to handle this by making simplifying assumptions; we instead try to keep the details but optimise calculations to minimise their impact on run times.

How Remcom meets that need Remcom was first to market with MIMO channel modelling. In terms of its ability to model massive MIMO, our solution is unique and out ahead of the market. It includes the full accuracy of its 3D ray tracing capabilities, but with advanced optimisations that are ready to run out-of-the-box. With simulation, it is generally accepted that there is a trade-off between speed and accuracy. With our approach to MIMO, you do not need to compromise — you can get accurate results without sacrificing performance. Operators and equipment manufacturers moving their 5G trials forward can now do so with a modelling tool that will quickly and accurately enable predictive modelling of a key 5G technology enabler.

“Massive MIMO predictive channel modelling requires a new level of detail.”

Remcom has been providing its EM simulation and channel modelling expertise for more than 20 years. Its Wireless InSite MIMO software predicts path data between each transmitting and receiving element with precision, and reveals key channel characteristics in a timely manner using optimisations that minimise runtime and memory constraints. Read more: Simulation of Beamforming by Massive MIMO Antennas.



PUBLIC SAFETY OVER LTE Public LTE networks could be used to carry emergency service comms. The equipment vendors are in favour, but the path forward is not always simple.


Most public safety radio communications are carried on dedicated, private digital networks. This gives control of the network, good security and guaranteed access and availability, but it is expensive, often comes with limited data usage, and often creates interoperability issues between agencies. The public safety community has long called for mobile broadband to support its various missions, and the response has been to propose a more ruggedised version of LTE to meet that demand. LTE as a technology is often compared to existing narrowband professional mobile radio (PMR) networks such as TETRA. The oldest TETRA networks were launched almost 20 years ago, so many TETRA services have been developed and optimised over the years to perfectly suit the voice communication needs of first responders. The driver for LTE-based critical communications is naturally the need to meet the growing demand for real-time video and data services. In fact, LTE’s advocates claim that it significantly exceeds TETRA in coverage, capacity and throughput performance, including advanced

prioritisation support, while providing the same level of service availability and security. They claim that with the adoption of LTE mobile broadband technology, public safety networks can benefit from the advantages of fast and reliable broadband data and real-time video services, opening up new communications possibilities for rescue missions and disaster recovery situations. The leading vendor in promoting LTE for emergency services use has been Nokia. Nokia’s Joerg Ambrozy, writing in a company blog in late 2016, said, “Globally, spectrum harmonisation for broadband public protection and disaster relief (BB-PPDR) is evolving naturally around commercially healthy ecosystems such as band 28 (700MHz) devices and infrastructure. However, progress in Europe is challenging as there is a lack of unified Europe-wide spectrum allocated for critical communications. The Electronic Communications Committee (ECC) report 218 includes multiple frequency options, some of them without any product support or even without any standards, and leaves freedom for each CEPT country to make national decisions. The consequence is the risk of a highly fragmented market. Europe would benefit from accelerated LTE adoption for critical communications, as well as true economies of scale, by agreeing on a common spectrum approach based on a commercially available ecosystem. Representatives of different countries and national organisations are getting together under the Broadmap project to prepare common ground for a new ecosystem that consists of applications, services, networks and devices which enable the benefits of broadband for public safety and security communities. This is a vendor-agnostic procurement initiative and aims at assessing end user requirements. There are other industry initiatives as well, like SALUS for security and interoperability in next generation PPDR communication infrastructures,


and Nokia recently established the Mission-Critical Communications Alliance, a forum for vendors, operators, governmental representatives and critical communications users.” The first example of how LTE is gathering momentum in this industry comes from Korea, where the world’s first LTE for railways network was implemented using Nokia technology. This deployment leverages the same dedicated 700MHz spectrum as the national public safety network in the country, showing how LTE helps different critical communications agencies leverage economies of scale with a common technology infrastructure. In Japan the Mobile Radio Center Foundation (MRC), a provider of private mobile radio services, is carrying out extensive field tests with Nokia’s compact LTE solution and push-to-talk (PTT) service over LTE. This opens the door for future nationwide deployment of a mission-critical LTE network and services. Public safety customers require a variety of solutions, ranging from robust broadband infrastructure to rapidly deployable systems for disaster recovery and temporary coverage.

In 2016, Nokia formed the Mission Critical Communications Alliance, a collaboration of operators, public authorities and first response agencies to formalise standards in the use of LTE for public safety. Network enablers include small cells, edge computing and rapidly deployable “network in a box/backpack” solutions. Release 13 of 3GPP specifies mission-critical push-to-talk (MCPTT) — growing the range of available services.

[Diagram labels: local command centre; compact LTE solutions; high availability & resiliency; central command centre]


Do customers notice your network investments? Deliver the best QoE for your money. Mobile customers measure service based on video start times, stall times and more. With Vasona Networks’ edge computing solutions (MEC), provide the great video and responsive services people want. Better RAN capital efficiency. Better Quality of Experience.

Visit us at MWC – Hall 6, Stand 6L41



Another market advancing towards LTE is the UK, where a tender was held to provide a new network to replace the current, TETRA-based emergency radio service operated by the Motorola-owned Airwave. EE won the tender to provide the network, using parts of its existing LTE network to provide coverage. Setting up the new network would cost around £1.2bn to March 2020, which would be a significant saving on the Airwave system. Public organisations currently use 328,000 Airwave devices, but those used by ESN will cost an estimated £500 less per device every year. However, although the contract has been awarded, it remains politically controversial. A UK body that monitors public spending, the National Audit Office (NAO), has warned that the emergency services’ plan to use LTE in life-and-death situations is “at high risk of failing and may not be suitable for covert and anti-terror operations”. The NAO said that the system is “inherently high risk” because it has never been implemented anywhere before, is not being overseen by senior civil servants and is being pushed through too quickly to allow for teething problems. The report said emergency services and other users of Airwave were concerned that ESN will not replicate all of Airwave’s functions. The current communication service, Airwave, has averaged 99.9% availability since April 2010 but costs £1,300 per handheld or vehicle-mounted device per year, and its data capabilities are poor. In addition, a deteriorating commercial relationship with Airwave after 2010 meant that the government did not believe an extension or re-procurement would offer value for money. International comparison work, commissioned by the NAO, has concluded that the proposed ESN solution is the most advanced in the world, with only one other country —

DURING THE 2016 NEW YORK MARATHON, PARALLEL WIRELESS AND PARTNERS INSTALLED A FIRSTNET-COMPATIBLE BAND 14 (700MHZ) DEMONSTRATION NETWORK COVERING CENTRAL PARK. The network consisted of Parallel Wireless’ converged small cell nodes and gateway elements, with Sierra Wireless providing routers for Wi-Fi and wired access, and backhaul via Parallel Wireless’ Band 14 network. The end users (safety officials and race officials) communicated by voice, accessed data, and monitored video on ruggedised Sonim devices. Parallel Wireless said it demonstrated the following:
• Officials could monitor real-time video surveillance cameras from handheld devices rather than just via monitors at tent locations
• Video upload and download were tested in order to measure the effects that live video had on the network. With a maximum video stream upload of 1MB compressed video, the throughput from each security camera was sufficient for the end user using Band 14’s 10MHz upload channel
• The Band 14 network was tested amid an urban scene clogged with RF signals that might cause interference; a network gateway mitigated interference and prioritised video traffic
• IoT for public safety was enabled by connecting not only end user devices but also a network of surveillance cameras in an interoperable Band 14 network

South Korea — seeking to deploy a similar solution. There are significant technical challenges that the programme needs to overcome, including working with EE to increase the coverage and resilience of its 4G network so that it at least matches Airwave, and developing handheld and vehicle-mounted devices, as no devices currently exist that would work on ESN. EE insists it is making the investments required to extend and deepen LTE coverage where necessary.

Are public LTE networks the right place for emergency services communications to reside? Join the conversation

#TMNtalkingpoint Contact:



SLOW GOING IN VoLTE ROAMING Lots of VoLTE, very little VoLTE roaming so far. Why?

According to figures released in late 2016 by the GSA, an industry association funded by equipment vendors to promote their technology, there were at that point around 158 operators in 72 countries investing in VoLTE deployments, studies or trials, with 93 operators having commercially launched HD voice service using VoLTE in 52 countries. Nearly a hundred live VoLTE networks — that’s a lot of VoLTE action. Yet despite all that activity, you can count the international VoLTE roaming deals on your fingers. Those offering VoLTE roaming tend to be those with the most advanced VoLTE networks, and/or are most likely to have customers that cannot fall back to 3G coverage when travelling. Verizon and Japanese operator KDDI have an agreement, as do NTT DoCoMo and KT in South Korea. Korean operator LG U+ has an agreement with Hong Kong’s SmarTone. Beyond this, there is precious little commercial action. It’s hard to put this down to a lack of will. The expressed intention of many operators is to get VoLTE roaming up and running. Conducted in January


2016, an Ovum survey revealed that approximately 80% of respondents expected to have launched domestic VoLTE within the next 12 months, and nearly half expected to have launched international VoLTE interconnection and roaming in the same timeframe. At the time, operators said that VoLTE offered the best quality service for high-ARPU customers at home and abroad, strengthened the operators’ competitive position versus OTT providers, and maintained roaming profitability. However, there certainly were not a bunch of VoLTE roaming launches in 2016, so what is causing the delay? Perhaps one answer lies in another finding of the survey — significant challenges regarding the selection of VoLTE roaming model, with the majority of operators admitting that they didn’t know whether they would implement S8HR (S8 Home Routing) or LBO (Local Break Out) models. So what’s the difference between those models, and why is it a decision that operators relate to their commercial business models? Put simply, S8HR uses the LTE S8 interface for transporting VoLTE call control between the visited and home network as data traffic. It effectively leaves control of the call back in the home network IMS and core elements. This routing can be done through interconnect providers that provide


both the control plane and signalling interconnect and the actual data plane connection. LBO requires local IMS elements, operated by the visited network operator, to provide a direct connection to local services, thereby avoiding the “roundtrip” aspect of media connectivity. It has also been standardised into LTE roaming standards, with policy-to-policy connections on an interface known as S9, rather than the S8 interface that supports home routing. Initial implementations of VoLTE roaming have all been based on S8HR — mainly because it is technically simpler: it does not require IMS in the visited LTE network, and therefore IMS interoperability testing is not required between the home network and each visited network. This significantly simplifies implementation. However, there are limitations to the model. There is no built-in support for features such as lawful intercept, emergency calling or the technology that provides call continuity when a user moves from 4G to 3G coverage — known as SRVCC. Additionally, S8HR introduces uncertainty in the billing and charging model for operators. As it apes the data roaming interconnect model, operators can only rate the voice call on a per-kilobyte-consumed basis. One Tier 1 European operator said to TMN, “Outbound roamers will be retail charged in minutes but we’ll be settling wholesale in kilobytes. Also all calls from outbound roamers appear to originate from our IMS, so we’ll have outbound interconnect costs for customers that call the country they’re in. It can all be done, though.” Meanwhile, at the retail level, LBO

“There certainly were not a bunch of VoLTE roaming launches in 2016, so what is causing the delay?”

allows for consistent charging for customers on all technologies, while S8HR will require enhancements to both IMS and EPC networks to allow this consistent charging. (There is a third option, which is a mix of both, with S8HR for inbound and LBO for outbound. But that then introduces an interworking requirement between inbound and outbound.) This greater capability of LBO is why it was initially the preferred model envisaged by the GSMA for VoLTE roaming, and it is still the one preferred by some operators. One operator confirmed to TMN that it had to choose S8HR first because a major provider of inbound roamers had asked it to do so. But the operator would still prefer to transition to LBO in the medium term, because of its greater feature support and better regard for quality of customer experience. Take this example in the S8HR model, where a call drops due to lack of SRVCC support: “For example we charge a minimum call charge of 1 minute for outbound roamers even if the call only lasted 2 seconds. So all calls dropping due to the lack of SRVCC will result in the customer having to redial on CS and incurring another 1 minute charge. The mitigation is to only partner with networks that have very good 4G coverage.” Another operator TMN spoke to holds a similar, if anything stronger, view. It said, “Only LBO (home routed or visited routed) will allow us to meet regulatory, business and customer experience expectations. Indeed, S8HR will add significant network costs to allow for retail charging, lawful interception, emergency calls and visited radio network protection. In addition, it will not provide customers with the same level of service they have today.” It also has doubts about the charging in S8HR. It said, “With

LBO, the charging will basically be the same as for current circuit-switched voice services, while with S8HR it will probably move to data-based charging, resulting in a significant decrease of business value without changing current charging principles.” In the end, you may ask, do these technical choices really matter? Certainly IPX providers such as Syniverse or iBasis are sure they can provide their services and signalling support for policy, charging and so on under either model — so it’s not about things working or not working. But it does matter in terms of timelines to rollout, and how operators will charge for VoLTE and settle with each other. And it matters in the long term — as we see more 2G and 3G switch-offs, and more devices capable of supporting VoLTE, it will become essential. Here’s why it matters most of all. Voice is a critical service for carriers — absolutely key to consumer perception of quality. Dropped calls and poor quality calls are bad. That is, in fact, even more the case when a customer is travelling. VoIP and VoWiFi apps are a major threat to this aspect of the carrier-customer relationship. International travellers are often high value customers. Are carriers going to let those customers down when they travel, or make them wait more years for easy, cheap, high quality international voice calls? If they do, they might just find that when they finally have the architecture sorted, the customers have moved on.
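The retail/wholesale mismatch the operators describe can be made concrete with a toy rating function. All rates and the codec bearer figure below are hypothetical assumptions, purely to illustrate the minutes-versus-kilobytes settlement gap under S8HR, including the minimum one-minute charge on a call dropped for lack of SRVCC.

```python
import math

# Sketch of the S8HR billing mismatch: retail is rated per started minute
# while wholesale settlement arrives per kilobyte of data. All figures
# below are hypothetical, for illustration only.
AMR_WB_KBPS = 40          # assumed IP-level bearer rate of a VoLTE call
RETAIL_PER_MIN = 0.30     # assumed retail rate per started minute
WHOLESALE_PER_MB = 0.02   # assumed wholesale data rate per MB

def s8hr_margin(call_seconds: float) -> tuple:
    """Return (retail, wholesale, margin) for one home-routed VoLTE call."""
    billed_minutes = max(1, math.ceil(call_seconds / 60))  # 1-min minimum
    retail = billed_minutes * RETAIL_PER_MIN
    megabytes = call_seconds * AMR_WB_KBPS / 8 / 1024
    wholesale = megabytes * WHOLESALE_PER_MB
    return retail, wholesale, retail - wholesale

# A 2-second call dropped for lack of SRVCC still bills a full minute...
print(s8hr_margin(2))
# ...and the redial on circuit-switched (not modelled here) bills another.
print(s8hr_margin(300))
```

The point is not the margin itself but that the two sides of the settlement are denominated in different units, which is what forces operators to rework their charging systems before S8HR can mirror today’s voice commercial model.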




Does the assisted or self-driving, connected car need the cellular network, or vice versa? There’s a temptation in the mobile network industry to assume that networks will be central to self-driving and assisted-driving vehicles, but despite the promises made for 5G, this is far from certain. If you look at Tesla’s most recent demonstration of its self-driving car, the technology is all provided by on-board sensors, radar and cameras — there’s no link back to a network element. That’s not to say that the experience couldn’t be enhanced or assisted by cellular networks, but we should remember that the self-driving car can exist independently of the network. While many current autonomous driving platforms rely on radar, LiDAR and camera arrays to effectively “see,” the network could provide a boost in that it offers 360-degree awareness even in non-line-of-sight conditions such as inclement weather or blind intersections. The mobile industry has been keen to talk up the potential of connected and autonomous vehicles as a use case for 5G, but until recently there were few signs that the automotive industry really knew what it wanted in return from the mobile network. Initiatives, such as they existed, were on a unilateral basis between this car manufacturer and that network equipment vendor. Operators themselves have been notably absent from any trials, save for Deutsche Telekom, which has run trials on a test stretch of Autobahn with Nokia, and, more recently, China Mobile and NTT DoCoMo, which have tested


the capability of MU-MIMO to provide broadband to cars moving at high speed on a race track. It was only in the latter half of 2016 that we saw Audi, BMW, Daimler, Ericsson, Huawei, Intel, Nokia and Qualcomm announce the formation of the “5G Automotive Association”. The association has been formed to “develop, test and promote communications solutions, support standardisation and accelerate commercial availability and global market penetration.” The goal is to address society’s connected mobility and road safety needs with applications such as connected automated driving, ubiquitous access to services and integration into smart cities and intelligent transportation. With 5G and continued strong LTE evolution, which includes Cellular Vehicle-to-everything (C-V2X) communication, the focus of the association is to try to bring together a more common group of requirements from the automotive players, and to agree on network technology selection. Also in mid-2016, BMW Group, Intel, and Mobileye agreed a partnership to make self-driving vehicles and future mobility concepts a reality, with the aim of bringing fully automated cars into production by 2021. The companies have agreed to a set of milestones to deliver fully autonomous cars based on a common reference architecture. In the short term, the companies will demonstrate an autonomous test drive with a highly

automated driving prototype. In 2017, the platform will extend to fleets with extended autonomous test drives.

EXPLAINING V2X Vehicle to vehicle, vehicle to infrastructure and vehicle to cellular connectivity is known under the umbrella term V2X. In 3GPP Release 14, V2X will deliver enhancements to the LTE Direct device-to-device design for proximal direct communications, addressing the unique nature of low-latency, high-speed and high-density vehicle communications. V2X delivers improvements over DSRC (Dedicated Short Range Communications), adding a few additional seconds of alert warning for a safer and more aware driving experience. 3GPP Release 14 also defines C-V2X Network Communications, which uses the existing infrastructure and ubiquitous coverage of LTE networks (including LTE Broadcast and unicast) to offer additional applications and services. Although there is a competing standard for V2X Connectivity,


known as ITS and based on the use of unlicensed spectrum, the 5GAA is firmly on the side of the cellular operators and licensed spectrum standards in LTE and 5G. The 5GAA says, “Given that a future C-V2X system will be able to address these modes — and may be able to do so concurrently — and that combinations of these modes can lead to truly transformational transportation applications, the 5GAA believes it is important to carefully consider C-V2X in tandem with spectrum which may be used. “It is important to recognize that in device-to-device mode (V2V, V2I, V2P) operation, C-V2X does not necessarily require any network infrastructure. The 5GAA therefore firmly believes that the device-to-device modes must be enabled and not prohibited by regulatory frameworks because such modes could also be operated in the 5.9 GHz ITS band without any required subscription or payment.” To be clear, C-V2X would also support V2N applications delivered over traditional, commercially licensed cellular spectrum and would utilise existing cellular networks where other voice and data communications occur. V2N would deliver network assistance and commercial services requiring the involvement of a Mobile Network Operator (MNO), providing access to cloud-based data or information relayed through the cellular network by means of a network slicing architecture for vertical industries. The use of the MNO infrastructure will inherently enhance C-V2X with the data security and privacy of mobile networks and can provide key time-critical network services by means of edge computing. Collectively, the transmission modes of shorter-range direct communications (V2V, V2I, V2P) and longer-range

network-based communications (V2N) comprise what is known as Cellular-V2X. Since V2X must be deployable in the near term and extended into the future, it must offer the high performance necessary to meet the use cases of today, while being future-proof, scalable, based on an interoperable platform and capable of meeting the requirements of the use cases of tomorrow. Indeed, tomorrow’s use cases are important and challenging: V2X will grow to include enhanced concepts in Advanced Driver Assistance Systems (ADAS) where vehicles can cooperate, coordinate and share sensed information, and ultimately V2X will grow into CAD (Connected Automated Driving). As soon as 5G and its corresponding highly reliable, low-latency mission-critical services are available for V2X applications, ADAS and CAD will be significantly enhanced. The inclusion of the mobile network operator for V2N allows the utilisation of spectrum outside ITS.






Cellular-V2X (C-V2X) was initially defined as LTE V2X in 3GPP Release 14 and is designed to operate in several modes:

DEVICE-TO-DEVICE This is Vehicle-to-Vehicle (V2V), Vehicle-to-(Roadway)-Infrastructure (V2I) and Vehicle-to-Pedestrian (V2P) direct communication without necessarily relying on network involvement for scheduling. This mode is analogous to the ad hoc communications paradigm used in 802.11p.

DEVICE-TO-NETWORK This is the V2N solution using traditional cellular links to enable cloud services to be part and parcel of the end-to-end solution.

DEVICE-TO-CELL TOWER This is another V2I communications link which enables network resources and scheduling and utilises existing operator infrastructure. Device-to-cell tower communications constitute at least part of the V2I proposition and are important to end-to-end solutions.
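The three modes above can be summarised in a small data structure. The names and fields below are this sketch’s own shorthand for the article’s descriptions, not 3GPP identifiers.

```python
from dataclasses import dataclass

# Illustrative summary of the C-V2X operating modes described above.
@dataclass(frozen=True)
class CV2XMode:
    name: str
    links: tuple              # which V2X link types the mode carries
    needs_network: bool       # requires MNO infrastructure/scheduling
    needs_subscription: bool  # requires a cellular subscription

MODES = (
    CV2XMode("device-to-device", ("V2V", "V2I", "V2P"),
             needs_network=False, needs_subscription=False),
    CV2XMode("device-to-cell-tower", ("V2I",),
             needs_network=True, needs_subscription=True),
    CV2XMode("device-to-network", ("V2N",),
             needs_network=True, needs_subscription=True),
)

def modes_for(link: str) -> list:
    """Which modes can carry a given link type, e.g. 'V2I'."""
    return [m.name for m in MODES if link in m.links]

print(modes_for("V2I"))  # V2I can be direct or routed via the cell tower
```

The table makes the 5GAA’s regulatory point visible: only the device-to-device mode works with neither network infrastructure nor a subscription, which is why the association argues it must not be prohibited in the 5.9 GHz ITS band.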





Japan led the world in mobile, but its early innovation proved to be largely unreplicable. Now, with LTE Advanced networks deployed nationwide, 5G on the horizon and Apple dominating the smartphone market, Japan is still leading. But it looks a lot more like the rest of the world. 50 TMNQUARTERLY

A host of reasons made Japan a market on its own: historically low churn rates, different network standards, different spectrum bands, its own device ecosystem and its own data protocols. Although a very successful mobile market, it was not one the rest of the world could replicate. The market was famed for being far ahead in adoption of mobile services such as gaming, payments and direct carrier billing for digital and physical goods, and in the use of mobile to access walled-garden internet-based services. Paying for goods such as fashion items, transport tickets and groceries via carrier billing is relatively normal in Japan. An LTE launch in 2010 made Japan one of the first major markets with LTE, and gave it a level of interoperability with the world it had not previously had. The arrival of Apple devices on networks eroded the “Galapagos” effect still further, with the Japanese name brands — Sony, Sharp, NEC — that had

been reliant on their relationships with major operators being given far more competition within carrier ranges. Japan continues to be an early adopter of network technology. Japanese networks have introduced many LTE features in advance of other countries, including coordinated C-RAN, small cells, carrier aggregation and VoLTE. Softbank is introducing LTE-Broadcast and LoRaWAN IoT networks. The operators have to deal with incredible density and capacity demand in their city areas, whilst balancing remote connectivity. Mobile is also a critical enabler for earthquake and tsunami warnings. LTE innovation has shifted to early work on control and user plane separation, stateless operation for SDNs and virtualisation of the core network. NTT and Ericsson completed a joint Proof of Concept (PoC) of dynamic network slicing technology for 5G core networks in June 2016.






Japan's mobile network timeline:

- NTT launches carphone service in Tokyo
- NTT starts cellular telephone service
- Kansai launches TACS system
- NTT DoCoMo launches 800MHz PDC
- NTT DoCoMo launches 1.5GHz PDC / IDO starts services based on TACS system / Digital TU-KA Kyushu launches 1.5GHz band PDC services
- DDI Cellular Group starts "cdmaOne" service
- NTT DoCoMo launches i-mode data services
- DDI and IDO close TACS services
- NTT DoCoMo launches 3G services on W-CDMA / KDDI merges with au / J-Phone formed from a series of mergers of regional companies
- KDDI and Okinawa Cellular start CDMA2000 1x services / J-Phone starts 3G services using W-CDMA and international roaming
- NTT DoCoMo provides "i-mode FeliCa" data service
- Vodafone joins Softbank Group and launches "3G high speed" / KDDI and Okinawa Cellular launch EV-DO Rev.A
- eMobile enters market with licence in 1.7GHz band
- NTT DoCoMo launches LTE network, with 75Mbps peak rates, under the Xi brand
- Softbank launches 110Mbps LTE network / eAccess launches LTE / KDDI launches LTE services / NTT DoCoMo closes PDC network
- Softbank completes acquisition of US carrier Sprint / NTT DoCoMo rolls out 150Mbps Xi LTE service / KDDI enables LTE international roaming
- NTT DoCoMo and KDDI both launch VoLTE (Voice over LTE) service
- NTT DoCoMo begins providing LTE-Advanced services under the name "PREMIUM 4G" with a maximum downlink of 225Mbps
- Softbank buys UK processor giant ARM


In the PoC, a slice-management function and network slices based on service requirements were created autonomously, enabling widely varying services to be delivered simultaneously via multiple logical networks. The PoC showed how 5G services could be connected flexibly between networks according to set policies, in order to meet specific service requirements for latency, security or capacity. Existing networks cannot fully optimise themselves on a per-service basis, and therefore require significant improvement to ensure simultaneous delivery of the versatile services 5G will enable. DoCoMo designed the network slice creation and selection functions, while Ericsson developed the technologies for network slice lifecycle and service management, and implemented the platform for governance of services and network slices. Alongside this has been a major push into 5G research. KDDI and Softbank are both targeting the 2020 Olympic Games as a key date to have viable, commercial 5G services up and running. DoCoMo

in particular has been public in its encouragement of vendor R&D in 5G, naming the vendors it will work with. Softbank has played catch-up of late, trialling Massive MIMO in its live TDD LTE network, for example. And KDDI has said that it will look at beamforming and MU-MIMO to launch a fixed wireless service using 5G wireless. From mid-2017 NTT DoCoMo will begin delivering trial environments for 5G mobile communications in Japan, working with partners in industries such as automobiles, railways and broadcasting. The 5G Trial Sites, which will enable customers to experience services and content leveraging 5G technology, are the latest step toward the commercial 5G system that DoCoMo expects to launch in 2020. The initial 5G Trial Sites will be offered mainly in two districts of Tokyo, the Odaiba waterfront and Tokyo SKYTREE TOWN. The sites will use Ericsson's 5G New Radio technology and Intel's 5G client devices. For communications, DoCoMo plans to utilise the 28GHz frequency

band, one of the candidate bands the Ministry of Internal Affairs and Communications is considering designating for commercial 5G networks in Japan. Another trial, this one with Samsung Electronics, achieved a data speed of more than 2.5Gbps to a mobile device in a vehicle travelling at 150km/h, verifying the feasibility of stable connectivity for 5G mobile devices in fast-moving trains. The trial took place on November 7 at Fuji Speedway, a motorsport racing circuit in Shizuoka Prefecture, Japan. Transmissions were conducted in the 28GHz high-frequency band. Until then, no test had achieved a successful wireless data transmission to such a fast-moving device, due to the large path loss of high-frequency radio signals. In this trial, however, the problem was overcome with massive multiple-input multiple-output (MIMO) technologies incorporating beamforming, which concentrates radio waves in a specific direction, and beam tracking, which adjusts the beam according to the fast-moving device's location. In a separate undertaking, DoCoMo











conducted an outdoor data-transmission trial with Huawei from October 3 to 26. It was carried out in a field measuring 100,000 square metres, equivalent to 12 soccer pitches, in the Minato Mirai 21 waterfront district of Yokohama. The trial involved 23 simultaneously connected mobile devices and achieved a cumulative 11.29Gbps of data throughput with latency below 0.5 milliseconds in the 4.5GHz frequency band. The trial combined multi-user MIMO (MU-MIMO) technology for simultaneous multiple access with a precoding algorithm that optimises signals for maximum performance while limiting inter-user interference. It achieved a MU-MIMO transmission of a maximum 79.82bps/Hz/cell, 1.8 times more efficient than an outdoor trial conducted in China in November 2015. This range of 5G activity illustrates that Japan continues to lead wireless R&D. Now, however, that research is of truly global significance.
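Some back-of-the-envelope arithmetic helps put these trial figures in context: why a 28GHz link leans on massive-MIMO beamforming gain, and what a per-cell spectral efficiency figure implies for throughput. The sketch below is illustrative only; the 2GHz comparison frequency, the 256-element array size and the 100MHz channel bandwidth are assumptions for the example, not parameters of the DoCoMo trials.

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB for distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

def array_gain_db(n_elements):
    """Ideal coherent beamforming gain of an n-element antenna array."""
    return 10 * math.log10(n_elements)

# At any given distance, moving from 2GHz up to 28GHz adds roughly 23dB
# of free-space path loss...
extra_loss_db = fspl_db(0.2, 28.0) - fspl_db(0.2, 2.0)

# ...which an (assumed) 256-element massive-MIMO array can broadly recover
# through beamforming gain, provided the beam tracks the device.
recovered_db = array_gain_db(256)

# The quoted 79.82 bps/Hz/cell, applied to an assumed 100MHz channel,
# corresponds to roughly 8Gbps of per-cell throughput.
cell_throughput_gbps = 79.82 * 100e6 / 1e9

print(round(extra_loss_db, 1), round(recovered_db, 1),
      round(cell_throughput_gbps, 2))  # 22.9 24.1 7.98
```

The symmetry between the first two numbers is the whole rationale for millimetre-wave massive MIMO: the array gain buys back approximately what the higher frequency costs in path loss.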


Provides communications solutions for a wide variety of applications. At MWC it will show an LTE solution for WISP and public safety markets.



Provides 3G/4G/Pre-5G RAN and core network node testing. Its high-density load testers have dominated the Japanese market since they were selected for the first ever LTE trial in Japan.


The world’s fifth largest IT service provider, Fujitsu is a key partner to many major NEPs and mobile operators globally. Most recently the company has been a member of various 5G and NFV research programmes, providing its knowledge of virtualisation platforms.

NEC has been best known for its closeness to Japanese operators, providing the end-to-end network equipment for the country’s mobile expansion. The vendor is still highly active in microwave backhaul, small cells and in 5G research areas such as MIMO. More recently the company’s acquisition of software company Netcracker has given it an OSS and network orchestration capability.


Nihon Dengyo Kosaku, known as DENGYO, specialises in manufacturing antennas and filters, including mobile base station antennas and indoor antennas. Filter products include band pass filters, multiplexers, antenna combiners and directional couplers.





Mobile has become fundamental to our everyday lives. It has inextricably changed how we communicate, interact, work and play as individuals. It is transforming entire industries, bringing new levels of productivity and efficiency to enterprises. Over three decades, mobile has evolved from an emerging communications technology to a phenomenon that is now at the foundation of everything we do. How can we describe the role of mobile in today’s world?

Elemental. Mobile is revolutionary, dynamic, ever adapting. It’s the force behind every emerging innovation, every forward-thinking enterprise. Join us in Barcelona for Mobile World Congress 2017, where the world comes together to showcase, celebrate and advance mobile.



Find faults wherever they’re hiding.

Because “good enough” mobile network testing just isn’t good enough anymore. As you transform your next-gen networks, you need reliable test data and clear insights into their performance so you can find and fix faults before subscribers are impacted. Let us help you see what your competitors don’t.

We’re the global network test, data and analytics experts, and we’ve got your success in sight. Meet our experts at Mobile World Congress Booth 6K36 Hall 6

TMN Quarterly Issue 17  

The Mobile World Congress Issue: Part II Edge Networks, IoT, Network Automation, LTE for Emergency Services
